* [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery
@ 2017-11-08  6:00 Peter Xu
  2017-11-08  6:00 ` [Qemu-devel] [PATCH v4 01/32] migration: better error handling with QEMUFile Peter Xu
                   ` (32 more replies)
  0 siblings, 33 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Tree is pushed here for better reference and testing:
  github.com/xzpeter postcopy-recovery-support

Please review, thanks.

v4:
- fix two compile errors that patchew reported
- for QMP: do s/2.11/2.12/g
- fix migrate-incoming logic to be more strict

v3:
- add r-bs correspondingly
- in ram_load_postcopy() capture error if postcopy_place_page() failed
  [Dave]
- remove "break" if there is a "goto" before that [Dave]
- ram_dirty_bitmap_reload(): use PRIx64 where needed, add some more
  print sizes [Dave]
- remove RAMState.ramblock_to_sync, instead use local counter [Dave]
- init tag in tcp_start_incoming_migration() [Dave]
- more traces when transmitting the recv bitmap [Dave]
- postcopy_pause_incoming(): do shutdown before taking rp lock [Dave]
- add one more patch to postpone the state switch of postcopy-active [Dave]
- refactor the migrate_incoming handling according to the email
  discussion [Dave]
- add manual trigger to pause postcopy (two new patches added to
  introduce "migrate-pause" command for QMP/HMP). [Dave]

v2 note (the coarse-grained changelog):

- I appended the migrate-incoming re-use series onto this one, since
  that series depends on this one, and it really exists for the recovery

- I haven't yet added the per-monitor thread related patches into this
  one (actually I added them, then removed them), basically the patches
  to set up "need-bql"="false" - the solution for the monitor hang
  issue is still under discussion in the other thread.  I'll add them
  in when that settles.

- Quite a lot of other changes and additions in response to the v1
  review comments.  I think I settled all the comments, but God knows
  better.

Feel free to skip this ugly, overlong changelog (it's too long to be
meaningful, I'm afraid).

Tree: github.com/xzpeter postcopy-recovery-support

v2:
- rebased onto Alexey's received bitmap v9
- add Dave's r-bs for patches: 2/5/6/8/9/13/14/15/16/20/21
- patch 1: use target page size to calc bitmap [Dave]
- patch 3: move trace_*() after EINTR check [Dave]
- patch 4: dropped since I can use bitmap_complement() [Dave]
- patch 7: check file error right after data is read in both
  qemu_loadvm_section_start_full() and qemu_loadvm_section_part_end(),
  meanwhile also check in check_section_footer() [Dave]
- patch 8/9: fix error_report/commit message in both patches [Dave]
- patch 10: dropped (new parameter "x-postcopy-fast")
- patch 11: split the "postcopy-paused" patch into two, one to
  introduce the new state, the other to implement the logic. Also,
  print something when paused [Dave]
- patch 17: removed do_resume label, introduced migration_prepare()
  [Dave]
- patch 18: removed do_pause label using a new loop [Dave]
- patch 20: removed incorrect comment [Dave]
- patch 21: use 256B buffer in qemu_savevm_send_recv_bitmap(), add
  trace in loadvm_handle_recv_bitmap() [Dave]
- patch 22: fix MIG_RP_MSG_RECV_BITMAP for (1) endianness (2) 32/64bit
  machines. More info in the commit message update.
- patch 23: add one check on migration state [Dave]
- patch 24: use macro instead of magic 1 [Dave]
- patch 26: use more trace_*() instead of one, and use one sem to
  replace mutex+cond. [Dave]
- move sem init/destroy into migration_instance_init() and
  migration_instance_finalize (new function after rebase).
- patch 29: squashed most of this patch into:
  "migration: implement "postcopy-pause" src logic" [Dave]
- split the two fix patches out of the series
- fixed two places where I misused "wake/woke/woken". [Dave]
- add new patch "bitmap: provide to_le/from_le helpers" to solve the
  bitmap endianness issue [Dave]
- appended the migrate_incoming series to this series, since it
  depends on the paused state.  Use explicit g_source_remove() for
  listening ports [Dan]

FUTURE TODO LIST
- support migrate_cancel during PAUSED/RECOVER state
- when anything goes wrong during PAUSED/RECOVER, switch back to
  PAUSED state on both sides

As we all know, postcopy migration risks losing the VM if the network
breaks during the migration. This series tries to solve the problem by
allowing the migration to pause at the failure point, and to recover
after the link is reconnected.

There was existing work on this issue from Md Haris Iqbal:

https://lists.nongnu.org/archive/html/qemu-devel/2016-08/msg03468.html

This series is a complete rework of the issue, based on Alexey
Perevalov's received bitmap v8 series:

https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg06401.html

Two new states are added to support the migration (used on both
sides):

  MIGRATION_STATUS_POSTCOPY_PAUSED
  MIGRATION_STATUS_POSTCOPY_RECOVER

The MIGRATION_STATUS_POSTCOPY_PAUSED state is entered when a network
failure is detected. It is a phase we can stay in for a long time: we
remain there from the moment the failure is detected until a recovery
is triggered.  In this state, all the migration threads (on the
source: the send thread and return-path thread; on the destination:
the ram-load thread and page-fault thread) are halted.

The MIGRATION_STATUS_POSTCOPY_RECOVER state is short-lived. When a
recovery is triggered, both the source and destination VM jump into
this state and do whatever is needed to prepare for the recovery
(currently the most important step is synchronizing the dirty bitmap;
please see the commit messages for more information). After the
preparation is done, the source performs the final handshake with the
destination, then both sides switch back to
MIGRATION_STATUS_POSTCOPY_ACTIVE again.
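The state flow described above can be condensed into a tiny model. This is a hypothetical sketch for illustration only, not QEMU's migrate_set_state() (which validates transitions against the full MigrationStatus enum); it captures just the three phases discussed here:

```c
/* Hypothetical, condensed model of the postcopy phases discussed
 * above; the real transitions go through migrate_set_state(). */
typedef enum {
    PHASE_POSTCOPY_ACTIVE,
    PHASE_POSTCOPY_PAUSED,
    PHASE_POSTCOPY_RECOVER,
} PostcopyPhase;

/* Network failure detected: an active postcopy migration pauses. */
static PostcopyPhase on_network_failure(PostcopyPhase s)
{
    return s == PHASE_POSTCOPY_ACTIVE ? PHASE_POSTCOPY_PAUSED : s;
}

/* User triggered a recovery; only meaningful while paused. */
static PostcopyPhase on_recover_request(PostcopyPhase s)
{
    return s == PHASE_POSTCOPY_PAUSED ? PHASE_POSTCOPY_RECOVER : s;
}

/* Final handshake done: back to normal postcopy. */
static PostcopyPhase on_handshake_done(PostcopyPhase s)
{
    return s == PHASE_POSTCOPY_RECOVER ? PHASE_POSTCOPY_ACTIVE : s;
}
```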

New commands/messages are defined as well to support this:

MIG_CMD_RECV_BITMAP & MIG_RP_MSG_RECV_BITMAP are introduced for
delivering received bitmaps

MIG_CMD_RESUME & MIG_RP_MSG_RESUME_ACK are introduced to do the final
handshake of postcopy recovery.
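These commands and messages ride the same minimal framing that patch 1's hunks show: a big-endian 16-bit type followed by a 16-bit length. A hypothetical standalone illustration of that header (QEMU does this with qemu_put_be16()/qemu_get_be16() on a QEMUFile, not with helpers like these):

```c
#include <stdint.h>

/* Pack a (type, len) message header into 4 big-endian bytes, the
 * way two qemu_put_be16() calls would emit them on the wire. */
static void mig_header_pack(uint8_t out[4], uint16_t type, uint16_t len)
{
    out[0] = (uint8_t)(type >> 8);
    out[1] = (uint8_t)(type & 0xff);
    out[2] = (uint8_t)(len >> 8);
    out[3] = (uint8_t)(len & 0xff);
}

/* Inverse: what source_return_path_thread() effectively does with
 * its two qemu_get_be16() calls. */
static void mig_header_unpack(const uint8_t in[4],
                              uint16_t *type, uint16_t *len)
{
    *type = (uint16_t)((in[0] << 8) | in[1]);
    *len  = (uint16_t)((in[2] << 8) | in[3]);
}
```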

Here are some more details on how the whole failure/recovery routine
works:

- start migration
- ... (switch from precopy to postcopy)
- both sides are in "postcopy-active" state
- ... (failure happened, e.g., network unplugged)
- both sides switch to "postcopy-paused" state
  - all the migration threads are stopped on both sides
- ... (both VMs hanged)
- ... (user triggers recovery using "migrate -r -d tcp:HOST:PORT" on
  source side, "-r" means "recover")
- both sides switch to "postcopy-recover" state
  - on source: send-thread and return-path-thread will be woken up
  - on dest: ram-load-thread woken up, fault-thread still paused
- source calls the new SaveVMHandlers hook resume_prepare()
  (currently, only ram provides the hook):
  - ram_resume_prepare(): for each ramblock, fetch the received bitmap by:
    - src sends MIG_CMD_RECV_BITMAP to dst
    - dst replies MIG_RP_MSG_RECV_BITMAP to src, with bitmap data
      - src uses the received bitmap to rebuild the dirty bitmap
- source does the final handshake with destination
  - src sends MIG_CMD_RESUME to dst, telling "src is ready"
    - when dst receives the command, the fault thread will be woken up;
      meanwhile, dst switches back to "postcopy-active"
  - dst sends MIG_RP_MSG_RESUME_ACK to src, telling "dst is ready"
    - when src receives the ack, the state switches to "postcopy-active"
- postcopy migration continues
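The bitmap step above is the core of the recovery: any page the destination has not confirmed as received must be marked dirty again so it gets resent. A hypothetical word-at-a-time sketch of that rule (the real ram.c code also handles endianness and size checks, as the changelog notes):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper: dirty and recved are word bitmaps where one
 * uint64_t covers 64 guest pages.  Any page the destination has NOT
 * received must be re-marked dirty so the source resends it. */
static void rebuild_dirty_bitmap(uint64_t *dirty, const uint64_t *recved,
                                 size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        dirty[i] |= ~recved[i];
    }
}
```

Pages already received stay dirty only if they were dirtied again after being sent; pages never received always end up dirty.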

Testing:

As I said, it's still an extremely simple test. I used socat to create
a socket bridge:

  socat tcp-listen:6666 tcp-connect:localhost:5555 &

Then do the migration via the bridge. I emulated the network failure
by killing the socat process (bridge down), then tried to recover the
migration using the other channel (the default dst channel). It looks
like:

        port:6666    +------------------+
        +----------> | socat bridge [1] |-------+
        |            +------------------+       |
        |         (Original channel)            |
        |                                       | port: 5555
     +---------+  (Recovery channel)            +--->+---------+
     | src VM  |------------------------------------>| dst VM  |
     +---------+                                     +---------+

Known issues/notes:

- currently the destination listening port still cannot change. I.e.,
  for simplicity, the recovery has to use the same port on the
  destination. (On the source, we can specify a new URL.)

- the patch "migration: let dst listen on port always" is still
  hacky; it just keeps the incoming accept open forever for now...

- some migration numbers might still be inaccurate, like the total
  migration time, etc. (But I don't really think that matters much
  now.)

- the patches are very lightly tested.

- Dave reported one problem that may hang the destination main loop
  thread (when one vcpu thread holds the BQL) and the rest. I haven't
  encountered it yet, but that does not mean this series can survive it.

- other potential issues that I may have forgotten or not noticed...

Anyway, the work is still at a preliminary stage. Any suggestions and
comments are greatly welcome.  Thanks.

Peter Xu (32):
  migration: better error handling with QEMUFile
  migration: reuse mis->userfault_quit_fd
  migration: provide postcopy_fault_thread_notify()
  migration: new postcopy-pause state
  migration: implement "postcopy-pause" src logic
  migration: allow dst vm pause on postcopy
  migration: allow src return path to pause
  migration: allow send_rq to fail
  migration: allow fault thread to pause
  qmp: hmp: add migrate "resume" option
  migration: pass MigrationState to migrate_init()
  migration: rebuild channel on source
  migration: new state "postcopy-recover"
  migration: wakeup dst ram-load-thread for recover
  migration: new cmd MIG_CMD_RECV_BITMAP
  migration: new message MIG_RP_MSG_RECV_BITMAP
  migration: new cmd MIG_CMD_POSTCOPY_RESUME
  migration: new message MIG_RP_MSG_RESUME_ACK
  migration: introduce SaveVMHandlers.resume_prepare
  migration: synchronize dirty bitmap for resume
  migration: setup ramstate for resume
  migration: final handshake for the resume
  migration: free SocketAddress where allocated
  migration: return incoming task tag for sockets
  migration: return incoming task tag for exec
  migration: return incoming task tag for fd
  migration: store listen task tag
  migration: allow migrate_incoming for paused VM
  migration: init dst in migration_object_init too
  migration: delay the postcopy-active state switch
  migration, qmp: new command "migrate-pause"
  migration, hmp: new command "migrate_pause"

 hmp-commands.hx              |  21 +-
 hmp.c                        |  13 +-
 hmp.h                        |   1 +
 include/migration/register.h |   2 +
 migration/exec.c             |  20 +-
 migration/exec.h             |   2 +-
 migration/fd.c               |  20 +-
 migration/fd.h               |   2 +-
 migration/migration.c        | 609 ++++++++++++++++++++++++++++++++++++++-----
 migration/migration.h        |  26 +-
 migration/postcopy-ram.c     | 110 ++++++--
 migration/postcopy-ram.h     |   2 +
 migration/ram.c              | 252 +++++++++++++++++-
 migration/ram.h              |   3 +
 migration/savevm.c           | 240 ++++++++++++++++-
 migration/savevm.h           |   3 +
 migration/socket.c           |  44 ++--
 migration/socket.h           |   4 +-
 migration/trace-events       |  23 ++
 qapi/migration.json          |  34 ++-
 20 files changed, 1283 insertions(+), 148 deletions(-)

-- 
2.13.6


* [Qemu-devel] [PATCH v4 01/32] migration: better error handling with QEMUFile
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
@ 2017-11-08  6:00 ` Peter Xu
  2017-11-30 10:24   ` Dr. David Alan Gilbert
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 02/32] migration: reuse mis->userfault_quit_fd Peter Xu
                   ` (31 subsequent siblings)
  32 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

If postcopy goes down for some reason, we can always see this on the dst:

  qemu-system-x86_64: RP: Received invalid message 0x0000 length 0x0000

However, in most cases that's not the real issue. The problem is that
qemu_get_be16() has no way to show whether the returned data is valid
or not, and we _always_ assume it is valid. That's possibly not wise.

The best approach to solve this would be to refactor the QEMUFile
interface so the APIs can return an error if there is one. However,
that needs quite a bit of work and testing. For now, let's explicitly
check validity before using the data, in all places that call
qemu_get_*().

This patch tries to fix most of the cases I can see. Only with this
can we make sure we are processing valid data, and only with this can
we capture channel-down events correctly.
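The pattern being applied can be modeled with a toy reader: each read latches a sticky error (like qemu_file_get_error()), and the caller must check it before trusting the returned value. This is a simplified analogue for illustration, not QEMUFile's actual implementation:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    const uint8_t *buf;
    size_t len, pos;
    int err;               /* sticky error, like qemu_file_get_error() */
} ToyFile;

/* Returns 0 on a short read, but 0 is also a valid value: only the
 * err field tells the caller whether the data can be trusted. */
static uint16_t toy_get_be16(ToyFile *f)
{
    if (f->err) {
        return 0;
    }
    if (f->pos + 2 > f->len) {
        f->err = -5;       /* an -EIO-like error code */
        return 0;
    }
    uint16_t v = (uint16_t)((f->buf[f->pos] << 8) | f->buf[f->pos + 1]);
    f->pos += 2;
    return v;
}
```

The caller-side discipline is then "read, check the error, only then use the value", which is exactly what each hunk below adds after the qemu_get_*() calls.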

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c |  5 +++++
 migration/ram.c       | 26 ++++++++++++++++++++++----
 migration/savevm.c    | 40 ++++++++++++++++++++++++++++++++++++++--
 3 files changed, 65 insertions(+), 6 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index c0206023d7..eae34d0524 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1708,6 +1708,11 @@ static void *source_return_path_thread(void *opaque)
         header_type = qemu_get_be16(rp);
         header_len = qemu_get_be16(rp);
 
+        if (qemu_file_get_error(rp)) {
+            mark_source_rp_bad(ms);
+            goto out;
+        }
+
         if (header_type >= MIG_RP_MSG_MAX ||
             header_type == MIG_RP_MSG_INVALID) {
             error_report("RP: Received invalid message 0x%04x length 0x%04x",
diff --git a/migration/ram.c b/migration/ram.c
index 8620aa400a..960c726ff2 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2687,7 +2687,7 @@ static int ram_load_postcopy(QEMUFile *f)
     void *last_host = NULL;
     bool all_zero = false;
 
-    while (!ret && !(flags & RAM_SAVE_FLAG_EOS)) {
+    while (!(flags & RAM_SAVE_FLAG_EOS)) {
         ram_addr_t addr;
         void *host = NULL;
         void *page_buffer = NULL;
@@ -2696,6 +2696,16 @@ static int ram_load_postcopy(QEMUFile *f)
         uint8_t ch;
 
         addr = qemu_get_be64(f);
+
+        /*
+         * If there is a QEMUFile error, we should stop here, since
+         * "addr" may be invalid
+         */
+        ret = qemu_file_get_error(f);
+        if (ret) {
+            break;
+        }
+
         flags = addr & ~TARGET_PAGE_MASK;
         addr &= TARGET_PAGE_MASK;
 
@@ -2776,6 +2786,13 @@ static int ram_load_postcopy(QEMUFile *f)
             error_report("Unknown combination of migration flags: %#x"
                          " (postcopy mode)", flags);
             ret = -EINVAL;
+            break;
+        }
+
+        /* Detect for any possible file errors */
+        if (qemu_file_get_error(f)) {
+            ret = qemu_file_get_error(f);
+            break;
         }
 
         if (place_needed) {
@@ -2789,9 +2806,10 @@ static int ram_load_postcopy(QEMUFile *f)
                 ret = postcopy_place_page(mis, place_dest,
                                           place_source, block);
             }
-        }
-        if (!ret) {
-            ret = qemu_file_get_error(f);
+
+            if (ret) {
+                break;
+            }
         }
     }
 
diff --git a/migration/savevm.c b/migration/savevm.c
index 4a88228614..1da0255cd7 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1765,6 +1765,11 @@ static int loadvm_process_command(QEMUFile *f)
     cmd = qemu_get_be16(f);
     len = qemu_get_be16(f);
 
+    /* Check validity before continuing to process the cmds */
+    if (qemu_file_get_error(f)) {
+        return qemu_file_get_error(f);
+    }
+
     trace_loadvm_process_command(cmd, len);
     if (cmd >= MIG_CMD_MAX || cmd == MIG_CMD_INVALID) {
         error_report("MIG_CMD 0x%x unknown (len 0x%x)", cmd, len);
@@ -1830,6 +1835,7 @@ static int loadvm_process_command(QEMUFile *f)
  */
 static bool check_section_footer(QEMUFile *f, SaveStateEntry *se)
 {
+    int ret;
     uint8_t read_mark;
     uint32_t read_section_id;
 
@@ -1840,6 +1846,13 @@ static bool check_section_footer(QEMUFile *f, SaveStateEntry *se)
 
     read_mark = qemu_get_byte(f);
 
+    ret = qemu_file_get_error(f);
+    if (ret) {
+        error_report("%s: Read section footer failed: %d",
+                     __func__, ret);
+        return false;
+    }
+
     if (read_mark != QEMU_VM_SECTION_FOOTER) {
         error_report("Missing section footer for %s", se->idstr);
         return false;
@@ -1875,6 +1888,13 @@ qemu_loadvm_section_start_full(QEMUFile *f, MigrationIncomingState *mis)
     instance_id = qemu_get_be32(f);
     version_id = qemu_get_be32(f);
 
+    ret = qemu_file_get_error(f);
+    if (ret) {
+        error_report("%s: Failed to read instance/version ID: %d",
+                     __func__, ret);
+        return ret;
+    }
+
     trace_qemu_loadvm_state_section_startfull(section_id, idstr,
             instance_id, version_id);
     /* Find savevm section */
@@ -1922,6 +1942,13 @@ qemu_loadvm_section_part_end(QEMUFile *f, MigrationIncomingState *mis)
 
     section_id = qemu_get_be32(f);
 
+    ret = qemu_file_get_error(f);
+    if (ret) {
+        error_report("%s: Failed to read section ID: %d",
+                     __func__, ret);
+        return ret;
+    }
+
     trace_qemu_loadvm_state_section_partend(section_id);
     QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
         if (se->load_section_id == section_id) {
@@ -1989,8 +2016,14 @@ static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
     uint8_t section_type;
     int ret = 0;
 
-    while ((section_type = qemu_get_byte(f)) != QEMU_VM_EOF) {
-        ret = 0;
+    while (true) {
+        section_type = qemu_get_byte(f);
+
+        if (qemu_file_get_error(f)) {
+            ret = qemu_file_get_error(f);
+            break;
+        }
+
         trace_qemu_loadvm_state_section(section_type);
         switch (section_type) {
         case QEMU_VM_SECTION_START:
@@ -2014,6 +2047,9 @@ static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
                 goto out;
             }
             break;
+        case QEMU_VM_EOF:
+            /* This is the end of migration */
+            goto out;
         default:
             error_report("Unknown savevm section type %d", section_type);
             ret = -EINVAL;
-- 
2.13.6


* [Qemu-devel] [PATCH v4 02/32] migration: reuse mis->userfault_quit_fd
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
  2017-11-08  6:00 ` [Qemu-devel] [PATCH v4 01/32] migration: better error handling with QEMUFile Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 03/32] migration: provide postcopy_fault_thread_notify() Peter Xu
                   ` (30 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

It was previously only used for quitting the page fault thread. Let it
be something more useful: now we can use it to notify the page fault
thread to wake up (for any reason), and it only means "quit" if
fault_thread_quit is set.

Since we changed what it does, rename it to userfault_event_fd.
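The eventfd mechanics behind this can be sketched standalone: a waker writes an increment to bump the counter, and the sleeper's read consumes (and resets) it. A minimal Linux-only sketch with names invented for illustration (the patch's real code lives in postcopy-ram.c and wraps the fd in a poll loop):

```c
#include <sys/eventfd.h>
#include <unistd.h>
#include <stdint.h>
#include <stdbool.h>

/* Post one wake event; returns false if the 8-byte write failed. */
static bool toy_notify(int efd)
{
    uint64_t one = 1;
    return write(efd, &one, sizeof(one)) == sizeof(one);
}

/* Consume all pending wake events; returns the accumulated counter
 * (eventfd reads reset the counter to zero in non-semaphore mode). */
static uint64_t toy_consume(int efd)
{
    uint64_t val = 0;
    if (read(efd, &val, sizeof(val)) != sizeof(val)) {
        return 0;
    }
    return val;
}
```

Whether a wakeup means "quit" is carried out-of-band in the patch, via the new fault_thread_quit flag, rather than in the counter value.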

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.h    |  6 ++++--
 migration/postcopy-ram.c | 29 ++++++++++++++++++++---------
 2 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/migration/migration.h b/migration/migration.h
index 663415fe48..6d36400975 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -36,6 +36,8 @@ struct MigrationIncomingState {
     bool           have_fault_thread;
     QemuThread     fault_thread;
     QemuSemaphore  fault_thread_sem;
+    /* Set this when we want the fault thread to quit */
+    bool           fault_thread_quit;
 
     bool           have_listen_thread;
     QemuThread     listen_thread;
@@ -43,8 +45,8 @@ struct MigrationIncomingState {
 
     /* For the kernel to send us notifications */
     int       userfault_fd;
-    /* To tell the fault_thread to quit */
-    int       userfault_quit_fd;
+    /* To notify the fault_thread to wake up, e.g., when we need to quit */
+    int       userfault_event_fd;
     QEMUFile *to_src_file;
     QemuMutex rp_mutex;    /* We send replies from multiple threads */
     void     *postcopy_tmp_page;
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index bec6c2c66b..9ad4f20f82 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -387,17 +387,18 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
          * currently be at 0, we're going to increment it to 1
          */
         tmp64 = 1;
-        if (write(mis->userfault_quit_fd, &tmp64, 8) == 8) {
+        atomic_set(&mis->fault_thread_quit, 1);
+        if (write(mis->userfault_event_fd, &tmp64, 8) == 8) {
             trace_postcopy_ram_incoming_cleanup_join();
             qemu_thread_join(&mis->fault_thread);
         } else {
             /* Not much we can do here, but may as well report it */
-            error_report("%s: incrementing userfault_quit_fd: %s", __func__,
+            error_report("%s: incrementing userfault_event_fd: %s", __func__,
                          strerror(errno));
         }
         trace_postcopy_ram_incoming_cleanup_closeuf();
         close(mis->userfault_fd);
-        close(mis->userfault_quit_fd);
+        close(mis->userfault_event_fd);
         mis->have_fault_thread = false;
     }
 
@@ -520,7 +521,7 @@ static void *postcopy_ram_fault_thread(void *opaque)
         pfd[0].fd = mis->userfault_fd;
         pfd[0].events = POLLIN;
         pfd[0].revents = 0;
-        pfd[1].fd = mis->userfault_quit_fd;
+        pfd[1].fd = mis->userfault_event_fd;
         pfd[1].events = POLLIN; /* Waiting for eventfd to go positive */
         pfd[1].revents = 0;
 
@@ -530,8 +531,18 @@ static void *postcopy_ram_fault_thread(void *opaque)
         }
 
         if (pfd[1].revents) {
-            trace_postcopy_ram_fault_thread_quit();
-            break;
+            uint64_t tmp64 = 0;
+
+            /* Consume the signal */
+            if (read(mis->userfault_event_fd, &tmp64, 8) != 8) {
+                /* Nothing obviously nicer than posting this error. */
+                error_report("%s: read() failed", __func__);
+            }
+
+            if (atomic_read(&mis->fault_thread_quit)) {
+                trace_postcopy_ram_fault_thread_quit();
+                break;
+            }
         }
 
         ret = read(mis->userfault_fd, &msg, sizeof(msg));
@@ -610,9 +621,9 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
     }
 
     /* Now an eventfd we use to tell the fault-thread to quit */
-    mis->userfault_quit_fd = eventfd(0, EFD_CLOEXEC);
-    if (mis->userfault_quit_fd == -1) {
-        error_report("%s: Opening userfault_quit_fd: %s", __func__,
+    mis->userfault_event_fd = eventfd(0, EFD_CLOEXEC);
+    if (mis->userfault_event_fd == -1) {
+        error_report("%s: Opening userfault_event_fd: %s", __func__,
                      strerror(errno));
         close(mis->userfault_fd);
         return -1;
-- 
2.13.6


* [Qemu-devel] [PATCH v4 03/32] migration: provide postcopy_fault_thread_notify()
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
  2017-11-08  6:00 ` [Qemu-devel] [PATCH v4 01/32] migration: better error handling with QEMUFile Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 02/32] migration: reuse mis->userfault_quit_fd Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 04/32] migration: new postcopy-pause state Peter Xu
                   ` (29 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

A general helper to notify the fault thread.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/postcopy-ram.c | 35 ++++++++++++++++++++---------------
 migration/postcopy-ram.h |  2 ++
 2 files changed, 22 insertions(+), 15 deletions(-)

diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 9ad4f20f82..032abfbf1a 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -377,25 +377,15 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
     trace_postcopy_ram_incoming_cleanup_entry();
 
     if (mis->have_fault_thread) {
-        uint64_t tmp64;
-
         if (qemu_ram_foreach_block(cleanup_range, mis)) {
             return -1;
         }
-        /*
-         * Tell the fault_thread to exit, it's an eventfd that should
-         * currently be at 0, we're going to increment it to 1
-         */
-        tmp64 = 1;
+        /* Let the fault thread quit */
         atomic_set(&mis->fault_thread_quit, 1);
-        if (write(mis->userfault_event_fd, &tmp64, 8) == 8) {
-            trace_postcopy_ram_incoming_cleanup_join();
-            qemu_thread_join(&mis->fault_thread);
-        } else {
-            /* Not much we can do here, but may as well report it */
-            error_report("%s: incrementing userfault_event_fd: %s", __func__,
-                         strerror(errno));
-        }
+        postcopy_fault_thread_notify(mis);
+        trace_postcopy_ram_incoming_cleanup_join();
+        qemu_thread_join(&mis->fault_thread);
+
         trace_postcopy_ram_incoming_cleanup_closeuf();
         close(mis->userfault_fd);
         close(mis->userfault_event_fd);
@@ -824,6 +814,21 @@ void *postcopy_get_tmp_page(MigrationIncomingState *mis)
 
 /* ------------------------------------------------------------------------- */
 
+void postcopy_fault_thread_notify(MigrationIncomingState *mis)
+{
+    uint64_t tmp64 = 1;
+
+    /*
+     * Wakeup the fault_thread.  It's an eventfd that should currently
+     * be at 0, we're going to increment it to 1
+     */
+    if (write(mis->userfault_event_fd, &tmp64, 8) != 8) {
+        /* Not much we can do here, but may as well report it */
+        error_report("%s: incrementing failed: %s", __func__,
+                     strerror(errno));
+    }
+}
+
 /**
  * postcopy_discard_send_init: Called at the start of each RAMBlock before
  *   asking to discard individual ranges.
diff --git a/migration/postcopy-ram.h b/migration/postcopy-ram.h
index 77ea0fd264..14f6cadcbd 100644
--- a/migration/postcopy-ram.h
+++ b/migration/postcopy-ram.h
@@ -114,4 +114,6 @@ PostcopyState postcopy_state_get(void);
 /* Set the state and return the old state */
 PostcopyState postcopy_state_set(PostcopyState new_state);
 
+void postcopy_fault_thread_notify(MigrationIncomingState *mis);
+
 #endif
-- 
2.13.6


* [Qemu-devel] [PATCH v4 04/32] migration: new postcopy-pause state
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (2 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 03/32] migration: provide postcopy_fault_thread_notify() Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 05/32] migration: implement "postcopy-pause" src logic Peter Xu
                   ` (28 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Introduce a new state, "postcopy-paused", which can be used when the
postcopy migration is paused. It is targeted at postcopy network
failure recovery.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 2 ++
 qapi/migration.json   | 5 ++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/migration/migration.c b/migration/migration.c
index eae34d0524..dd270f8bc5 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -530,6 +530,7 @@ static bool migration_is_setup_or_active(int state)
     switch (state) {
     case MIGRATION_STATUS_ACTIVE:
     case MIGRATION_STATUS_POSTCOPY_ACTIVE:
+    case MIGRATION_STATUS_POSTCOPY_PAUSED:
     case MIGRATION_STATUS_SETUP:
     case MIGRATION_STATUS_PRE_SWITCHOVER:
     case MIGRATION_STATUS_DEVICE:
@@ -609,6 +610,7 @@ MigrationInfo *qmp_query_migrate(Error **errp)
     case MIGRATION_STATUS_POSTCOPY_ACTIVE:
     case MIGRATION_STATUS_PRE_SWITCHOVER:
     case MIGRATION_STATUS_DEVICE:
+    case MIGRATION_STATUS_POSTCOPY_PAUSED:
          /* TODO add some postcopy stats */
         info->has_status = true;
         info->has_total_time = true;
diff --git a/qapi/migration.json b/qapi/migration.json
index bbc4671ded..dbcd43aa40 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -89,6 +89,8 @@
 #
 # @postcopy-active: like active, but now in postcopy mode. (since 2.5)
 #
+# @postcopy-paused: during postcopy but paused. (since 2.12)
+#
 # @completed: migration is finished.
 #
 # @failed: some error occurred during migration process.
@@ -106,7 +108,8 @@
 ##
 { 'enum': 'MigrationStatus',
   'data': [ 'none', 'setup', 'cancelling', 'cancelled',
-            'active', 'postcopy-active', 'completed', 'failed', 'colo',
+            'active', 'postcopy-active', 'postcopy-paused',
+            'completed', 'failed', 'colo',
             'pre-switchover', 'device' ] }
 
 ##
-- 
2.13.6


* [Qemu-devel] [PATCH v4 05/32] migration: implement "postcopy-pause" src logic
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (3 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 04/32] migration: new postcopy-pause state Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-30 10:49   ` Dr. David Alan Gilbert
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 06/32] migration: allow dst vm pause on postcopy Peter Xu
                   ` (27 subsequent siblings)
  32 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Now when the network goes down during postcopy, the source side will
not fail the migration. Instead we convert the status into this new
paused state, and we will wait for a rescue in the future.

If a recovery is detected, migration_thread() will reset its local
variables to prepare for it.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c  | 98 +++++++++++++++++++++++++++++++++++++++++++++++---
 migration/migration.h  |  3 ++
 migration/trace-events |  1 +
 3 files changed, 98 insertions(+), 4 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index dd270f8bc5..46e7ca36a4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1111,6 +1111,8 @@ static void migrate_fd_cleanup(void *opaque)
     }
     notifier_list_notify(&migration_state_notifiers, s);
     block_cleanup_parameters(s);
+
+    qemu_sem_destroy(&s->postcopy_pause_sem);
 }
 
 void migrate_set_error(MigrationState *s, const Error *error)
@@ -1267,6 +1269,7 @@ MigrationState *migrate_init(void)
     s->migration_thread_running = false;
     error_free(s->error);
     s->error = NULL;
+    qemu_sem_init(&s->postcopy_pause_sem, 0);
 
     migrate_set_state(&s->state, MIGRATION_STATUS_NONE, MIGRATION_STATUS_SETUP);
 
@@ -2159,6 +2162,80 @@ bool migrate_colo_enabled(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_X_COLO];
 }
 
+typedef enum MigThrError {
+    /* No error detected */
+    MIG_THR_ERR_NONE = 0,
+    /* Detected error, but resumed successfully */
+    MIG_THR_ERR_RECOVERED = 1,
+    /* Detected fatal error, need to exit */
+    MIG_THR_ERR_FATAL = 2,
+} MigThrError;
+
+/*
+ * We don't return until we are in a safe state to continue the
+ * current postcopy migration.  Returns MIG_THR_ERR_RECOVERED if
+ * recovered, or MIG_THR_ERR_FATAL if an unrecoverable failure happened.
+ */
+static MigThrError postcopy_pause(MigrationState *s)
+{
+    assert(s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE);
+    migrate_set_state(&s->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
+                      MIGRATION_STATUS_POSTCOPY_PAUSED);
+
+    /* Current channel is possibly broken. Release it. */
+    assert(s->to_dst_file);
+    qemu_file_shutdown(s->to_dst_file);
+    qemu_fclose(s->to_dst_file);
+    s->to_dst_file = NULL;
+
+    error_report("Detected IO failure for postcopy. "
+                 "Migration paused.");
+
+    /*
+     * We wait until things are fixed up. Then someone will set the
+     * state back for us.
+     */
+    while (s->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
+        qemu_sem_wait(&s->postcopy_pause_sem);
+    }
+
+    trace_postcopy_pause_continued();
+
+    return MIG_THR_ERR_RECOVERED;
+}
+
+static MigThrError migration_detect_error(MigrationState *s)
+{
+    int ret;
+
+    /* Try to detect any file errors */
+    ret = qemu_file_get_error(s->to_dst_file);
+
+    if (!ret) {
+        /* Everything is fine */
+        return MIG_THR_ERR_NONE;
+    }
+
+    if (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE && ret == -EIO) {
+        /*
+         * For postcopy, we allow the network to be down for a
+         * while. After that, it can be continued by a
+         * recovery phase.
+         */
+        return postcopy_pause(s);
+    } else {
+        /*
+         * For precopy (or postcopy with an error outside IO), we
+         * fail immediately.
+         */
+        migrate_set_state(&s->state, s->state, MIGRATION_STATUS_FAILED);
+        trace_migration_thread_file_err();
+
+        /* Time to stop the migration, now. */
+        return MIG_THR_ERR_FATAL;
+    }
+}
+
 /*
  * Master migration thread on the source VM.
  * It drives the migration and pumps the data down the outgoing channel.
@@ -2183,6 +2260,7 @@ static void *migration_thread(void *opaque)
     /* The active state we expect to be in; ACTIVE or POSTCOPY_ACTIVE */
     enum MigrationStatus current_active_state = MIGRATION_STATUS_ACTIVE;
     bool enable_colo = migrate_colo_enabled();
+    MigThrError thr_error;
 
     rcu_register_thread();
 
@@ -2255,12 +2333,24 @@ static void *migration_thread(void *opaque)
             }
         }
 
-        if (qemu_file_get_error(s->to_dst_file)) {
-            migrate_set_state(&s->state, current_active_state,
-                              MIGRATION_STATUS_FAILED);
-            trace_migration_thread_file_err();
+        /*
+         * Try to detect any kind of failures, and see whether we
+         * should stop the migration now.
+         */
+        thr_error = migration_detect_error(s);
+        if (thr_error == MIG_THR_ERR_FATAL) {
+            /* Stop migration */
             break;
+        } else if (thr_error == MIG_THR_ERR_RECOVERED) {
+            /*
+             * Just recovered from an error (e.g. a network failure):
+             * reset the local variables. This is important to avoid
+             * breaking the transferred_bytes and bandwidth calculation.
+             */
+            initial_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+            initial_bytes = 0;
         }
+
         current_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
         if (current_time >= initial_time + BUFFER_DELAY) {
             uint64_t transferred_bytes = qemu_ftell(s->to_dst_file) -
diff --git a/migration/migration.h b/migration/migration.h
index 6d36400975..36aaa13f50 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -156,6 +156,9 @@ struct MigrationState
     bool send_configuration;
     /* Whether we send section footer during migration */
     bool send_section_footer;
+
+    /* Needed by postcopy-pause state */
+    QemuSemaphore postcopy_pause_sem;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
diff --git a/migration/trace-events b/migration/trace-events
index 6f29fcc686..da1c63a933 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -99,6 +99,7 @@ migration_thread_setup_complete(void) ""
 open_return_path_on_source(void) ""
 open_return_path_on_source_continue(void) ""
 postcopy_start(void) ""
+postcopy_pause_continued(void) ""
 postcopy_start_set_run(void) ""
 source_return_path_thread_bad_end(void) ""
 source_return_path_thread_end(void) ""
-- 
2.13.6


* [Qemu-devel] [PATCH v4 06/32] migration: allow dst vm pause on postcopy
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (4 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 05/32] migration: implement "postcopy-pause" src logic Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-30 11:17   ` Dr. David Alan Gilbert
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 07/32] migration: allow src return path to pause Peter Xu
                   ` (26 subsequent siblings)
  32 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

When there is an IO error on the incoming channel (e.g., the network went
down), instead of bailing out immediately, we allow the dst vm to switch
to the new POSTCOPY_PAUSE state. Currently it is still simple - it waits
on the new semaphore until someone pokes it for another attempt.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c  |  1 +
 migration/migration.h  |  3 +++
 migration/savevm.c     | 60 ++++++++++++++++++++++++++++++++++++++++++++++++--
 migration/trace-events |  2 ++
 4 files changed, 64 insertions(+), 2 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 46e7ca36a4..b166e19785 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -150,6 +150,7 @@ MigrationIncomingState *migration_incoming_get_current(void)
         memset(&mis_current, 0, sizeof(MigrationIncomingState));
         qemu_mutex_init(&mis_current.rp_mutex);
         qemu_event_init(&mis_current.main_thread_load_event, false);
+        qemu_sem_init(&mis_current.postcopy_pause_sem_dst, 0);
         once = true;
     }
     return &mis_current;
diff --git a/migration/migration.h b/migration/migration.h
index 36aaa13f50..55894ecb79 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -61,6 +61,9 @@ struct MigrationIncomingState {
     /* The coroutine we should enter (back) after failover */
     Coroutine *migration_incoming_co;
     QemuSemaphore colo_incoming_sem;
+
+    /* notify PAUSED postcopy incoming migrations to try to continue */
+    QemuSemaphore postcopy_pause_sem_dst;
 };
 
 MigrationIncomingState *migration_incoming_get_current(void);
diff --git a/migration/savevm.c b/migration/savevm.c
index 1da0255cd7..93e308ebf0 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1529,8 +1529,8 @@ static int loadvm_postcopy_ram_handle_discard(MigrationIncomingState *mis,
  */
 static void *postcopy_ram_listen_thread(void *opaque)
 {
-    QEMUFile *f = opaque;
     MigrationIncomingState *mis = migration_incoming_get_current();
+    QEMUFile *f = mis->from_src_file;
     int load_res;
 
     migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
@@ -1544,6 +1544,14 @@ static void *postcopy_ram_listen_thread(void *opaque)
      */
     qemu_file_set_blocking(f, true);
     load_res = qemu_loadvm_state_main(f, mis);
+
+    /*
+     * This is tricky: mis->from_src_file can change after the call
+     * above returns, if postcopy recovery happened. In the future, we
+     * may want a wrapper for the QEMUFile handle.
+     */
+    f = mis->from_src_file;
+
     /* And non-blocking again so we don't block in any cleanup */
     qemu_file_set_blocking(f, false);
 
@@ -1626,7 +1634,7 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
     /* Start up the listening thread and wait for it to signal ready */
     qemu_sem_init(&mis->listen_thread_sem, 0);
     qemu_thread_create(&mis->listen_thread, "postcopy/listen",
-                       postcopy_ram_listen_thread, mis->from_src_file,
+                       postcopy_ram_listen_thread, NULL,
                        QEMU_THREAD_DETACHED);
     qemu_sem_wait(&mis->listen_thread_sem);
     qemu_sem_destroy(&mis->listen_thread_sem);
@@ -2011,11 +2019,44 @@ void qemu_loadvm_state_cleanup(void)
     }
 }
 
+/* Return true if we should continue the migration, false otherwise. */
+static bool postcopy_pause_incoming(MigrationIncomingState *mis)
+{
+    trace_postcopy_pause_incoming();
+
+    migrate_set_state(&mis->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
+                      MIGRATION_STATUS_POSTCOPY_PAUSED);
+
+    assert(mis->from_src_file);
+    qemu_file_shutdown(mis->from_src_file);
+    qemu_fclose(mis->from_src_file);
+    mis->from_src_file = NULL;
+
+    assert(mis->to_src_file);
+    qemu_file_shutdown(mis->to_src_file);
+    qemu_mutex_lock(&mis->rp_mutex);
+    qemu_fclose(mis->to_src_file);
+    mis->to_src_file = NULL;
+    qemu_mutex_unlock(&mis->rp_mutex);
+
+    error_report("Detected IO failure for postcopy. "
+                 "Migration paused.");
+
+    while (mis->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
+        qemu_sem_wait(&mis->postcopy_pause_sem_dst);
+    }
+
+    trace_postcopy_pause_incoming_continued();
+
+    return true;
+}
+
 static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
 {
     uint8_t section_type;
     int ret = 0;
 
+retry:
     while (true) {
         section_type = qemu_get_byte(f);
 
@@ -2060,6 +2101,21 @@ static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
 out:
     if (ret < 0) {
         qemu_file_set_error(f, ret);
+
+        /*
+         * Detect whether it is:
+         *
+         * 1. postcopy running
+         * 2. network failure (-EIO)
+         *
+         * If so, we try to wait for a recovery.
+         */
+        if (mis->state == MIGRATION_STATUS_POSTCOPY_ACTIVE &&
+            ret == -EIO && postcopy_pause_incoming(mis)) {
+            /* Reset f to point to the newly created channel */
+            f = mis->from_src_file;
+            goto retry;
+        }
     }
     return ret;
 }
diff --git a/migration/trace-events b/migration/trace-events
index da1c63a933..bed1646cd6 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -100,6 +100,8 @@ open_return_path_on_source(void) ""
 open_return_path_on_source_continue(void) ""
 postcopy_start(void) ""
 postcopy_pause_continued(void) ""
+postcopy_pause_incoming(void) ""
+postcopy_pause_incoming_continued(void) ""
 postcopy_start_set_run(void) ""
 source_return_path_thread_bad_end(void) ""
 source_return_path_thread_end(void) ""
-- 
2.13.6


* [Qemu-devel] [PATCH v4 07/32] migration: allow src return path to pause
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (5 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 06/32] migration: allow dst vm pause on postcopy Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 08/32] migration: allow send_rq to fail Peter Xu
                   ` (25 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Let the thread pause for network issues.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c  | 35 +++++++++++++++++++++++++++++++++--
 migration/migration.h  |  1 +
 migration/trace-events |  2 ++
 3 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index b166e19785..8d93b891e3 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1114,6 +1114,7 @@ static void migrate_fd_cleanup(void *opaque)
     block_cleanup_parameters(s);
 
     qemu_sem_destroy(&s->postcopy_pause_sem);
+    qemu_sem_destroy(&s->postcopy_pause_rp_sem);
 }
 
 void migrate_set_error(MigrationState *s, const Error *error)
@@ -1271,6 +1272,7 @@ MigrationState *migrate_init(void)
     error_free(s->error);
     s->error = NULL;
     qemu_sem_init(&s->postcopy_pause_sem, 0);
+    qemu_sem_init(&s->postcopy_pause_rp_sem, 0);
 
     migrate_set_state(&s->state, MIGRATION_STATUS_NONE, MIGRATION_STATUS_SETUP);
 
@@ -1692,6 +1694,18 @@ static void migrate_handle_rp_req_pages(MigrationState *ms, const char* rbname,
     }
 }
 
+/* Return true to retry, false to quit */
+static bool postcopy_pause_return_path_thread(MigrationState *s)
+{
+    trace_postcopy_pause_return_path();
+
+    qemu_sem_wait(&s->postcopy_pause_rp_sem);
+
+    trace_postcopy_pause_return_path_continued();
+
+    return true;
+}
+
 /*
  * Handles messages sent on the return path towards the source VM
  *
@@ -1708,6 +1722,8 @@ static void *source_return_path_thread(void *opaque)
     int res;
 
     trace_source_return_path_thread_entry();
+
+retry:
     while (!ms->rp_state.error && !qemu_file_get_error(rp) &&
            migration_is_setup_or_active(ms->state)) {
         trace_source_return_path_thread_loop_top();
@@ -1799,13 +1815,28 @@ static void *source_return_path_thread(void *opaque)
             break;
         }
     }
-    if (qemu_file_get_error(rp)) {
+
+out:
+    res = qemu_file_get_error(rp);
+    if (res) {
+        if (res == -EIO) {
+            /*
+             * Maybe there is something we can do: it looks like a
+             * network down issue, and we pause for a recovery.
+             */
+            if (postcopy_pause_return_path_thread(ms)) {
+                /* Reload rp, reset the rest */
+                rp = ms->rp_state.from_dst_file;
+                ms->rp_state.error = false;
+                goto retry;
+            }
+        }
+
         trace_source_return_path_thread_bad_end();
         mark_source_rp_bad(ms);
     }
 
     trace_source_return_path_thread_end();
-out:
     ms->rp_state.from_dst_file = NULL;
     qemu_fclose(rp);
     return NULL;
diff --git a/migration/migration.h b/migration/migration.h
index 55894ecb79..ebb049f692 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -162,6 +162,7 @@ struct MigrationState
 
     /* Needed by postcopy-pause state */
     QemuSemaphore postcopy_pause_sem;
+    QemuSemaphore postcopy_pause_rp_sem;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
diff --git a/migration/trace-events b/migration/trace-events
index bed1646cd6..a4031cfe00 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -99,6 +99,8 @@ migration_thread_setup_complete(void) ""
 open_return_path_on_source(void) ""
 open_return_path_on_source_continue(void) ""
 postcopy_start(void) ""
+postcopy_pause_return_path(void) ""
+postcopy_pause_return_path_continued(void) ""
 postcopy_pause_continued(void) ""
 postcopy_pause_incoming(void) ""
 postcopy_pause_incoming_continued(void) ""
-- 
2.13.6


* [Qemu-devel] [PATCH v4 08/32] migration: allow send_rq to fail
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (6 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 07/32] migration: allow src return path to pause Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-30 12:13   ` Dr. David Alan Gilbert
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 09/32] migration: allow fault thread to pause Peter Xu
                   ` (24 subsequent siblings)
  32 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Currently we do not allow failures to happen when sending data from the
destination to the source via the return path. However it is possible
that there can be errors along the way.  This patch allows
migrate_send_rp_message() to return an error when that happens, and
further extends the same to migrate_send_rp_req_pages().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 38 ++++++++++++++++++++++++++++++--------
 migration/migration.h |  2 +-
 2 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 8d93b891e3..db896233f6 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -199,17 +199,35 @@ static void deferred_incoming_migration(Error **errp)
  * Send a message on the return channel back to the source
  * of the migration.
  */
-static void migrate_send_rp_message(MigrationIncomingState *mis,
-                                    enum mig_rp_message_type message_type,
-                                    uint16_t len, void *data)
+static int migrate_send_rp_message(MigrationIncomingState *mis,
+                                   enum mig_rp_message_type message_type,
+                                   uint16_t len, void *data)
 {
+    int ret = 0;
+
     trace_migrate_send_rp_message((int)message_type, len);
     qemu_mutex_lock(&mis->rp_mutex);
+
+    /*
+     * It's possible that the file handle got lost due to network
+     * failures.
+     */
+    if (!mis->to_src_file) {
+        ret = -EIO;
+        goto error;
+    }
+
     qemu_put_be16(mis->to_src_file, (unsigned int)message_type);
     qemu_put_be16(mis->to_src_file, len);
     qemu_put_buffer(mis->to_src_file, data, len);
     qemu_fflush(mis->to_src_file);
+
+    /* It's possible that the QEMU file got an error during sending */
+    ret = qemu_file_get_error(mis->to_src_file);
+
+error:
     qemu_mutex_unlock(&mis->rp_mutex);
+    return ret;
 }
 
 /* Request a range of pages from the source VM at the given
@@ -219,26 +237,30 @@ static void migrate_send_rp_message(MigrationIncomingState *mis,
  *   Start: Address offset within the RB
  *   Len: Length in bytes required - must be a multiple of pagesize
  */
-void migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
-                               ram_addr_t start, size_t len)
+int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
+                              ram_addr_t start, size_t len)
 {
     uint8_t bufc[12 + 1 + 255]; /* start (8), len (4), rbname up to 256 */
     size_t msglen = 12; /* start + len */
+    int rbname_len;
+    enum mig_rp_message_type msg_type;
 
     *(uint64_t *)bufc = cpu_to_be64((uint64_t)start);
     *(uint32_t *)(bufc + 8) = cpu_to_be32((uint32_t)len);
 
     if (rbname) {
-        int rbname_len = strlen(rbname);
+        rbname_len = strlen(rbname);
         assert(rbname_len < 256);
 
         bufc[msglen++] = rbname_len;
         memcpy(bufc + msglen, rbname, rbname_len);
         msglen += rbname_len;
-        migrate_send_rp_message(mis, MIG_RP_MSG_REQ_PAGES_ID, msglen, bufc);
+        msg_type = MIG_RP_MSG_REQ_PAGES_ID;
     } else {
-        migrate_send_rp_message(mis, MIG_RP_MSG_REQ_PAGES, msglen, bufc);
+        msg_type = MIG_RP_MSG_REQ_PAGES;
     }
+
+    return migrate_send_rp_message(mis, msg_type, msglen, bufc);
 }
 
 void qemu_start_incoming_migration(const char *uri, Error **errp)
diff --git a/migration/migration.h b/migration/migration.h
index ebb049f692..b63cdfbfdb 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -216,7 +216,7 @@ void migrate_send_rp_shut(MigrationIncomingState *mis,
                           uint32_t value);
 void migrate_send_rp_pong(MigrationIncomingState *mis,
                           uint32_t value);
-void migrate_send_rp_req_pages(MigrationIncomingState *mis, const char* rbname,
+int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char* rbname,
                               ram_addr_t start, size_t len);
 
 #endif
-- 
2.13.6


* [Qemu-devel] [PATCH v4 09/32] migration: allow fault thread to pause
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (7 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 08/32] migration: allow send_rq to fail Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 10/32] qmp: hmp: add migrate "resume" option Peter Xu
                   ` (23 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Allow the fault thread to stop handling page faults temporarily. When a
network failure happens (and if we expect a recovery afterwards), we
should not allow the fault thread to continue sending things to the
source; instead, it should halt for a while until the connection is
rebuilt.

When the dest main thread notices the failure, it kicks the fault thread
to switch to the pause state.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c    |  1 +
 migration/migration.h    |  1 +
 migration/postcopy-ram.c | 50 ++++++++++++++++++++++++++++++++++++++++++++----
 migration/savevm.c       |  3 +++
 migration/trace-events   |  2 ++
 5 files changed, 53 insertions(+), 4 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index db896233f6..54fba8668d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -151,6 +151,7 @@ MigrationIncomingState *migration_incoming_get_current(void)
         qemu_mutex_init(&mis_current.rp_mutex);
         qemu_event_init(&mis_current.main_thread_load_event, false);
         qemu_sem_init(&mis_current.postcopy_pause_sem_dst, 0);
+        qemu_sem_init(&mis_current.postcopy_pause_sem_fault, 0);
         once = true;
     }
     return &mis_current;
diff --git a/migration/migration.h b/migration/migration.h
index b63cdfbfdb..2751b2dffc 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -64,6 +64,7 @@ struct MigrationIncomingState {
 
     /* notify PAUSED postcopy incoming migrations to try to continue */
     QemuSemaphore postcopy_pause_sem_dst;
+    QemuSemaphore postcopy_pause_sem_fault;
 };
 
 MigrationIncomingState *migration_incoming_get_current(void);
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 032abfbf1a..31c290c884 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -485,6 +485,17 @@ static int ram_block_enable_notify(const char *block_name, void *host_addr,
     return 0;
 }
 
+static bool postcopy_pause_fault_thread(MigrationIncomingState *mis)
+{
+    trace_postcopy_pause_fault_thread();
+
+    qemu_sem_wait(&mis->postcopy_pause_sem_fault);
+
+    trace_postcopy_pause_fault_thread_continued();
+
+    return true;
+}
+
 /*
  * Handle faults detected by the USERFAULT markings
  */
@@ -535,6 +546,22 @@ static void *postcopy_ram_fault_thread(void *opaque)
             }
         }
 
+        if (!mis->to_src_file) {
+            /*
+             * Someone may have told us via the event that the
+             * return path is already broken. We should hold until
+             * the channel is rebuilt.
+             */
+            if (postcopy_pause_fault_thread(mis)) {
+                last_rb = NULL;
+                /* Continue to read the userfaultfd */
+            } else {
+                error_report("%s: paused, but not allowed to continue",
+                             __func__);
+                break;
+            }
+        }
+
         ret = read(mis->userfault_fd, &msg, sizeof(msg));
         if (ret != sizeof(msg)) {
             if (errno == EAGAIN) {
@@ -574,18 +601,33 @@ static void *postcopy_ram_fault_thread(void *opaque)
                                                 qemu_ram_get_idstr(rb),
                                                 rb_offset);
 
+retry:
         /*
          * Send the request to the source - we want to request one
          * of our host page sizes (which is >= TPS)
          */
         if (rb != last_rb) {
             last_rb = rb;
-            migrate_send_rp_req_pages(mis, qemu_ram_get_idstr(rb),
-                                     rb_offset, qemu_ram_pagesize(rb));
+            ret = migrate_send_rp_req_pages(mis, qemu_ram_get_idstr(rb),
+                                            rb_offset, qemu_ram_pagesize(rb));
         } else {
             /* Save some space */
-            migrate_send_rp_req_pages(mis, NULL,
-                                     rb_offset, qemu_ram_pagesize(rb));
+            ret = migrate_send_rp_req_pages(mis, NULL,
+                                            rb_offset, qemu_ram_pagesize(rb));
+        }
+
+        if (ret) {
+            /* May be network failure, try to wait for recovery */
+            if (ret == -EIO && postcopy_pause_fault_thread(mis)) {
+                /* We got reconnected somehow, try to continue */
+                last_rb = NULL;
+                goto retry;
+            } else {
+                /* This is an unavoidable fault */
+                error_report("%s: migrate_send_rp_req_pages() get %d",
+                             __func__, ret);
+                break;
+            }
         }
     }
     trace_postcopy_ram_fault_thread_exit();
diff --git a/migration/savevm.c b/migration/savevm.c
index 93e308ebf0..86ada6d0e7 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2039,6 +2039,9 @@ static bool postcopy_pause_incoming(MigrationIncomingState *mis)
     mis->to_src_file = NULL;
     qemu_mutex_unlock(&mis->rp_mutex);
 
+    /* Notify the fault thread for the invalidated file handle */
+    postcopy_fault_thread_notify(mis);
+
     error_report("Detected IO failure for postcopy. "
                  "Migration paused.");
 
diff --git a/migration/trace-events b/migration/trace-events
index a4031cfe00..32f02cbdcc 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -101,6 +101,8 @@ open_return_path_on_source_continue(void) ""
 postcopy_start(void) ""
 postcopy_pause_return_path(void) ""
 postcopy_pause_return_path_continued(void) ""
+postcopy_pause_fault_thread(void) ""
+postcopy_pause_fault_thread_continued(void) ""
 postcopy_pause_continued(void) ""
 postcopy_pause_incoming(void) ""
 postcopy_pause_incoming_continued(void) ""
-- 
2.13.6


* [Qemu-devel] [PATCH v4 10/32] qmp: hmp: add migrate "resume" option
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (8 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 09/32] migration: allow fault thread to pause Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 11/32] migration: pass MigrationState to migrate_init() Peter Xu
                   ` (22 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

It will be used when we want to resume a paused migration.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 hmp-commands.hx       | 7 ++++---
 hmp.c                 | 4 +++-
 migration/migration.c | 2 +-
 qapi/migration.json   | 5 ++++-
 4 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index 4afd57cf5f..ffcdc34652 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -928,13 +928,14 @@ ETEXI
 
     {
         .name       = "migrate",
-        .args_type  = "detach:-d,blk:-b,inc:-i,uri:s",
-        .params     = "[-d] [-b] [-i] uri",
+        .args_type  = "detach:-d,blk:-b,inc:-i,resume:-r,uri:s",
+        .params     = "[-d] [-b] [-i] [-r] uri",
         .help       = "migrate to URI (using -d to not wait for completion)"
 		      "\n\t\t\t -b for migration without shared storage with"
 		      " full copy of disk\n\t\t\t -i for migration without "
 		      "shared storage with incremental copy of disk "
-		      "(base image shared between src and destination)",
+		      "(base image shared between src and destination)"
+                      "\n\t\t\t -r to resume a paused migration",
         .cmd        = hmp_migrate,
     },
 
diff --git a/hmp.c b/hmp.c
index 35a7041824..c7e1022283 100644
--- a/hmp.c
+++ b/hmp.c
@@ -1921,10 +1921,12 @@ void hmp_migrate(Monitor *mon, const QDict *qdict)
     bool detach = qdict_get_try_bool(qdict, "detach", false);
     bool blk = qdict_get_try_bool(qdict, "blk", false);
     bool inc = qdict_get_try_bool(qdict, "inc", false);
+    bool resume = qdict_get_try_bool(qdict, "resume", false);
     const char *uri = qdict_get_str(qdict, "uri");
     Error *err = NULL;
 
-    qmp_migrate(uri, !!blk, blk, !!inc, inc, false, false, &err);
+    qmp_migrate(uri, !!blk, blk, !!inc, inc,
+                false, false, true, resume, &err);
     if (err) {
         hmp_handle_error(mon, &err);
         return;
diff --git a/migration/migration.c b/migration/migration.c
index 54fba8668d..b080440143 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1369,7 +1369,7 @@ bool migration_is_blocked(Error **errp)
 
 void qmp_migrate(const char *uri, bool has_blk, bool blk,
                  bool has_inc, bool inc, bool has_detach, bool detach,
-                 Error **errp)
+                 bool has_resume, bool resume, Error **errp)
 {
     Error *local_err = NULL;
     MigrationState *s = migrate_get_current();
diff --git a/qapi/migration.json b/qapi/migration.json
index dbcd43aa40..f22fd7f3d1 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1012,6 +1012,8 @@
 # @detach: this argument exists only for compatibility reasons and
 #          is ignored by QEMU
 #
+# @resume: resume a paused migration, default "off". (since 2.12)
+#
 # Returns: nothing on success
 #
 # Since: 0.14.0
@@ -1033,7 +1035,8 @@
 #
 ##
 { 'command': 'migrate',
-  'data': {'uri': 'str', '*blk': 'bool', '*inc': 'bool', '*detach': 'bool' } }
+  'data': {'uri': 'str', '*blk': 'bool', '*inc': 'bool',
+           '*detach': 'bool', '*resume': 'bool' } }
 
 ##
 # @migrate-incoming:
-- 
2.13.6


* [Qemu-devel] [PATCH v4 11/32] migration: pass MigrationState to migrate_init()
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (9 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 10/32] qmp: hmp: add migrate "resume" option Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 12/32] migration: rebuild channel on source Peter Xu
                   ` (21 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Let the callers take the object, then pass it to migrate_init().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 7 ++-----
 migration/migration.h | 2 +-
 migration/savevm.c    | 5 ++++-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index b080440143..bf1bdd09da 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1269,10 +1269,8 @@ bool migration_is_idle(void)
     return false;
 }
 
-MigrationState *migrate_init(void)
+void migrate_init(MigrationState *s)
 {
-    MigrationState *s = migrate_get_current();
-
     /*
      * Reinitialise all migration state, except
      * parameters/capabilities that the user set, and
@@ -1300,7 +1298,6 @@ MigrationState *migrate_init(void)
     migrate_set_state(&s->state, MIGRATION_STATUS_NONE, MIGRATION_STATUS_SETUP);
 
     s->total_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
-    return s;
 }
 
 static GSList *migration_blockers;
@@ -1408,7 +1405,7 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
         migrate_set_block_incremental(s, true);
     }
 
-    s = migrate_init();
+    migrate_init(s);
 
     if (strstart(uri, "tcp:", &p)) {
         tcp_start_outgoing_migration(s, p, &local_err);
diff --git a/migration/migration.h b/migration/migration.h
index 2751b2dffc..d052669e1c 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -180,7 +180,7 @@ void migrate_fd_error(MigrationState *s, const Error *error);
 
 void migrate_fd_connect(MigrationState *s);
 
-MigrationState *migrate_init(void);
+void migrate_init(MigrationState *s);
 bool migration_is_blocked(Error **errp);
 /* True if outgoing migration has entered postcopy phase */
 bool migration_in_postcopy(void);
diff --git a/migration/savevm.c b/migration/savevm.c
index 86ada6d0e7..6d6f8ee3e4 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1256,8 +1256,11 @@ void qemu_savevm_state_cleanup(void)
 static int qemu_savevm_state(QEMUFile *f, Error **errp)
 {
     int ret;
-    MigrationState *ms = migrate_init();
+    MigrationState *ms = migrate_get_current();
     MigrationStatus status;
+
+    migrate_init(ms);
+
     ms->to_dst_file = f;
 
     if (migration_is_blocked(errp)) {
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 12/32] migration: rebuild channel on source
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (10 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 11/32] migration: pass MigrationState to migrate_init() Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 13/32] migration: new state "postcopy-recover" Peter Xu
                   ` (20 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

This patch detects the "resume" flag of the migration command, and
rebuilds the channels only if the flag is set.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 92 ++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 69 insertions(+), 23 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index bf1bdd09da..05a8d772ca 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1364,49 +1364,75 @@ bool migration_is_blocked(Error **errp)
     return false;
 }
 
-void qmp_migrate(const char *uri, bool has_blk, bool blk,
-                 bool has_inc, bool inc, bool has_detach, bool detach,
-                 bool has_resume, bool resume, Error **errp)
+/* Returns true if continue to migrate, or false if error detected */
+static bool migrate_prepare(MigrationState *s, bool blk, bool blk_inc,
+                            bool resume, Error **errp)
 {
     Error *local_err = NULL;
-    MigrationState *s = migrate_get_current();
-    const char *p;
+
+    if (resume) {
+        if (s->state != MIGRATION_STATUS_POSTCOPY_PAUSED) {
+            error_setg(errp, "Cannot resume if there is no "
+                       "paused migration");
+            return false;
+        }
+        /* This is a resume, skip init status */
+        return true;
+    }
 
     if (migration_is_setup_or_active(s->state) ||
         s->state == MIGRATION_STATUS_CANCELLING ||
         s->state == MIGRATION_STATUS_COLO) {
         error_setg(errp, QERR_MIGRATION_ACTIVE);
-        return;
+        return false;
     }
+
     if (runstate_check(RUN_STATE_INMIGRATE)) {
         error_setg(errp, "Guest is waiting for an incoming migration");
-        return;
+        return false;
     }
 
     if (migration_is_blocked(errp)) {
-        return;
+        return false;
     }
 
-    if ((has_blk && blk) || (has_inc && inc)) {
+    if (blk || blk_inc) {
         if (migrate_use_block() || migrate_use_block_incremental()) {
             error_setg(errp, "Command options are incompatible with "
                        "current migration capabilities");
-            return;
+            return false;
         }
         migrate_set_block_enabled(true, &local_err);
         if (local_err) {
             error_propagate(errp, local_err);
-            return;
+            return false;
         }
         s->must_remove_block_options = true;
     }
 
-    if (has_inc && inc) {
+    if (blk_inc) {
         migrate_set_block_incremental(s, true);
     }
 
     migrate_init(s);
 
+    return true;
+}
+
+void qmp_migrate(const char *uri, bool has_blk, bool blk,
+                 bool has_inc, bool inc, bool has_detach, bool detach,
+                 bool has_resume, bool resume, Error **errp)
+{
+    Error *local_err = NULL;
+    MigrationState *s = migrate_get_current();
+    const char *p;
+
+    if (!migrate_prepare(s, has_blk && blk, has_inc && inc,
+                         has_resume && resume, errp)) {
+        /* Error detected, put into errp */
+        return;
+    }
+
     if (strstart(uri, "tcp:", &p)) {
         tcp_start_outgoing_migration(s, p, &local_err);
 #ifdef CONFIG_RDMA
@@ -1862,7 +1888,8 @@ out:
     return NULL;
 }
 
-static int open_return_path_on_source(MigrationState *ms)
+static int open_return_path_on_source(MigrationState *ms,
+                                      bool create_thread)
 {
 
     ms->rp_state.from_dst_file = qemu_file_get_return_path(ms->to_dst_file);
@@ -1871,6 +1898,12 @@ static int open_return_path_on_source(MigrationState *ms)
     }
 
     trace_open_return_path_on_source();
+
+    if (!create_thread) {
+        /* We're done */
+        return 0;
+    }
+
     qemu_thread_create(&ms->rp_state.rp_thread, "return path",
                        source_return_path_thread, ms, QEMU_THREAD_JOINABLE);
 
@@ -2484,15 +2517,24 @@ static void *migration_thread(void *opaque)
 
 void migrate_fd_connect(MigrationState *s)
 {
-    s->expected_downtime = s->parameters.downtime_limit;
-    s->cleanup_bh = qemu_bh_new(migrate_fd_cleanup, s);
+    int64_t rate_limit;
+    bool resume = s->state == MIGRATION_STATUS_POSTCOPY_PAUSED;
 
-    qemu_file_set_blocking(s->to_dst_file, true);
-    qemu_file_set_rate_limit(s->to_dst_file,
-                             s->parameters.max_bandwidth / XFER_LIMIT_RATIO);
+    if (resume) {
+        /* This is a resumed migration */
+        rate_limit = INT64_MAX;
+    } else {
+        /* This is a fresh new migration */
+        rate_limit = s->parameters.max_bandwidth / XFER_LIMIT_RATIO;
+        s->expected_downtime = s->parameters.downtime_limit;
+        s->cleanup_bh = qemu_bh_new(migrate_fd_cleanup, s);
 
-    /* Notify before starting migration thread */
-    notifier_list_notify(&migration_state_notifiers, s);
+        /* Notify before starting migration thread */
+        notifier_list_notify(&migration_state_notifiers, s);
+    }
+
+    qemu_file_set_rate_limit(s->to_dst_file, rate_limit);
+    qemu_file_set_blocking(s->to_dst_file, true);
 
     /*
      * Open the return path. For postcopy, it is used exclusively. For
@@ -2500,15 +2542,19 @@ void migrate_fd_connect(MigrationState *s)
      * QEMU uses the return path.
      */
     if (migrate_postcopy_ram() || migrate_use_return_path()) {
-        if (open_return_path_on_source(s)) {
+        if (open_return_path_on_source(s, !resume)) {
             error_report("Unable to open return-path for postcopy");
-            migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
-                              MIGRATION_STATUS_FAILED);
+            migrate_set_state(&s->state, s->state, MIGRATION_STATUS_FAILED);
             migrate_fd_cleanup(s);
             return;
         }
     }
 
+    if (resume) {
+        /* TODO: do the resume logic */
+        return;
+    }
+
     if (multifd_save_setup() != 0) {
         migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
                           MIGRATION_STATUS_FAILED);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread
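The validity checks that migrate_prepare() performs above can be modeled as a small pure function. A minimal Python sketch (the state names mirror MIGRATION_STATUS_*, but the function shape and error strings are illustrative assumptions, not QEMU code):

```python
# Sketch of migrate_prepare()'s decision logic from the patch above.
# State names mirror MIGRATION_STATUS_*; error strings are illustrative.

ACTIVE_STATES = {"setup", "active", "postcopy-active", "postcopy-paused",
                 "cancelling", "colo"}

def migrate_prepare(state, resume, incoming_guest=False):
    """Return (ok, error) deciding whether a migrate command may proceed."""
    if resume:
        # A resume is only legal from a paused postcopy migration.
        if state != "postcopy-paused":
            return False, "Cannot resume if there is no paused migration"
        return True, None   # resume skips re-initialisation
    if state in ACTIVE_STATES:
        return False, "There's a migration process in progress"
    if incoming_guest:
        return False, "Guest is waiting for an incoming migration"
    return True, None
```

Note the asymmetry the patch introduces: resume bypasses migrate_init(), since the paused migration's state must be preserved.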

* [Qemu-devel] [PATCH v4 13/32] migration: new state "postcopy-recover"
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (11 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 12/32] migration: rebuild channel on source Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 14/32] migration: wakeup dst ram-load-thread for recover Peter Xu
                   ` (19 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Introduce the new migration state "postcopy-recover". If a migration
procedure is paused and the connection is rebuilt successfully
afterward, we'll switch the source VM state from "postcopy-paused" to
the new state "postcopy-recover", then do the resume logic in the
migration thread (along with the return path thread).

This patch only does the state switch on the source side. A follow-up
patch will handle the state switching on the destination side using the
same status bit.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 76 ++++++++++++++++++++++++++++++++++++++-------------
 qapi/migration.json   |  4 ++-
 2 files changed, 60 insertions(+), 20 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 05a8d772ca..0ba0ce1baf 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -555,6 +555,7 @@ static bool migration_is_setup_or_active(int state)
     case MIGRATION_STATUS_ACTIVE:
     case MIGRATION_STATUS_POSTCOPY_ACTIVE:
     case MIGRATION_STATUS_POSTCOPY_PAUSED:
+    case MIGRATION_STATUS_POSTCOPY_RECOVER:
     case MIGRATION_STATUS_SETUP:
     case MIGRATION_STATUS_PRE_SWITCHOVER:
     case MIGRATION_STATUS_DEVICE:
@@ -635,6 +636,7 @@ MigrationInfo *qmp_query_migrate(Error **errp)
     case MIGRATION_STATUS_PRE_SWITCHOVER:
     case MIGRATION_STATUS_DEVICE:
     case MIGRATION_STATUS_POSTCOPY_PAUSED:
+    case MIGRATION_STATUS_POSTCOPY_RECOVER:
          /* TODO add some postcopy stats */
         info->has_status = true;
         info->has_total_time = true;
@@ -2256,6 +2258,13 @@ typedef enum MigThrError {
     MIG_THR_ERR_FATAL = 2,
 } MigThrError;
 
+/* Return zero if success, or <0 for error */
+static int postcopy_do_resume(MigrationState *s)
+{
+    /* TODO: do the resume logic */
+    return 0;
+}
+
 /*
  * We don't return until we are in a safe state to continue current
  * postcopy migration.  Returns MIG_THR_ERR_RECOVERED if recovered, or
@@ -2264,29 +2273,55 @@ typedef enum MigThrError {
 static MigThrError postcopy_pause(MigrationState *s)
 {
     assert(s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE);
-    migrate_set_state(&s->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
-                      MIGRATION_STATUS_POSTCOPY_PAUSED);
 
-    /* Current channel is possibly broken. Release it. */
-    assert(s->to_dst_file);
-    qemu_file_shutdown(s->to_dst_file);
-    qemu_fclose(s->to_dst_file);
-    s->to_dst_file = NULL;
+    while (true) {
+        migrate_set_state(&s->state, s->state,
+                          MIGRATION_STATUS_POSTCOPY_PAUSED);
 
-    error_report("Detected IO failure for postcopy. "
-                 "Migration paused.");
+        /* Current channel is possibly broken. Release it. */
+        assert(s->to_dst_file);
+        qemu_file_shutdown(s->to_dst_file);
+        qemu_fclose(s->to_dst_file);
+        s->to_dst_file = NULL;
 
-    /*
-     * We wait until things fixed up. Then someone will setup the
-     * status back for us.
-     */
-    while (s->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
-        qemu_sem_wait(&s->postcopy_pause_sem);
-    }
+        error_report("Detected IO failure for postcopy. "
+                     "Migration paused.");
+
+        /*
+         * We wait until things are fixed up. Then someone will set the
+         * status back for us.
+         */
+        while (s->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
+            qemu_sem_wait(&s->postcopy_pause_sem);
+        }
 
-    trace_postcopy_pause_continued();
+        if (s->state == MIGRATION_STATUS_POSTCOPY_RECOVER) {
+            /* Woken up by a recover procedure. Give it a shot */
+
+            /*
+             * Firstly, let's wake up the return path now, with a new
+             * return path channel.
+             */
+            qemu_sem_post(&s->postcopy_pause_rp_sem);
 
-    return MIG_THR_ERR_RECOVERED;
+            /* Do the resume logic */
+            if (postcopy_do_resume(s) == 0) {
+                /* Let's continue! */
+                trace_postcopy_pause_continued();
+                return MIG_THR_ERR_RECOVERED;
+            } else {
+                /*
+                 * Something wrong happened during the recovery, let's
+                 * pause again. Pause is always better than throwing
+                 * data away.
+                 */
+                continue;
+            }
+        } else {
+            /* This is not right... Time to quit. */
+            return MIG_THR_ERR_FATAL;
+        }
+    }
 }
 
 static MigThrError migration_detect_error(MigrationState *s)
@@ -2551,7 +2586,10 @@ void migrate_fd_connect(MigrationState *s)
     }
 
     if (resume) {
-        /* TODO: do the resume logic */
+        /* Wakeup the main migration thread to do the recovery */
+        migrate_set_state(&s->state, MIGRATION_STATUS_POSTCOPY_PAUSED,
+                          MIGRATION_STATUS_POSTCOPY_RECOVER);
+        qemu_sem_post(&s->postcopy_pause_sem);
         return;
     }
 
diff --git a/qapi/migration.json b/qapi/migration.json
index f22fd7f3d1..4a3eff62f1 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -91,6 +91,8 @@
 #
 # @postcopy-paused: during postcopy but paused. (since 2.12)
 #
+# @postcopy-recover: trying to recover from a paused postcopy. (since 2.12)
+#
 # @completed: migration is finished.
 #
 # @failed: some error occurred during migration process.
@@ -109,7 +111,7 @@
 { 'enum': 'MigrationStatus',
   'data': [ 'none', 'setup', 'cancelling', 'cancelled',
             'active', 'postcopy-active', 'postcopy-paused',
-            'completed', 'failed', 'colo',
+            'postcopy-recover', 'completed', 'failed', 'colo',
             'pre-switchover', 'device' ] }
 
 ##
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread
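The pause/recover loop that postcopy_pause() implements above reduces to a small state machine: park in "postcopy-paused", and on each wakeup either retry the resume or quit. A hedged Python sketch (the event-list abstraction replaces the semaphore wait and is mine, not QEMU code):

```python
# Sketch of the postcopy_pause() loop from the patch above: the migration
# thread parks in "postcopy-paused" until an external event either
# switches the state to "postcopy-recover" (a reconnect) or asks it to
# quit. A failed resume attempt simply pauses again.

def postcopy_pause(events):
    """events: iterable of ("recover", resume_ok) or ("quit", None)."""
    for kind, resume_ok in events:
        if kind == "recover":
            if resume_ok:
                return "MIG_THR_ERR_RECOVERED"   # continue migration
            # Resume failed: pause again and wait for the next event.
            # Pause is always better than throwing data away.
            continue
        return "MIG_THR_ERR_FATAL"               # unexpected state: quit
    return "MIG_THR_ERR_FATAL"
```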

* [Qemu-devel] [PATCH v4 14/32] migration: wakeup dst ram-load-thread for recover
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (12 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 13/32] migration: new state "postcopy-recover" Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 15/32] migration: new cmd MIG_CMD_RECV_BITMAP Peter Xu
                   ` (18 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

On the destination side, we cannot wake up all the threads when we get
reconnected. The first thing to do is to wake up the main load thread,
so that we can continue to receive valid messages from the source and
reply when needed.

At this point, we switch the destination VM state from postcopy-paused
back to postcopy-recover.

Now we are finally ready to do the resume logic.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 0ba0ce1baf..32c036fa82 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -423,8 +423,34 @@ static void migration_incoming_process(void)
 
 void migration_fd_process_incoming(QEMUFile *f)
 {
-    migration_incoming_setup(f);
-    migration_incoming_process();
+    MigrationIncomingState *mis = migration_incoming_get_current();
+
+    if (mis->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
+        /* Resumed from a paused postcopy migration */
+
+        mis->from_src_file = f;
+        /* Postcopy has standalone thread to do vm load */
+        qemu_file_set_blocking(f, true);
+
+        /* Re-configure the return path */
+        mis->to_src_file = qemu_file_get_return_path(f);
+
+        migrate_set_state(&mis->state, MIGRATION_STATUS_POSTCOPY_PAUSED,
+                          MIGRATION_STATUS_POSTCOPY_RECOVER);
+
+        /*
+         * Here, we only wake up the main loading thread (while the
+         * fault thread will still be waiting), so that we can receive
+         * commands from the source now, and answer them if needed. The
+         * fault thread will be woken up afterwards, once we are sure
+         * that the source is ready to reply to page requests.
+         */
+        qemu_sem_post(&mis->postcopy_pause_sem_dst);
+    } else {
+        /* New incoming migration */
+        migration_incoming_setup(f);
+        migration_incoming_process();
+    }
 }
 
 void migration_ioc_process_incoming(QIOChannel *ioc)
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 15/32] migration: new cmd MIG_CMD_RECV_BITMAP
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (13 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 14/32] migration: wakeup dst ram-load-thread for recover Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 16/32] migration: new message MIG_RP_MSG_RECV_BITMAP Peter Xu
                   ` (17 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Add a new vm command MIG_CMD_RECV_BITMAP to request the received bitmap
of one ramblock.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/savevm.c     | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++
 migration/savevm.h     |  1 +
 migration/trace-events |  2 ++
 3 files changed, 64 insertions(+)

diff --git a/migration/savevm.c b/migration/savevm.c
index 6d6f8ee3e4..0f61da3ebb 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -78,6 +78,7 @@ enum qemu_vm_cmd {
                                       were previously sent during
                                       precopy but are dirty. */
     MIG_CMD_PACKAGED,          /* Send a wrapped stream within this stream */
+    MIG_CMD_RECV_BITMAP,       /* Request for recved bitmap on dst */
     MIG_CMD_MAX
 };
 
@@ -95,6 +96,7 @@ static struct mig_cmd_args {
     [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
                                    .len = -1, .name = "POSTCOPY_RAM_DISCARD" },
     [MIG_CMD_PACKAGED]         = { .len =  4, .name = "PACKAGED" },
+    [MIG_CMD_RECV_BITMAP]      = { .len = -1, .name = "RECV_BITMAP" },
     [MIG_CMD_MAX]              = { .len = -1, .name = "MAX" },
 };
 
@@ -953,6 +955,19 @@ void qemu_savevm_send_postcopy_run(QEMUFile *f)
     qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_RUN, 0, NULL);
 }
 
+void qemu_savevm_send_recv_bitmap(QEMUFile *f, char *block_name)
+{
+    size_t len;
+    char buf[256];
+
+    trace_savevm_send_recv_bitmap(block_name);
+
+    buf[0] = len = strlen(block_name);
+    memcpy(buf + 1, block_name, len);
+
+    qemu_savevm_command_send(f, MIG_CMD_RECV_BITMAP, len + 1, (uint8_t *)buf);
+}
+
 bool qemu_savevm_state_blocked(Error **errp)
 {
     SaveStateEntry *se;
@@ -1761,6 +1776,49 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState *mis)
 }
 
 /*
+ * Handle the request from the source for the recved_bitmap on the
+ * destination. Payload format:
+ *
+ * len (1 byte) + ramblock_name (<255 bytes)
+ */
+static int loadvm_handle_recv_bitmap(MigrationIncomingState *mis,
+                                     uint16_t len)
+{
+    QEMUFile *file = mis->from_src_file;
+    RAMBlock *rb;
+    char block_name[256];
+    size_t cnt;
+
+    cnt = qemu_get_counted_string(file, block_name);
+    if (!cnt) {
+        error_report("%s: failed to read block name", __func__);
+        return -EINVAL;
+    }
+
+    /* Validate before using the data */
+    if (qemu_file_get_error(file)) {
+        return qemu_file_get_error(file);
+    }
+
+    if (len != cnt + 1) {
+        error_report("%s: invalid payload length (%d)", __func__, len);
+        return -EINVAL;
+    }
+
+    rb = qemu_ram_block_by_name(block_name);
+    if (!rb) {
+        error_report("%s: block '%s' not found", __func__, block_name);
+        return -EINVAL;
+    }
+
+    /* TODO: send the bitmap back to source */
+
+    trace_loadvm_handle_recv_bitmap(block_name);
+
+    return 0;
+}
+
+/*
  * Process an incoming 'QEMU_VM_COMMAND'
  * 0           just a normal return
  * LOADVM_QUIT All good, but exit the loop
@@ -1833,6 +1891,9 @@ static int loadvm_process_command(QEMUFile *f)
 
     case MIG_CMD_POSTCOPY_RAM_DISCARD:
         return loadvm_postcopy_ram_handle_discard(mis, len);
+
+    case MIG_CMD_RECV_BITMAP:
+        return loadvm_handle_recv_bitmap(mis, len);
     }
 
     return 0;
diff --git a/migration/savevm.h b/migration/savevm.h
index 295c4a1f2c..8126b1cc14 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -46,6 +46,7 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len);
 void qemu_savevm_send_postcopy_advise(QEMUFile *f);
 void qemu_savevm_send_postcopy_listen(QEMUFile *f);
 void qemu_savevm_send_postcopy_run(QEMUFile *f);
+void qemu_savevm_send_recv_bitmap(QEMUFile *f, char *block_name);
 
 void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
                                            uint16_t len,
diff --git a/migration/trace-events b/migration/trace-events
index 32f02cbdcc..55c0412aaa 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -12,6 +12,7 @@ loadvm_state_cleanup(void) ""
 loadvm_handle_cmd_packaged(unsigned int length) "%u"
 loadvm_handle_cmd_packaged_main(int ret) "%d"
 loadvm_handle_cmd_packaged_received(int ret) "%d"
+loadvm_handle_recv_bitmap(char *s) "%s"
 loadvm_postcopy_handle_advise(void) ""
 loadvm_postcopy_handle_listen(void) ""
 loadvm_postcopy_handle_run(void) ""
@@ -34,6 +35,7 @@ savevm_send_open_return_path(void) ""
 savevm_send_ping(uint32_t val) "0x%x"
 savevm_send_postcopy_listen(void) ""
 savevm_send_postcopy_run(void) ""
+savevm_send_recv_bitmap(char *name) "%s"
 savevm_state_setup(void) ""
 savevm_state_header(void) ""
 savevm_state_iterate(void) ""
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread
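The MIG_CMD_RECV_BITMAP payload uses the counted-string format noted in the comment above (a length byte followed by the ramblock name, no NUL). A minimal Python sketch of the encode/decode pair; the function names are mine, but the format and the `len != cnt + 1` validation mirror the patch:

```python
import struct

def encode_recv_bitmap_payload(block_name: str) -> bytes:
    """len (1 byte) + ramblock_name (<= 255 bytes), as built by
    qemu_savevm_send_recv_bitmap() in the patch above."""
    name = block_name.encode("ascii")
    assert len(name) <= 255, "ramblock names must fit in one length byte"
    return struct.pack("B", len(name)) + name

def decode_recv_bitmap_payload(payload: bytes) -> str:
    """Mirror of the loadvm_handle_recv_bitmap() parsing: read the length
    byte, validate the total payload size, then extract the name."""
    n = payload[0]
    if len(payload) != n + 1:     # matches the 'len != cnt + 1' check
        raise ValueError("invalid payload length")
    return payload[1:1 + n].decode("ascii")
```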

* [Qemu-devel] [PATCH v4 16/32] migration: new message MIG_RP_MSG_RECV_BITMAP
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (14 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 15/32] migration: new cmd MIG_CMD_RECV_BITMAP Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-30 17:21   ` Dr. David Alan Gilbert
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 17/32] migration: new cmd MIG_CMD_POSTCOPY_RESUME Peter Xu
                   ` (16 subsequent siblings)
  32 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Introduce the new return path message MIG_RP_MSG_RECV_BITMAP to send the
received bitmap of a ramblock back to the source.

This is the reply message to MIG_CMD_RECV_BITMAP. It contains not only
the header (including the ramblock name), but is also appended with the
whole received bitmap of that ramblock on the destination side.

When the source receives such a reply message (MIG_RP_MSG_RECV_BITMAP),
it parses it and converts it to a dirty bitmap by inverting the bits.

One thing to mention is that, when we send the recv bitmap, we
additionally do the following:

- convert the bitmap to little endian, to support hosts using
  different endianness on src/dst.

- align it properly to 8 bytes, to support hosts using different word
  sizes (32/64 bits) on src/dst.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c  |  68 +++++++++++++++++++++++
 migration/migration.h  |   2 +
 migration/ram.c        | 144 +++++++++++++++++++++++++++++++++++++++++++++++++
 migration/ram.h        |   3 ++
 migration/savevm.c     |   2 +-
 migration/trace-events |   3 ++
 6 files changed, 221 insertions(+), 1 deletion(-)

diff --git a/migration/migration.c b/migration/migration.c
index 32c036fa82..5592975d33 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -93,6 +93,7 @@ enum mig_rp_message_type {
 
     MIG_RP_MSG_REQ_PAGES_ID, /* data (start: be64, len: be32, id: string) */
     MIG_RP_MSG_REQ_PAGES,    /* data (start: be64, len: be32) */
+    MIG_RP_MSG_RECV_BITMAP,  /* send recved_bitmap back to source */
 
     MIG_RP_MSG_MAX
 };
@@ -502,6 +503,45 @@ void migrate_send_rp_pong(MigrationIncomingState *mis,
     migrate_send_rp_message(mis, MIG_RP_MSG_PONG, sizeof(buf), &buf);
 }
 
+void migrate_send_rp_recv_bitmap(MigrationIncomingState *mis,
+                                 char *block_name)
+{
+    char buf[512];
+    int len;
+    int64_t res;
+
+    /*
+     * First, we send the header part. It contains only the len of
+     * idstr, and the idstr itself.
+     */
+    len = strlen(block_name);
+    buf[0] = len;
+    memcpy(buf + 1, block_name, len);
+
+    if (mis->state != MIGRATION_STATUS_POSTCOPY_RECOVER) {
+        error_report("%s: MIG_RP_MSG_RECV_BITMAP only used for recovery",
+                     __func__);
+        return;
+    }
+
+    migrate_send_rp_message(mis, MIG_RP_MSG_RECV_BITMAP, len + 1, buf);
+
+    /*
+     * Next, we dump the received bitmap to the stream.
+     *
+     * TODO: currently we are safe since we are the only one that is
+     * using the to_src_file handle (fault thread is still paused),
+     * and it's ok even not taking the mutex. However the best way is
+     * to take the lock before sending the message header, and release
+     * the lock after sending the bitmap.
+     */
+    qemu_mutex_lock(&mis->rp_mutex);
+    res = ramblock_recv_bitmap_send(mis->to_src_file, block_name);
+    qemu_mutex_unlock(&mis->rp_mutex);
+
+    trace_migrate_send_rp_recv_bitmap(block_name, res);
+}
+
 MigrationCapabilityStatusList *qmp_query_migrate_capabilities(Error **errp)
 {
     MigrationCapabilityStatusList *head = NULL;
@@ -1736,6 +1776,7 @@ static struct rp_cmd_args {
     [MIG_RP_MSG_PONG]           = { .len =  4, .name = "PONG" },
     [MIG_RP_MSG_REQ_PAGES]      = { .len = 12, .name = "REQ_PAGES" },
     [MIG_RP_MSG_REQ_PAGES_ID]   = { .len = -1, .name = "REQ_PAGES_ID" },
+    [MIG_RP_MSG_RECV_BITMAP]    = { .len = -1, .name = "RECV_BITMAP" },
     [MIG_RP_MSG_MAX]            = { .len = -1, .name = "MAX" },
 };
 
@@ -1780,6 +1821,19 @@ static bool postcopy_pause_return_path_thread(MigrationState *s)
     return true;
 }
 
+static int migrate_handle_rp_recv_bitmap(MigrationState *s, char *block_name)
+{
+    RAMBlock *block = qemu_ram_block_by_name(block_name);
+
+    if (!block) {
+        error_report("%s: invalid block name '%s'", __func__, block_name);
+        return -EINVAL;
+    }
+
+    /* Fetch the received bitmap and refresh the dirty bitmap */
+    return ram_dirty_bitmap_reload(s, block);
+}
+
 /*
  * Handles messages sent on the return path towards the source VM
  *
@@ -1885,6 +1939,20 @@ retry:
             migrate_handle_rp_req_pages(ms, (char *)&buf[13], start, len);
             break;
 
+        case MIG_RP_MSG_RECV_BITMAP:
+            if (header_len < 1) {
+                error_report("%s: missing block name", __func__);
+                mark_source_rp_bad(ms);
+                goto out;
+            }
+            /* Format: len (1B) + idstr (<255B). This ends the idstr. */
+            buf[buf[0] + 1] = '\0';
+            if (migrate_handle_rp_recv_bitmap(ms, (char *)(buf + 1))) {
+                mark_source_rp_bad(ms);
+                goto out;
+            }
+            break;
+
         default:
             break;
         }
diff --git a/migration/migration.h b/migration/migration.h
index d052669e1c..f879c93542 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -219,5 +219,7 @@ void migrate_send_rp_pong(MigrationIncomingState *mis,
                           uint32_t value);
 int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char* rbname,
                               ram_addr_t start, size_t len);
+void migrate_send_rp_recv_bitmap(MigrationIncomingState *mis,
+                                 char *block_name);
 
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index 960c726ff2..b30c669476 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -180,6 +180,70 @@ void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr,
                       nr);
 }
 
+#define  RAMBLOCK_RECV_BITMAP_ENDING  (0x0123456789abcdefULL)
+
+/*
+ * Format: bitmap_size (8 bytes) + whole_bitmap (N bytes).
+ *
+ * Returns >0 if success with sent bytes, or <0 if error.
+ */
+int64_t ramblock_recv_bitmap_send(QEMUFile *file,
+                                  const char *block_name)
+{
+    RAMBlock *block = qemu_ram_block_by_name(block_name);
+    unsigned long *le_bitmap, nbits;
+    uint64_t size;
+
+    if (!block) {
+        error_report("%s: invalid block name: %s", __func__, block_name);
+        return -1;
+    }
+
+    nbits = block->used_length >> TARGET_PAGE_BITS;
+
+    /*
+     * Make sure the tmp bitmap buffer is big enough, e.g., on 32bit
+     * machines we may need 4 more bytes for padding (see the comment
+     * below).  So extend it a bit beforehand.
+     */
+    le_bitmap = bitmap_new(nbits + BITS_PER_LONG);
+
+    /*
+     * Always use little endian when sending the bitmap, so that the
+     * migration can work even when the source and destination VMs
+     * differ in endianness.
+     */
+    bitmap_to_le(le_bitmap, block->receivedmap, nbits);
+
+    /* Size of the bitmap, in bytes */
+    size = nbits / 8;
+
+    /*
+     * size is always aligned to 8 bytes on 64bit machines, but that
+     * may not be true on 32bit machines.  We need this padding to
+     * make sure the migration can survive even between 32bit and
+     * 64bit machines.
+     */
+    size = ROUND_UP(size, 8);
+
+    qemu_put_be64(file, size);
+    qemu_put_buffer(file, (const uint8_t *)le_bitmap, size);
+    /*
+     * Mark the end, in case the middle part gets corrupted for some
+     * "mysterious" reason.
+     */
+    qemu_put_be64(file, RAMBLOCK_RECV_BITMAP_ENDING);
+    qemu_fflush(file);
+
+    g_free(le_bitmap);
+
+    if (qemu_file_get_error(file)) {
+        return qemu_file_get_error(file);
+    }
+
+    return size + sizeof(size);
+}
+
 /*
  * An outstanding page request, on the source, having been received
  * and queued
@@ -2985,6 +3049,86 @@ static bool ram_has_postcopy(void *opaque)
     return migrate_postcopy_ram();
 }
 
+/*
+ * Read the received bitmap, and invert it to form the initial dirty
+ * bitmap.  This is only used when the postcopy migration is paused
+ * but wants to resume from a middle point.
+ */
+int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *block)
+{
+    int ret = -EINVAL;
+    QEMUFile *file = s->rp_state.from_dst_file;
+    unsigned long *le_bitmap, nbits = block->used_length >> TARGET_PAGE_BITS;
+    uint64_t local_size = nbits / 8;
+    uint64_t size, end_mark;
+
+    trace_ram_dirty_bitmap_reload_begin(block->idstr);
+
+    if (s->state != MIGRATION_STATUS_POSTCOPY_RECOVER) {
+        error_report("%s: incorrect state %s", __func__,
+                     MigrationStatus_str(s->state));
+        return -EINVAL;
+    }
+
+    /*
+     * Note: see comments in ramblock_recv_bitmap_send() on why we
+     * need the endianness conversion, and the padding.
+     */
+    local_size = ROUND_UP(local_size, 8);
+
+    /* Add padding */
+    le_bitmap = bitmap_new(nbits + BITS_PER_LONG);
+
+    size = qemu_get_be64(file);
+
+    /* The size of the bitmap should match with our ramblock */
+    if (size != local_size) {
+        error_report("%s: ramblock '%s' bitmap size mismatch "
+                     "(0x%"PRIx64" != 0x%"PRIx64")", __func__,
+                     block->idstr, size, local_size);
+        ret = -EINVAL;
+        goto out;
+    }
+
+    size = qemu_get_buffer(file, (uint8_t *)le_bitmap, local_size);
+    end_mark = qemu_get_be64(file);
+
+    ret = qemu_file_get_error(file);
+    if (ret || size != local_size) {
+        error_report("%s: read bitmap failed for ramblock '%s': %d"
+                     " (size 0x%"PRIx64", got: 0x%"PRIx64")",
+                     __func__, block->idstr, ret, local_size, size);
+        ret = -EIO;
+        goto out;
+    }
+
+    if (end_mark != RAMBLOCK_RECV_BITMAP_ENDING) {
+        error_report("%s: ramblock '%s' end mark incorrect: 0x%"PRIu64,
+                     __func__, block->idstr, end_mark);
+        ret = -EINVAL;
+        goto out;
+    }
+
+    /*
+     * Endianness conversion.  We are in postcopy (though paused), so
+     * the dirty bitmap won't change; we can modify it directly.
+     */
+    bitmap_from_le(block->bmap, le_bitmap, nbits);
+
+    /*
+     * What we received is the "received bitmap".  Invert it to form
+     * the initial dirty bitmap for this ramblock.
+     */
+    bitmap_complement(block->bmap, block->bmap, nbits);
+
+    trace_ram_dirty_bitmap_reload_complete(block->idstr);
+
+    ret = 0;
+out:
+    free(le_bitmap);
+    return ret;
+}
+
 static SaveVMHandlers savevm_ram_handlers = {
     .save_setup = ram_save_setup,
     .save_live_iterate = ram_save_iterate,
diff --git a/migration/ram.h b/migration/ram.h
index 64d81e9f1d..10a459cc89 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -61,5 +61,8 @@ void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
 int ramblock_recv_bitmap_test(RAMBlock *rb, void *host_addr);
 void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
 void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr, size_t nr);
+int64_t ramblock_recv_bitmap_send(QEMUFile *file,
+                                  const char *block_name);
+int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *rb);
 
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 0f61da3ebb..2148b198c7 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1811,7 +1811,7 @@ static int loadvm_handle_recv_bitmap(MigrationIncomingState *mis,
         return -EINVAL;
     }
 
-    /* TODO: send the bitmap back to source */
+    migrate_send_rp_recv_bitmap(mis, block_name);
 
     trace_loadvm_handle_recv_bitmap(block_name);
 
diff --git a/migration/trace-events b/migration/trace-events
index 55c0412aaa..3dcf8a93d9 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -79,6 +79,8 @@ ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
 ram_postcopy_send_discard_bitmap(void) ""
 ram_save_page(const char *rbname, uint64_t offset, void *host) "%s: offset: 0x%" PRIx64 " host: %p"
 ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: 0x%zx len: 0x%zx"
+ram_dirty_bitmap_reload_begin(char *str) "%s"
+ram_dirty_bitmap_reload_complete(char *str) "%s"
 
 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
@@ -90,6 +92,7 @@ migrate_fd_cancel(void) ""
 migrate_handle_rp_req_pages(const char *rbname, size_t start, size_t len) "in %s at 0x%zx len 0x%zx"
 migrate_pending(uint64_t size, uint64_t max, uint64_t post, uint64_t nonpost) "pending size %" PRIu64 " max %" PRIu64 " (post=%" PRIu64 " nonpost=%" PRIu64 ")"
 migrate_send_rp_message(int msg_type, uint16_t len) "%d: len %d"
+migrate_send_rp_recv_bitmap(char *name, int64_t size) "block '%s' size 0x%"PRIx64
 migration_completion_file_err(void) ""
 migration_completion_postcopy_end(void) ""
 migration_completion_postcopy_end_after_complete(void) ""
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 17/32] migration: new cmd MIG_CMD_POSTCOPY_RESUME
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (15 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 16/32] migration: new message MIG_RP_MSG_RECV_BITMAP Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 18/32] migration: new message MIG_RP_MSG_RESUME_ACK Peter Xu
                   ` (15 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Introduce this new command, to be sent when the source VM is ready to
resume the paused migration.  On receiving it, the destination releases
the fault thread so that it can continue servicing page faults.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/savevm.c     | 35 +++++++++++++++++++++++++++++++++++
 migration/savevm.h     |  1 +
 migration/trace-events |  2 ++
 3 files changed, 38 insertions(+)

diff --git a/migration/savevm.c b/migration/savevm.c
index 2148b198c7..bb6639812b 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -77,6 +77,7 @@ enum qemu_vm_cmd {
     MIG_CMD_POSTCOPY_RAM_DISCARD,  /* A list of pages to discard that
                                       were previously sent during
                                       precopy but are dirty. */
+    MIG_CMD_POSTCOPY_RESUME,       /* resume postcopy on dest */
     MIG_CMD_PACKAGED,          /* Send a wrapped stream within this stream */
     MIG_CMD_RECV_BITMAP,       /* Request for recved bitmap on dst */
     MIG_CMD_MAX
@@ -95,6 +96,7 @@ static struct mig_cmd_args {
     [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
     [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
                                    .len = -1, .name = "POSTCOPY_RAM_DISCARD" },
+    [MIG_CMD_POSTCOPY_RESUME]  = { .len =  0, .name = "POSTCOPY_RESUME" },
     [MIG_CMD_PACKAGED]         = { .len =  4, .name = "PACKAGED" },
     [MIG_CMD_RECV_BITMAP]      = { .len = -1, .name = "RECV_BITMAP" },
     [MIG_CMD_MAX]              = { .len = -1, .name = "MAX" },
@@ -955,6 +957,12 @@ void qemu_savevm_send_postcopy_run(QEMUFile *f)
     qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_RUN, 0, NULL);
 }
 
+void qemu_savevm_send_postcopy_resume(QEMUFile *f)
+{
+    trace_savevm_send_postcopy_resume();
+    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_RESUME, 0, NULL);
+}
+
 void qemu_savevm_send_recv_bitmap(QEMUFile *f, char *block_name)
 {
     size_t len;
@@ -1727,6 +1735,30 @@ static int loadvm_postcopy_handle_run(MigrationIncomingState *mis)
     return LOADVM_QUIT;
 }
 
+static int loadvm_postcopy_handle_resume(MigrationIncomingState *mis)
+{
+    if (mis->state != MIGRATION_STATUS_POSTCOPY_RECOVER) {
+        error_report("%s: illegal resume received", __func__);
+        /* Don't fail the whole load for just this message. */
+        return 0;
+    }
+
+    /*
+     * This means the source VM is ready to resume the postcopy
+     * migration.  It's time to switch state and release the fault
+     * thread so it can continue servicing page faults.
+     */
+    migrate_set_state(&mis->state, MIGRATION_STATUS_POSTCOPY_RECOVER,
+                      MIGRATION_STATUS_POSTCOPY_ACTIVE);
+    qemu_sem_post(&mis->postcopy_pause_sem_fault);
+
+    trace_loadvm_postcopy_handle_resume();
+
+    /* TODO: Tell source that "we are ready" */
+
+    return 0;
+}
+
 /**
  * Immediately following this command is a blob of data containing an embedded
  * chunk of migration stream; read it and load it.
@@ -1892,6 +1924,9 @@ static int loadvm_process_command(QEMUFile *f)
     case MIG_CMD_POSTCOPY_RAM_DISCARD:
         return loadvm_postcopy_ram_handle_discard(mis, len);
 
+    case MIG_CMD_POSTCOPY_RESUME:
+        return loadvm_postcopy_handle_resume(mis);
+
     case MIG_CMD_RECV_BITMAP:
         return loadvm_handle_recv_bitmap(mis, len);
     }
diff --git a/migration/savevm.h b/migration/savevm.h
index 8126b1cc14..a5f3879191 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -46,6 +46,7 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len);
 void qemu_savevm_send_postcopy_advise(QEMUFile *f);
 void qemu_savevm_send_postcopy_listen(QEMUFile *f);
 void qemu_savevm_send_postcopy_run(QEMUFile *f);
+void qemu_savevm_send_postcopy_resume(QEMUFile *f);
 void qemu_savevm_send_recv_bitmap(QEMUFile *f, char *block_name);
 
 void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
diff --git a/migration/trace-events b/migration/trace-events
index 3dcf8a93d9..4b60865194 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -18,6 +18,7 @@ loadvm_postcopy_handle_listen(void) ""
 loadvm_postcopy_handle_run(void) ""
 loadvm_postcopy_handle_run_cpu_sync(void) ""
 loadvm_postcopy_handle_run_vmstart(void) ""
+loadvm_postcopy_handle_resume(void) ""
 loadvm_postcopy_ram_handle_discard(void) ""
 loadvm_postcopy_ram_handle_discard_end(void) ""
 loadvm_postcopy_ram_handle_discard_header(const char *ramid, uint16_t len) "%s: %ud"
@@ -35,6 +36,7 @@ savevm_send_open_return_path(void) ""
 savevm_send_ping(uint32_t val) "0x%x"
 savevm_send_postcopy_listen(void) ""
 savevm_send_postcopy_run(void) ""
+savevm_send_postcopy_resume(void) ""
 savevm_send_recv_bitmap(char *name) "%s"
 savevm_state_setup(void) ""
 savevm_state_header(void) ""
-- 
2.13.6


* [Qemu-devel] [PATCH v4 18/32] migration: new message MIG_RP_MSG_RESUME_ACK
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (16 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 17/32] migration: new cmd MIG_CMD_POSTCOPY_RESUME Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 19/32] migration: introduce SaveVMHandlers.resume_prepare Peter Xu
                   ` (14 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Create a new message to reply to MIG_CMD_POSTCOPY_RESUME.  One uint32_t
is used as payload to let the source know whether the destination is
ready to continue the migration.
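
On the wire, the ack is a single uint32_t in big-endian byte order
(cpu_to_be32() when sending, ldl_be_p() when reading back).  A minimal
sketch of that round trip; store_be32(), load_be32() and
handle_resume_ack() are hypothetical stand-ins for the QEMU helpers,
not the real API:

```c
#include <stdint.h>

#define MIGRATION_RESUME_ACK_VALUE 1

/* Hypothetical stand-ins for QEMU's cpu_to_be32()/ldl_be_p(): encode
 * and decode a 32-bit value in big-endian byte order, regardless of
 * host endianness. */
void store_be32(uint8_t *buf, uint32_t value)
{
    buf[0] = (uint8_t)(value >> 24);
    buf[1] = (uint8_t)(value >> 16);
    buf[2] = (uint8_t)(value >> 8);
    buf[3] = (uint8_t)value;
}

uint32_t load_be32(const uint8_t *buf)
{
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8) | (uint32_t)buf[3];
}

/* Mirror of the check in migrate_handle_rp_resume_ack(): only the
 * expected ack value counts as success. */
int handle_resume_ack(uint32_t value)
{
    return value == MIGRATION_RESUME_ACK_VALUE ? 0 : -1;
}
```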

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c  | 37 +++++++++++++++++++++++++++++++++++++
 migration/migration.h  |  3 +++
 migration/savevm.c     |  3 ++-
 migration/trace-events |  1 +
 4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/migration/migration.c b/migration/migration.c
index 5592975d33..308adae4d3 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -94,6 +94,7 @@ enum mig_rp_message_type {
     MIG_RP_MSG_REQ_PAGES_ID, /* data (start: be64, len: be32, id: string) */
     MIG_RP_MSG_REQ_PAGES,    /* data (start: be64, len: be32) */
     MIG_RP_MSG_RECV_BITMAP,  /* send recved_bitmap back to source */
+    MIG_RP_MSG_RESUME_ACK,   /* tell source that we are ready to resume */
 
     MIG_RP_MSG_MAX
 };
@@ -542,6 +543,14 @@ void migrate_send_rp_recv_bitmap(MigrationIncomingState *mis,
     trace_migrate_send_rp_recv_bitmap(block_name, res);
 }
 
+void migrate_send_rp_resume_ack(MigrationIncomingState *mis, uint32_t value)
+{
+    uint32_t buf;
+
+    buf = cpu_to_be32(value);
+    migrate_send_rp_message(mis, MIG_RP_MSG_RESUME_ACK, sizeof(buf), &buf);
+}
+
 MigrationCapabilityStatusList *qmp_query_migrate_capabilities(Error **errp)
 {
     MigrationCapabilityStatusList *head = NULL;
@@ -1777,6 +1786,7 @@ static struct rp_cmd_args {
     [MIG_RP_MSG_REQ_PAGES]      = { .len = 12, .name = "REQ_PAGES" },
     [MIG_RP_MSG_REQ_PAGES_ID]   = { .len = -1, .name = "REQ_PAGES_ID" },
     [MIG_RP_MSG_RECV_BITMAP]    = { .len = -1, .name = "RECV_BITMAP" },
+    [MIG_RP_MSG_RESUME_ACK]     = { .len =  4, .name = "RESUME_ACK" },
     [MIG_RP_MSG_MAX]            = { .len = -1, .name = "MAX" },
 };
 
@@ -1834,6 +1844,25 @@ static int migrate_handle_rp_recv_bitmap(MigrationState *s, char *block_name)
     return ram_dirty_bitmap_reload(s, block);
 }
 
+static int migrate_handle_rp_resume_ack(MigrationState *s, uint32_t value)
+{
+    trace_source_return_path_thread_resume_ack(value);
+
+    if (value != MIGRATION_RESUME_ACK_VALUE) {
+        error_report("%s: illegal resume_ack value %"PRIu32,
+                     __func__, value);
+        return -1;
+    }
+
+    /* Now both sides are active. */
+    migrate_set_state(&s->state, MIGRATION_STATUS_POSTCOPY_RECOVER,
+                      MIGRATION_STATUS_POSTCOPY_ACTIVE);
+
+    /* TODO: notify send thread that time to continue send pages */
+
+    return 0;
+}
+
 /*
  * Handles messages sent on the return path towards the source VM
  *
@@ -1953,6 +1982,14 @@ retry:
             }
             break;
 
+        case MIG_RP_MSG_RESUME_ACK:
+            tmp32 = ldl_be_p(buf);
+            if (migrate_handle_rp_resume_ack(ms, tmp32)) {
+                mark_source_rp_bad(ms);
+                goto out;
+            }
+            break;
+
         default:
             break;
         }
diff --git a/migration/migration.h b/migration/migration.h
index f879c93542..11fbfebba1 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -22,6 +22,8 @@
 #include "hw/qdev.h"
 #include "io/channel.h"
 
+#define  MIGRATION_RESUME_ACK_VALUE  (1)
+
 /* State for the incoming migration */
 struct MigrationIncomingState {
     QEMUFile *from_src_file;
@@ -221,5 +223,6 @@ int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char* rbname,
                               ram_addr_t start, size_t len);
 void migrate_send_rp_recv_bitmap(MigrationIncomingState *mis,
                                  char *block_name);
+void migrate_send_rp_resume_ack(MigrationIncomingState *mis, uint32_t value);
 
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index bb6639812b..611b3f1a09 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1754,7 +1754,8 @@ static int loadvm_postcopy_handle_resume(MigrationIncomingState *mis)
 
     trace_loadvm_postcopy_handle_resume();
 
-    /* TODO: Tell source that "we are ready" */
+    /* Tell source that "we are ready" */
+    migrate_send_rp_resume_ack(mis, MIGRATION_RESUME_ACK_VALUE);
 
     return 0;
 }
diff --git a/migration/trace-events b/migration/trace-events
index 4b60865194..2bf8301293 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -120,6 +120,7 @@ source_return_path_thread_entry(void) ""
 source_return_path_thread_loop_top(void) ""
 source_return_path_thread_pong(uint32_t val) "0x%x"
 source_return_path_thread_shut(uint32_t val) "0x%x"
+source_return_path_thread_resume_ack(uint32_t v) "%"PRIu32
 migrate_global_state_post_load(const char *state) "loaded state: %s"
 migrate_global_state_pre_save(const char *state) "saved state: %s"
 migration_thread_low_pending(uint64_t pending) "%" PRIu64
-- 
2.13.6


* [Qemu-devel] [PATCH v4 19/32] migration: introduce SaveVMHandlers.resume_prepare
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (17 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 18/32] migration: new message MIG_RP_MSG_RESUME_ACK Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 20/32] migration: synchronize dirty bitmap for resume Peter Xu
                   ` (13 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

This is a hook function, called when a postcopy migration wants to
resume from a failure.  Each module should provide its own recovery
logic before we switch to the postcopy-active state.
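
The hook walk added in this patch boils down to: skip handlers that do
not supply the hook, and abort on the first failure.  A simplified,
hypothetical sketch of that pattern; Handler and run_resume_prepare()
are illustrative names, not QEMU types:

```c
#include <stddef.h>

/* Hypothetical, simplified model of the SaveVMHandlers walk in
 * qemu_savevm_state_resume_prepare(): a NULL hook is skipped, and the
 * first failing hook aborts the whole resume. */
typedef struct Handler {
    int (*resume_prepare)(void *opaque);  /* optional; may be NULL */
    void *opaque;
} Handler;

int run_resume_prepare(const Handler *handlers, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!handlers[i].resume_prepare) {
            continue;  /* module has no recovery logic to run */
        }
        int ret = handlers[i].resume_prepare(handlers[i].opaque);
        if (ret < 0) {
            return ret;  /* first failure stops the resume */
        }
    }
    return 0;
}

/* Sample hooks for demonstration. */
int ok_hook(void *opaque)   { (void)opaque; return 0; }
int fail_hook(void *opaque) { (void)opaque; return -1; }
```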

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/migration/register.h |  2 ++
 migration/migration.c        | 20 +++++++++++++++++++-
 migration/savevm.c           | 25 +++++++++++++++++++++++++
 migration/savevm.h           |  1 +
 migration/trace-events       |  1 +
 5 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/include/migration/register.h b/include/migration/register.h
index f4f7bdc177..128124f008 100644
--- a/include/migration/register.h
+++ b/include/migration/register.h
@@ -42,6 +42,8 @@ typedef struct SaveVMHandlers {
     LoadStateHandler *load_state;
     int (*load_setup)(QEMUFile *f, void *opaque);
     int (*load_cleanup)(void *opaque);
+    /* Called when postcopy migration wants to resume from failure */
+    int (*resume_prepare)(MigrationState *s, void *opaque);
 } SaveVMHandlers;
 
 int register_savevm_live(DeviceState *dev,
diff --git a/migration/migration.c b/migration/migration.c
index 308adae4d3..4dc34ed8ce 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2392,7 +2392,25 @@ typedef enum MigThrError {
 /* Return zero if success, or <0 for error */
 static int postcopy_do_resume(MigrationState *s)
 {
-    /* TODO: do the resume logic */
+    int ret;
+
+    /*
+     * Call all the resume_prepare() hooks, so that modules can be
+     * ready for the migration resume.
+     */
+    ret = qemu_savevm_state_resume_prepare(s);
+    if (ret) {
+        error_report("%s: resume_prepare() failure detected: %d",
+                     __func__, ret);
+        return ret;
+    }
+
+    /*
+     * TODO: handshake with dest using MIG_CMD_RESUME,
+     * MIG_RP_MSG_RESUME_ACK, then switch source state to
+     * "postcopy-active"
+     */
+
     return 0;
 }
 
diff --git a/migration/savevm.c b/migration/savevm.c
index 611b3f1a09..bc87b0e5b1 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1028,6 +1028,31 @@ void qemu_savevm_state_setup(QEMUFile *f)
     }
 }
 
+int qemu_savevm_state_resume_prepare(MigrationState *s)
+{
+    SaveStateEntry *se;
+    int ret;
+
+    trace_savevm_state_resume_prepare();
+
+    QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
+        if (!se->ops || !se->ops->resume_prepare) {
+            continue;
+        }
+        if (se->ops && se->ops->is_active) {
+            if (!se->ops->is_active(se->opaque)) {
+                continue;
+            }
+        }
+        ret = se->ops->resume_prepare(s, se->opaque);
+        if (ret < 0) {
+            return ret;
+        }
+    }
+
+    return 0;
+}
+
 /*
  * this function has three return values:
  *   negative: there was one error, and we have -errno.
diff --git a/migration/savevm.h b/migration/savevm.h
index a5f3879191..3193f04cca 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -31,6 +31,7 @@
 
 bool qemu_savevm_state_blocked(Error **errp);
 void qemu_savevm_state_setup(QEMUFile *f);
+int qemu_savevm_state_resume_prepare(MigrationState *s);
 void qemu_savevm_state_header(QEMUFile *f);
 int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy);
 void qemu_savevm_state_cleanup(void);
diff --git a/migration/trace-events b/migration/trace-events
index 2bf8301293..eadabf03e8 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -39,6 +39,7 @@ savevm_send_postcopy_run(void) ""
 savevm_send_postcopy_resume(void) ""
 savevm_send_recv_bitmap(char *name) "%s"
 savevm_state_setup(void) ""
+savevm_state_resume_prepare(void) ""
 savevm_state_header(void) ""
 savevm_state_iterate(void) ""
 savevm_state_cleanup(void) ""
-- 
2.13.6


* [Qemu-devel] [PATCH v4 20/32] migration: synchronize dirty bitmap for resume
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (18 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 19/32] migration: introduce SaveVMHandlers.resume_prepare Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-30 18:40   ` Dr. David Alan Gilbert
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 21/32] migration: setup ramstate " Peter Xu
                   ` (12 subsequent siblings)
  32 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

This patch implements the first part of the core RAM resume logic for
postcopy; ram_resume_prepare() is provided for the work.

When the migration is interrupted by a network failure, the dirty
bitmap on the source side becomes meaningless: even if a dirty bit is
cleared, the sent page may still have been lost on the way to the
destination.  So instead of continuing the migration with the old
dirty bitmap on the source, we ask the destination side to send back
its received bitmap, then invert it to be our initial dirty bitmap.

The source side send thread will issue MIG_CMD_RECV_BITMAP requests,
once per ramblock, to ask for the received bitmap.  On the destination
side, MIG_RP_MSG_RECV_BITMAP will be issued, along with the requested
bitmap.  Data will be received on the return-path thread of the
source, and the main migration thread will be notified when all the
ramblock bitmaps are synchronized.
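
Two pieces of arithmetic drive this sync: the bitmap's size on the
wire is rounded up to 8 bytes so 32-bit and 64-bit hosts agree on the
stream layout, and the received bitmap is inverted to become the new
dirty bitmap (a page not yet received must be re-sent).  A sketch of
both, assuming 4K target pages; recv_bitmap_wire_size() and
invert_bitmap() are illustrative stand-ins for the logic in
ramblock_recv_bitmap_send() and bitmap_complement():

```c
#include <stdint.h>

/* Assumption: 4K target pages, as on most QEMU targets. */
#define TARGET_PAGE_BITS 12

/* One bit per target page, expressed in bytes, rounded up to 8 so
 * 32-bit and 64-bit hosts agree on the stream layout. */
uint64_t recv_bitmap_wire_size(uint64_t used_length)
{
    uint64_t nbits = used_length >> TARGET_PAGE_BITS;
    uint64_t size = nbits / 8;       /* bytes */
    return (size + 7) & ~7ULL;       /* ROUND_UP(size, 8) */
}

/* Invert received bits into dirty bits, word by word, as
 * bitmap_complement() does for the whole block: a page that was not
 * received must be treated as dirty and re-sent. */
void invert_bitmap(unsigned long *bmap, uint64_t nwords)
{
    for (uint64_t i = 0; i < nwords; i++) {
        bmap[i] = ~bmap[i];
    }
}
```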

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c  |  3 +++
 migration/migration.h  |  1 +
 migration/ram.c        | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 migration/trace-events |  4 ++++
 4 files changed, 55 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index 4dc34ed8ce..5b1fbe5b98 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2843,6 +2843,7 @@ static void migration_instance_finalize(Object *obj)
     g_free(params->tls_hostname);
     g_free(params->tls_creds);
     qemu_sem_destroy(&ms->pause_sem);
+    qemu_sem_destroy(&ms->rp_state.rp_sem);
 }
 
 static void migration_instance_init(Object *obj)
@@ -2871,6 +2872,8 @@ static void migration_instance_init(Object *obj)
     params->has_x_multifd_channels = true;
     params->has_x_multifd_page_count = true;
     params->has_xbzrle_cache_size = true;
+
+    qemu_sem_init(&ms->rp_state.rp_sem, 0);
 }
 
 /*
diff --git a/migration/migration.h b/migration/migration.h
index 11fbfebba1..82dd7d9820 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -108,6 +108,7 @@ struct MigrationState
         QEMUFile     *from_dst_file;
         QemuThread    rp_thread;
         bool          error;
+        QemuSemaphore rp_sem;
     } rp_state;
 
     double mbps;
diff --git a/migration/ram.c b/migration/ram.c
index b30c669476..49627ca9fc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -49,6 +49,7 @@
 #include "qemu/rcu_queue.h"
 #include "migration/colo.h"
 #include "migration/block.h"
+#include "savevm.h"
 
 /***********************************************************/
 /* ram save/restore */
@@ -3049,6 +3050,38 @@ static bool ram_has_postcopy(void *opaque)
     return migrate_postcopy_ram();
 }
 
+/* Sync all the dirty bitmaps with the destination VM.  */
+static int ram_dirty_bitmap_sync_all(MigrationState *s, RAMState *rs)
+{
+    RAMBlock *block;
+    QEMUFile *file = s->to_dst_file;
+    int ramblock_count = 0;
+
+    trace_ram_dirty_bitmap_sync_start();
+
+    RAMBLOCK_FOREACH(block) {
+        qemu_savevm_send_recv_bitmap(file, block->idstr);
+        trace_ram_dirty_bitmap_request(block->idstr);
+        ramblock_count++;
+    }
+
+    trace_ram_dirty_bitmap_sync_wait();
+
+    /* Wait until all the ramblocks' dirty bitmaps are synced */
+    while (ramblock_count--) {
+        qemu_sem_wait(&s->rp_state.rp_sem);
+    }
+
+    trace_ram_dirty_bitmap_sync_complete();
+
+    return 0;
+}
+
+static void ram_dirty_bitmap_reload_notify(MigrationState *s)
+{
+    qemu_sem_post(&s->rp_state.rp_sem);
+}
+
 /*
  * Read the received bitmap, revert it as the initial dirty bitmap.
  * This is only used when the postcopy migration is paused but wants
@@ -3123,12 +3156,25 @@ int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *block)
 
     trace_ram_dirty_bitmap_reload_complete(block->idstr);
 
+    /*
+     * We succeeded in syncing the bitmap for the current ramblock.
+     * If this is the last one, we need to notify the main send thread.
+     */
+    ram_dirty_bitmap_reload_notify(s);
+
     ret = 0;
 out:
     free(le_bitmap);
     return ret;
 }
 
+static int ram_resume_prepare(MigrationState *s, void *opaque)
+{
+    RAMState *rs = *(RAMState **)opaque;
+
+    return ram_dirty_bitmap_sync_all(s, rs);
+}
+
 static SaveVMHandlers savevm_ram_handlers = {
     .save_setup = ram_save_setup,
     .save_live_iterate = ram_save_iterate,
@@ -3140,6 +3186,7 @@ static SaveVMHandlers savevm_ram_handlers = {
     .save_cleanup = ram_save_cleanup,
     .load_setup = ram_load_setup,
     .load_cleanup = ram_load_cleanup,
+    .resume_prepare = ram_resume_prepare,
 };
 
 void ram_mig_init(void)
diff --git a/migration/trace-events b/migration/trace-events
index eadabf03e8..804f18d492 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -82,8 +82,12 @@ ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
 ram_postcopy_send_discard_bitmap(void) ""
 ram_save_page(const char *rbname, uint64_t offset, void *host) "%s: offset: 0x%" PRIx64 " host: %p"
 ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: 0x%zx len: 0x%zx"
+ram_dirty_bitmap_request(char *str) "%s"
 ram_dirty_bitmap_reload_begin(char *str) "%s"
 ram_dirty_bitmap_reload_complete(char *str) "%s"
+ram_dirty_bitmap_sync_start(void) ""
+ram_dirty_bitmap_sync_wait(void) ""
+ram_dirty_bitmap_sync_complete(void) ""
 
 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
-- 
2.13.6


* [Qemu-devel] [PATCH v4 21/32] migration: setup ramstate for resume
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (19 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 20/32] migration: synchronize dirty bitmap for resume Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 22/32] migration: final handshake for the resume Peter Xu
                   ` (11 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

After updating the dirty bitmaps of the ramblocks, we also need to
update the critical fields in RAMState to make sure it is ready for a
resume.
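
Recalculating migration_dirty_pages is just a population count over
each block's bitmap.  A hypothetical stand-in for bitmap_count_one(),
shown here only to illustrate the arithmetic:

```c
#include <stdint.h>
#include <limits.h>

/* Hypothetical stand-in for bitmap_count_one(): count the set bits in
 * the first nbits of a bitmap.  ram_state_resume_prepare() uses this
 * count to recalculate migration_dirty_pages after the reload. */
uint64_t bitmap_count_set(const unsigned long *bmap, uint64_t nbits)
{
    const unsigned bits_per_word = sizeof(unsigned long) * CHAR_BIT;
    uint64_t count = 0;

    for (uint64_t i = 0; i < nbits; i++) {
        if (bmap[i / bits_per_word] & (1UL << (i % bits_per_word))) {
            count++;
        }
    }
    return count;
}
```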

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.c        | 37 ++++++++++++++++++++++++++++++++++++-
 migration/trace-events |  1 +
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 49627ca9fc..ff201f0922 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2250,6 +2250,33 @@ static int ram_init_all(RAMState **rsp)
     return 0;
 }
 
+static void ram_state_resume_prepare(RAMState *rs)
+{
+    RAMBlock *block;
+    long pages = 0;
+
+    /*
+     * Postcopy is not using xbzrle/compression, so no need for that.
+     * Also, since the source is already halted, we don't need to
+     * care about dirty page logging either.
+     */
+
+    RAMBLOCK_FOREACH(block) {
+        pages += bitmap_count_one(block->bmap,
+                                  block->used_length >> TARGET_PAGE_BITS);
+    }
+
+    /* This may not be aligned with current bitmaps. Recalculate. */
+    rs->migration_dirty_pages = pages;
+
+    rs->last_seen_block = NULL;
+    rs->last_sent_block = NULL;
+    rs->last_page = 0;
+    rs->last_version = ram_list.version;
+
+    trace_ram_state_resume_prepare(pages);
+}
+
 /*
  * Each of ram_save_setup, ram_save_iterate and ram_save_complete has
  * long-running RCU critical section.  When rcu-reclaims in the code
@@ -3171,8 +3198,16 @@ out:
 static int ram_resume_prepare(MigrationState *s, void *opaque)
 {
     RAMState *rs = *(RAMState **)opaque;
+    int ret;
 
-    return ram_dirty_bitmap_sync_all(s, rs);
+    ret = ram_dirty_bitmap_sync_all(s, rs);
+    if (ret) {
+        return ret;
+    }
+
+    ram_state_resume_prepare(rs);
+
+    return 0;
 }
 
 static SaveVMHandlers savevm_ram_handlers = {
diff --git a/migration/trace-events b/migration/trace-events
index 804f18d492..98c2e4de58 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -88,6 +88,7 @@ ram_dirty_bitmap_reload_complete(char *str) "%s"
 ram_dirty_bitmap_sync_start(void) ""
 ram_dirty_bitmap_sync_wait(void) ""
 ram_dirty_bitmap_sync_complete(void) ""
+ram_state_resume_prepare(long v) "%ld"
 
 # migration/migration.c
 await_return_path_close_on_source_close(void) ""
-- 
2.13.6


* [Qemu-devel] [PATCH v4 22/32] migration: final handshake for the resume
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (20 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 21/32] migration: setup ramstate " Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 23/32] migration: free SocketAddress where allocated Peter Xu
                   ` (10 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Finish the last step to do the final handshake for the recovery.

First, the source sends a MIG_CMD_RESUME to the destination, telling it
that the source is ready to resume.

Then the destination replies with MIG_RP_MSG_RESUME_ACK to the source,
telling it that the destination is ready to resume (after switching to
the postcopy-active state).

When the source receives the RESUME_ACK, it switches its state to
postcopy-active, and the recovery is finally complete.
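The source-side decision after the handshake can be sketched as a small pure function. This is a simplified model (the enum names are stand-ins, not QEMU's MIGRATION_STATUS_* values): in the real code, postcopy_resume_handshake() blocks on rp_state.rp_sem until the return-path thread, on receiving RESUME_ACK, moves the state out of RECOVER; here we jump straight to the state that thread left us in.

```c
#include <assert.h>

/* Hypothetical stand-ins for the migration states involved. */
enum mig_state {
    STATE_POSTCOPY_RECOVER,
    STATE_POSTCOPY_ACTIVE,
    STATE_FAILED,
};

/* Mirrors the tail of postcopy_resume_handshake(): once we are no
 * longer in RECOVER, only landing in POSTCOPY_ACTIVE counts as a
 * successful resume; anything else is an error. */
static int resume_handshake_result(enum mig_state final_state)
{
    if (final_state == STATE_POSTCOPY_ACTIVE) {
        return 0;
    }
    return -1;
}
```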

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 5b1fbe5b98..189d5d2d42 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1858,7 +1858,8 @@ static int migrate_handle_rp_resume_ack(MigrationState *s, uint32_t value)
     migrate_set_state(&s->state, MIGRATION_STATUS_POSTCOPY_RECOVER,
                       MIGRATION_STATUS_POSTCOPY_ACTIVE);
 
-    /* TODO: notify send thread that time to continue send pages */
+    /* Notify the send thread that it is time to continue sending pages */
+    qemu_sem_post(&s->rp_state.rp_sem);
 
     return 0;
 }
@@ -2389,6 +2390,21 @@ typedef enum MigThrError {
     MIG_THR_ERR_FATAL = 2,
 } MigThrError;
 
+static int postcopy_resume_handshake(MigrationState *s)
+{
+    qemu_savevm_send_postcopy_resume(s->to_dst_file);
+
+    while (s->state == MIGRATION_STATUS_POSTCOPY_RECOVER) {
+        qemu_sem_wait(&s->rp_state.rp_sem);
+    }
+
+    if (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE) {
+        return 0;
+    }
+
+    return -1;
+}
+
 /* Return zero if success, or <0 for error */
 static int postcopy_do_resume(MigrationState *s)
 {
@@ -2406,10 +2422,14 @@ static int postcopy_do_resume(MigrationState *s)
     }
 
     /*
-     * TODO: handshake with dest using MIG_CMD_RESUME,
-     * MIG_RP_MSG_RESUME_ACK, then switch source state to
-     * "postcopy-active"
+     * Last handshake with destination on the resume (destination will
+     * switch to postcopy-active afterwards)
      */
+    ret = postcopy_resume_handshake(s);
+    if (ret) {
+        error_report("%s: handshake failed: %d", __func__, ret);
+        return ret;
+    }
 
     return 0;
 }
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 23/32] migration: free SocketAddress where allocated
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (21 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 22/32] migration: final handshake for the resume Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 24/32] migration: return incoming task tag for sockets Peter Xu
                   ` (9 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Freeing the SocketAddress struct in socket_start_incoming_migration is
slightly confusing. Let's free the address in the same context where we
allocated it.
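The ownership pattern the patch moves to — the helper only borrows the address, and the function that allocated it frees it on every path — can be sketched as below. The names here are illustrative, not QEMU's API; the point is that the free sits next to the allocation, so no path leaks and no callee double-frees.

```c
#include <stdlib.h>
#include <string.h>

/* Allocates: analogous to tcp_build_address()/unix_build_address(). */
static char *build_address(const char *spec)
{
    if (!spec) {
        return NULL;
    }
    size_t n = strlen(spec) + 1;
    char *p = malloc(n);
    if (p) {
        memcpy(p, spec, n);
    }
    return p;
}

/* Borrows only: analogous to socket_start_incoming_migration(), which
 * after the patch no longer frees the address it is handed. */
static int start_listening(const char *addr)
{
    return (addr && addr[0]) ? 0 : -1;
}

static int incoming_migration(const char *spec)
{
    char *addr = build_address(spec);
    int ret = start_listening(addr);
    free(addr);  /* freed in the same context that allocated it */
    return ret;
}
```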

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/socket.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/migration/socket.c b/migration/socket.c
index dee869044a..4879f11e0f 100644
--- a/migration/socket.c
+++ b/migration/socket.c
@@ -172,7 +172,6 @@ static void socket_start_incoming_migration(SocketAddress *saddr,
 
     if (qio_channel_socket_listen_sync(listen_ioc, saddr, errp) < 0) {
         object_unref(OBJECT(listen_ioc));
-        qapi_free_SocketAddress(saddr);
         return;
     }
 
@@ -181,7 +180,6 @@ static void socket_start_incoming_migration(SocketAddress *saddr,
                           socket_accept_incoming_migration,
                           listen_ioc,
                           (GDestroyNotify)object_unref);
-    qapi_free_SocketAddress(saddr);
 }
 
 void tcp_start_incoming_migration(const char *host_port, Error **errp)
@@ -192,10 +190,12 @@ void tcp_start_incoming_migration(const char *host_port, Error **errp)
         socket_start_incoming_migration(saddr, &err);
     }
     error_propagate(errp, err);
+    qapi_free_SocketAddress(saddr);
 }
 
 void unix_start_incoming_migration(const char *path, Error **errp)
 {
     SocketAddress *saddr = unix_build_address(path);
     socket_start_incoming_migration(saddr, errp);
+    qapi_free_SocketAddress(saddr);
 }
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 24/32] migration: return incoming task tag for sockets
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (22 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 23/32] migration: free SocketAddress where allocated Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 25/32] migration: return incoming task tag for exec Peter Xu
                   ` (8 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

For socket-based incoming migration, we attach a background task to the
main loop to handle accepting connections.  We never had a way to
destroy it; it was only removed when the migration finished.

Let's allow socket_start_incoming_migration() to return the source tag
of the listening async work, so that we can clean it up in the future.
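The tag convention being adopted follows GLib's GSource model: registering a watch yields a nonzero tag (qio_channel_add_watch() ultimately returns a g_source_attach() ID), zero means failure, and the tag can later be passed to g_source_remove(). A toy registry showing that contract (the registry itself is invented for illustration):

```c
#include <assert.h>

#define MAX_WATCHES 8

/* watch_active[tag] is nonzero while tag is registered; tag 0 is
 * reserved to mean "no watch / failure", matching GSource IDs. */
static int watch_active[MAX_WATCHES + 1];

static unsigned add_watch(void (*cb)(void))
{
    if (!cb) {
        return 0;                 /* failure: no tag handed out */
    }
    for (unsigned tag = 1; tag <= MAX_WATCHES; tag++) {
        if (!watch_active[tag]) {
            watch_active[tag] = 1;
            return tag;           /* valid tags are always > 0 */
        }
    }
    return 0;
}

static int remove_watch(unsigned tag)
{
    if (tag == 0 || tag > MAX_WATCHES || !watch_active[tag]) {
        return 0;                 /* unknown tag: nothing removed */
    }
    watch_active[tag] = 0;
    return 1;
}

static void dummy_cb(void) { }
```

Returning the tag from the start_incoming helpers is what lets a later patch store it in MigrationIncomingState and detach the listener on demand.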

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/socket.c | 36 ++++++++++++++++++++++++------------
 migration/socket.h |  4 ++--
 2 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/migration/socket.c b/migration/socket.c
index 4879f11e0f..e8f3325155 100644
--- a/migration/socket.c
+++ b/migration/socket.c
@@ -162,8 +162,12 @@ out:
 }
 
 
-static void socket_start_incoming_migration(SocketAddress *saddr,
-                                            Error **errp)
+/*
+ * Returns the tag ID of the watch that is attached to global main
+ * loop (>0), or zero if failure detected.
+ */
+static guint socket_start_incoming_migration(SocketAddress *saddr,
+                                             Error **errp)
 {
     QIOChannelSocket *listen_ioc = qio_channel_socket_new();
 
@@ -172,30 +176,38 @@ static void socket_start_incoming_migration(SocketAddress *saddr,
 
     if (qio_channel_socket_listen_sync(listen_ioc, saddr, errp) < 0) {
         object_unref(OBJECT(listen_ioc));
-        return;
+        return 0;
     }
 
-    qio_channel_add_watch(QIO_CHANNEL(listen_ioc),
-                          G_IO_IN,
-                          socket_accept_incoming_migration,
-                          listen_ioc,
-                          (GDestroyNotify)object_unref);
+    return qio_channel_add_watch(QIO_CHANNEL(listen_ioc),
+                                 G_IO_IN,
+                                 socket_accept_incoming_migration,
+                                 listen_ioc,
+                                 (GDestroyNotify)object_unref);
 }
 
-void tcp_start_incoming_migration(const char *host_port, Error **errp)
+guint tcp_start_incoming_migration(const char *host_port, Error **errp)
 {
     Error *err = NULL;
     SocketAddress *saddr = tcp_build_address(host_port, &err);
+    guint tag = 0;
+
     if (!err) {
-        socket_start_incoming_migration(saddr, &err);
+        tag = socket_start_incoming_migration(saddr, &err);
     }
     error_propagate(errp, err);
     qapi_free_SocketAddress(saddr);
+
+    return tag;
 }
 
-void unix_start_incoming_migration(const char *path, Error **errp)
+guint unix_start_incoming_migration(const char *path, Error **errp)
 {
     SocketAddress *saddr = unix_build_address(path);
-    socket_start_incoming_migration(saddr, errp);
+    guint tag;
+
+    tag = socket_start_incoming_migration(saddr, errp);
     qapi_free_SocketAddress(saddr);
+
+    return tag;
 }
diff --git a/migration/socket.h b/migration/socket.h
index 6b91e9db38..bc8a59aee4 100644
--- a/migration/socket.h
+++ b/migration/socket.h
@@ -16,12 +16,12 @@
 
 #ifndef QEMU_MIGRATION_SOCKET_H
 #define QEMU_MIGRATION_SOCKET_H
-void tcp_start_incoming_migration(const char *host_port, Error **errp);
+guint tcp_start_incoming_migration(const char *host_port, Error **errp);
 
 void tcp_start_outgoing_migration(MigrationState *s, const char *host_port,
                                   Error **errp);
 
-void unix_start_incoming_migration(const char *path, Error **errp);
+guint unix_start_incoming_migration(const char *path, Error **errp);
 
 void unix_start_outgoing_migration(MigrationState *s, const char *path,
                                    Error **errp);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 25/32] migration: return incoming task tag for exec
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (23 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 24/32] migration: return incoming task tag for sockets Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 26/32] migration: return incoming task tag for fd Peter Xu
                   ` (7 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Return the async task tag for exec-typed incoming migration in
exec_start_incoming_migration().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/exec.c | 18 +++++++++++-------
 migration/exec.h |  2 +-
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/migration/exec.c b/migration/exec.c
index f3be1baf2e..a0796c2c70 100644
--- a/migration/exec.c
+++ b/migration/exec.c
@@ -52,7 +52,11 @@ static gboolean exec_accept_incoming_migration(QIOChannel *ioc,
     return G_SOURCE_REMOVE;
 }
 
-void exec_start_incoming_migration(const char *command, Error **errp)
+/*
+ * Returns the tag ID of the watch that is attached to global main
+ * loop (>0), or zero if failure detected.
+ */
+guint exec_start_incoming_migration(const char *command, Error **errp)
 {
     QIOChannel *ioc;
     const char *argv[] = { "/bin/sh", "-c", command, NULL };
@@ -62,13 +66,13 @@ void exec_start_incoming_migration(const char *command, Error **errp)
                                                     O_RDWR,
                                                     errp));
     if (!ioc) {
-        return;
+        return 0;
     }
 
     qio_channel_set_name(ioc, "migration-exec-incoming");
-    qio_channel_add_watch(ioc,
-                          G_IO_IN,
-                          exec_accept_incoming_migration,
-                          NULL,
-                          NULL);
+    return qio_channel_add_watch(ioc,
+                                 G_IO_IN,
+                                 exec_accept_incoming_migration,
+                                 NULL,
+                                 NULL);
 }
diff --git a/migration/exec.h b/migration/exec.h
index b210ffde7a..0a7aadacd3 100644
--- a/migration/exec.h
+++ b/migration/exec.h
@@ -19,7 +19,7 @@
 
 #ifndef QEMU_MIGRATION_EXEC_H
 #define QEMU_MIGRATION_EXEC_H
-void exec_start_incoming_migration(const char *host_port, Error **errp);
+guint exec_start_incoming_migration(const char *host_port, Error **errp);
 
 void exec_start_outgoing_migration(MigrationState *s, const char *host_port,
                                    Error **errp);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 26/32] migration: return incoming task tag for fd
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (24 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 25/32] migration: return incoming task tag for exec Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 27/32] migration: store listen task tag Peter Xu
                   ` (6 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Allow fd_start_incoming_migration() to return the task tag.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/fd.c | 18 +++++++++++-------
 migration/fd.h |  2 +-
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/migration/fd.c b/migration/fd.c
index 30de4b9847..7ead2f26cc 100644
--- a/migration/fd.c
+++ b/migration/fd.c
@@ -52,7 +52,11 @@ static gboolean fd_accept_incoming_migration(QIOChannel *ioc,
     return G_SOURCE_REMOVE;
 }
 
-void fd_start_incoming_migration(const char *infd, Error **errp)
+/*
+ * Returns the tag ID of the watch that is attached to global main
+ * loop (>0), or zero if failure detected.
+ */
+guint fd_start_incoming_migration(const char *infd, Error **errp)
 {
     QIOChannel *ioc;
     int fd;
@@ -63,13 +67,13 @@ void fd_start_incoming_migration(const char *infd, Error **errp)
     ioc = qio_channel_new_fd(fd, errp);
     if (!ioc) {
         close(fd);
-        return;
+        return 0;
     }
 
     qio_channel_set_name(QIO_CHANNEL(ioc), "migration-fd-incoming");
-    qio_channel_add_watch(ioc,
-                          G_IO_IN,
-                          fd_accept_incoming_migration,
-                          NULL,
-                          NULL);
+    return qio_channel_add_watch(ioc,
+                                 G_IO_IN,
+                                 fd_accept_incoming_migration,
+                                 NULL,
+                                 NULL);
 }
diff --git a/migration/fd.h b/migration/fd.h
index a14a63ce2e..94cdea87d8 100644
--- a/migration/fd.h
+++ b/migration/fd.h
@@ -16,7 +16,7 @@
 
 #ifndef QEMU_MIGRATION_FD_H
 #define QEMU_MIGRATION_FD_H
-void fd_start_incoming_migration(const char *path, Error **errp);
+guint fd_start_incoming_migration(const char *path, Error **errp);
 
 void fd_start_outgoing_migration(MigrationState *s, const char *fdname,
                                  Error **errp);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 27/32] migration: store listen task tag
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (25 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 26/32] migration: return incoming task tag for fd Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 28/32] migration: allow migrate_incoming for paused VM Peter Xu
                   ` (5 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Store the task tag for the tcp/unix/fd/exec migration types in the
MigrationIncomingState struct.

For deferred migration, there is no need to store a task tag since no
task is running in the main loop at all.  For RDMA, mark it as a TODO.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 22 ++++++++++++++++++----
 migration/migration.h |  2 ++
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 189d5d2d42..a4cdedcde8 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -175,6 +175,7 @@ void migration_incoming_state_destroy(void)
         mis->from_src_file = NULL;
     }
 
+    mis->listen_task_tag = 0;
     qemu_event_reset(&mis->main_thread_load_event);
 }
 
@@ -269,25 +270,31 @@ int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
 void qemu_start_incoming_migration(const char *uri, Error **errp)
 {
     const char *p;
+    guint task_tag = 0;
+    MigrationIncomingState *mis = migration_incoming_get_current();
 
     qapi_event_send_migration(MIGRATION_STATUS_SETUP, &error_abort);
     if (!strcmp(uri, "defer")) {
         deferred_incoming_migration(errp);
     } else if (strstart(uri, "tcp:", &p)) {
-        tcp_start_incoming_migration(p, errp);
+        task_tag = tcp_start_incoming_migration(p, errp);
 #ifdef CONFIG_RDMA
     } else if (strstart(uri, "rdma:", &p)) {
+        /* TODO: store task tag for RDMA migrations */
         rdma_start_incoming_migration(p, errp);
 #endif
     } else if (strstart(uri, "exec:", &p)) {
-        exec_start_incoming_migration(p, errp);
+        task_tag = exec_start_incoming_migration(p, errp);
     } else if (strstart(uri, "unix:", &p)) {
-        unix_start_incoming_migration(p, errp);
+        task_tag = unix_start_incoming_migration(p, errp);
     } else if (strstart(uri, "fd:", &p)) {
-        fd_start_incoming_migration(p, errp);
+        task_tag = fd_start_incoming_migration(p, errp);
     } else {
         error_setg(errp, "unknown migration protocol: %s", uri);
+        return;
     }
+
+    mis->listen_task_tag = task_tag;
 }
 
 static void process_incoming_migration_bh(void *opaque)
@@ -453,6 +460,13 @@ void migration_fd_process_incoming(QEMUFile *f)
         migration_incoming_setup(f);
         migration_incoming_process();
     }
+
+    /*
+     * When reach here, we should not need the listening port any
+     * more. We'll detach the listening task soon, let's reset the
+     * listen task tag.
+     */
+    mis->listen_task_tag = 0;
 }
 
 void migration_ioc_process_incoming(QIOChannel *ioc)
diff --git a/migration/migration.h b/migration/migration.h
index 82dd7d9820..a0af86ab21 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -27,6 +27,8 @@
 /* State for the incoming migration */
 struct MigrationIncomingState {
     QEMUFile *from_src_file;
+    /* Task tag for incoming listen port. Valid when >0. */
+    guint listen_task_tag;
 
     /*
      * Free at the start of the main state load, set as the main thread finishes
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 28/32] migration: allow migrate_incoming for paused VM
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (26 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 27/32] migration: store listen task tag Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-12-01 17:21   ` Dr. David Alan Gilbert
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 29/32] migration: init dst in migration_object_init too Peter Xu
                   ` (4 subsequent siblings)
  32 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

The migrate_incoming command was previously only used when we provided
"-incoming defer" on the command line, to defer creation of the
incoming migration channel.

However, there is a similar requirement when we are paused during
postcopy migration.  The old incoming channel might have been destroyed
already, so we may need a new channel for the recovery to happen.

This patch leverages the same interface, but allows the user to specify
an incoming migration channel even for paused postcopy.

Meanwhile, migration listening ports are now always detached explicitly
using the stored tag, rather than via the return values of the
dispatchers.
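The new acceptance rule in qmp_migrate_incoming() boils down to a small predicate: allow the command while postcopy is paused (only if no listening port is still attached), or for a deferred setup (only once), and reject it otherwise. A sketch of that gating logic, with simplified stand-in state names rather than QEMU's:

```c
#include <assert.h>

enum inc_state { INC_NONE, INC_POSTCOPY_PAUSED };

/* Returns nonzero if migrate_incoming should be accepted.  Mirrors
 * the branch structure added to qmp_migrate_incoming(); parameters
 * model mis->state, mis->listen_task_tag, deferred_incoming and the
 * "once" guard respectively. */
static int incoming_allowed(enum inc_state state, int has_listen_tag,
                            int deferred, int used_once)
{
    if (state == INC_POSTCOPY_PAUSED) {
        return !has_listen_tag;   /* recovery: old port must be gone */
    }
    if (deferred) {
        return !used_once;        /* deferred setup: first time only */
    }
    return 0;                     /* anything else is rejected */
}
```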

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/exec.c       |  2 +-
 migration/fd.c         |  2 +-
 migration/migration.c  | 58 +++++++++++++++++++++++++++++++++++++++++++-------
 migration/socket.c     |  4 +---
 migration/trace-events |  2 ++
 5 files changed, 55 insertions(+), 13 deletions(-)

diff --git a/migration/exec.c b/migration/exec.c
index a0796c2c70..9d20d10899 100644
--- a/migration/exec.c
+++ b/migration/exec.c
@@ -49,7 +49,7 @@ static gboolean exec_accept_incoming_migration(QIOChannel *ioc,
 {
     migration_channel_process_incoming(ioc);
     object_unref(OBJECT(ioc));
-    return G_SOURCE_REMOVE;
+    return G_SOURCE_CONTINUE;
 }
 
 /*
diff --git a/migration/fd.c b/migration/fd.c
index 7ead2f26cc..54b36888e2 100644
--- a/migration/fd.c
+++ b/migration/fd.c
@@ -49,7 +49,7 @@ static gboolean fd_accept_incoming_migration(QIOChannel *ioc,
 {
     migration_channel_process_incoming(ioc);
     object_unref(OBJECT(ioc));
-    return G_SOURCE_REMOVE;
+    return G_SOURCE_CONTINUE;
 }
 
 /*
diff --git a/migration/migration.c b/migration/migration.c
index a4cdedcde8..9b7fc56ed8 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -179,6 +179,17 @@ void migration_incoming_state_destroy(void)
     qemu_event_reset(&mis->main_thread_load_event);
 }
 
+static bool migrate_incoming_detach_listen(MigrationIncomingState *mis)
+{
+    if (mis->listen_task_tag) {
+        /* Never fail */
+        g_source_remove(mis->listen_task_tag);
+        mis->listen_task_tag = 0;
+        return true;
+    }
+    return false;
+}
+
 static void migrate_generate_event(int new_state)
 {
     if (migrate_use_events()) {
@@ -463,10 +474,9 @@ void migration_fd_process_incoming(QEMUFile *f)
 
     /*
      * When reach here, we should not need the listening port any
-     * more. We'll detach the listening task soon, let's reset the
-     * listen task tag.
+     * more.  Detach the listening port explicitly.
      */
-    mis->listen_task_tag = 0;
+    migrate_incoming_detach_listen(mis);
 }
 
 void migration_ioc_process_incoming(QIOChannel *ioc)
@@ -1422,14 +1432,46 @@ void qmp_migrate_incoming(const char *uri, Error **errp)
 {
     Error *local_err = NULL;
     static bool once = true;
+    MigrationIncomingState *mis = migration_incoming_get_current();
+
 
-    if (!deferred_incoming) {
-        error_setg(errp, "For use with '-incoming defer'");
+    if (mis->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
+        if (mis->listen_task_tag) {
+            error_setg(errp, "We already have a listening port!");
+            return;
+        } else {
+            /*
+             * We are in postcopy-paused state, and we don't have a
+             * listening port.  It's very possible that the old
+             * listening port is already gone, so we allow creating
+             * a new one.
+             *
+             * NOTE: RDMA migration currently does not really use
+             * listen_task_tag, so even if it is zero, RDMA can
+             * still have its accept port listening.  However, RDMA
+             * is not supported by postcopy at all (yet), so we are
+             * safe here.
+             */
+            trace_migrate_incoming_recover();
+        }
+    } else if (deferred_incoming) {
+        /*
+         * We don't need recovery here, but we may have a deferred
+         * incoming parameter; this allows us to manually specify
+         * the incoming port once.
+         */
+        if (!once) {
+            error_setg(errp, "The incoming migration has already been started");
+            return;
+        } else {
+            /* PASS */
+            trace_migrate_incoming_deferred();
+        }
+    } else {
+        error_setg(errp, "migrate-incoming is only allowed for "
+                   "deferred incoming or the postcopy-paused stage");
         return;
     }
-    if (!once) {
-        error_setg(errp, "The incoming migration has already been started");
-    }
 
     qemu_start_incoming_migration(uri, &local_err);
 
diff --git a/migration/socket.c b/migration/socket.c
index e8f3325155..54095a80a0 100644
--- a/migration/socket.c
+++ b/migration/socket.c
@@ -155,10 +155,8 @@ out:
     if (migration_has_all_channels()) {
         /* Close listening socket as its no longer needed */
         qio_channel_close(ioc, NULL);
-        return G_SOURCE_REMOVE;
-    } else {
-        return G_SOURCE_CONTINUE;
     }
+    return G_SOURCE_CONTINUE;
 }
 
 
diff --git a/migration/trace-events b/migration/trace-events
index 98c2e4de58..65b1c7e459 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -136,6 +136,8 @@ process_incoming_migration_co_end(int ret, int ps) "ret=%d postcopy-state=%d"
 process_incoming_migration_co_postcopy_end_main(void) ""
 migration_set_incoming_channel(void *ioc, const char *ioctype) "ioc=%p ioctype=%s"
 migration_set_outgoing_channel(void *ioc, const char *ioctype, const char *hostname)  "ioc=%p ioctype=%s hostname=%s"
+migrate_incoming_deferred(void) ""
+migrate_incoming_recover(void) ""
 
 # migration/rdma.c
 qemu_rdma_accept_incoming_migration(void) ""
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 29/32] migration: init dst in migration_object_init too
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (27 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 28/32] migration: allow migrate_incoming for paused VM Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 30/32] migration: delay the postcopy-active state switch Peter Xu
                   ` (3 subsequent siblings)
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Though we may not always need it, we now initialize both the src/dst
migration objects in migration_object_init(), so that even the incoming
migration object is thread safe (it was not before).
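The thread-safety issue being fixed is the classic lazy-init race: the old migration_incoming_get_current() initialized a static object on first call behind an unsynchronized "once" flag, so two threads calling it concurrently could both run the init. The fix is eager initialization at startup, before any threads exist, leaving the getter trivial. A sketch with illustrative names (not QEMU's types):

```c
#include <assert.h>
#include <stddef.h>

struct incoming_state {
    int state;
    int initialized;
};

static struct incoming_state storage;
static struct incoming_state *current_incoming_p;

/* Called exactly once during startup, single-threaded, like
 * migration_object_init() — so no locking is needed. */
static void incoming_object_init(void)
{
    storage.state = 0;          /* MIGRATION_STATUS_NONE analogue */
    storage.initialized = 1;
    current_incoming_p = &storage;
}

/* The getter no longer initializes anything; it just hands out the
 * prebuilt object (QEMU's version asserts it exists). */
static struct incoming_state *incoming_get_current(void)
{
    return current_incoming_p;
}
```

As a side note, the removed lazy-init code also set mis_current.state before memset()ing the whole struct to zero, which only worked because MIGRATION_STATUS_NONE happens to be 0; the eager version avoids that trap too.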

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 9b7fc56ed8..536a771803 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -104,6 +104,7 @@ enum mig_rp_message_type {
    dynamic creation of migration */
 
 static MigrationState *current_migration;
+static MigrationIncomingState *current_incoming;
 
 static bool migration_object_check(MigrationState *ms, Error **errp);
 static int migration_maybe_pause(MigrationState *s,
@@ -119,6 +120,18 @@ void migration_object_init(void)
     assert(!current_migration);
     current_migration = MIGRATION_OBJ(object_new(TYPE_MIGRATION));
 
+    /*
+     * Init the migrate incoming object as well no matter whether
+     * we'll use it or not.
+     */
+    assert(!current_incoming);
+    current_incoming = g_new0(MigrationIncomingState, 1);
+    current_incoming->state = MIGRATION_STATUS_NONE;
+    qemu_mutex_init(&current_incoming->rp_mutex);
+    qemu_event_init(&current_incoming->main_thread_load_event, false);
+    qemu_sem_init(&current_incoming->postcopy_pause_sem_dst, 0);
+    qemu_sem_init(&current_incoming->postcopy_pause_sem_fault, 0);
+
     if (!migration_object_check(current_migration, &err)) {
         error_report_err(err);
         exit(1);
@@ -144,19 +157,8 @@ MigrationState *migrate_get_current(void)
 
 MigrationIncomingState *migration_incoming_get_current(void)
 {
-    static bool once;
-    static MigrationIncomingState mis_current;
-
-    if (!once) {
-        mis_current.state = MIGRATION_STATUS_NONE;
-        memset(&mis_current, 0, sizeof(MigrationIncomingState));
-        qemu_mutex_init(&mis_current.rp_mutex);
-        qemu_event_init(&mis_current.main_thread_load_event, false);
-        qemu_sem_init(&mis_current.postcopy_pause_sem_dst, 0);
-        qemu_sem_init(&mis_current.postcopy_pause_sem_fault, 0);
-        once = true;
-    }
-    return &mis_current;
+    assert(current_incoming);
+    return current_incoming;
 }
 
 void migration_incoming_state_destroy(void)
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 30/32] migration: delay the postcopy-active state switch
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (28 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 29/32] migration: init dst in migration_object_init too Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-12-01 12:34   ` Dr. David Alan Gilbert
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 31/32] migration, qmp: new command "migrate-pause" Peter Xu
                   ` (2 subsequent siblings)
  32 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

Don't switch the state until we are about to start the VM on the
destination side.  The problem is that without this change there is a
very small window in which we can be in the following state:

- dst VM is in postcopy-active state,
- main thread is handling the MIG_CMD_PACKAGED message, which loads all
  the device states,
- ram load thread is reading memory data from the source.

If we then fail while reading the migration stream, we'll also switch
to the postcopy-paused state, but that is not what we want.  If the
device states failed to load, we should fail the migration directly
instead of pausing.

Postpone the state switch to the point where we have already loaded the
devices' states and are ready to start running the destination VM.
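The reason the timing of the switch matters can be modeled as a tiny state machine: a stream error is only recoverable (i.e. becomes "paused") once we have genuinely reached postcopy-active; an error before that, while device state is still loading, must fail the migration outright. Simplified stand-in states below, not QEMU's enum:

```c
#include <assert.h>

enum st {
    ST_ACTIVE,            /* still loading device state */
    ST_POSTCOPY_ACTIVE,   /* devices loaded, ram load thread running */
    ST_POSTCOPY_PAUSED,
    ST_FAILED,
};

/* What happens when the migration stream breaks: pausing only makes
 * sense after the POSTCOPY_ACTIVE switch; beforehand the load cannot
 * be resumed, so the migration must fail. */
static enum st on_stream_error(enum st cur)
{
    return (cur == ST_POSTCOPY_ACTIVE) ? ST_POSTCOPY_PAUSED : ST_FAILED;
}
```

Moving the migrate_set_state() call into loadvm_postcopy_handle_run() closes the window where an error during MIG_CMD_PACKAGED handling would have hit the first branch while the state already claimed postcopy-active.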

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/savevm.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/migration/savevm.c b/migration/savevm.c
index bc87b0e5b1..3bc792e320 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1584,8 +1584,6 @@ static void *postcopy_ram_listen_thread(void *opaque)
     QEMUFile *f = mis->from_src_file;
     int load_res;
 
-    migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
-                                   MIGRATION_STATUS_POSTCOPY_ACTIVE);
     qemu_sem_post(&mis->listen_thread_sem);
     trace_postcopy_ram_listen_thread_start();
 
@@ -1748,6 +1746,14 @@ static int loadvm_postcopy_handle_run(MigrationIncomingState *mis)
         return -1;
     }
 
+    /*
+     * Declare that we are in postcopy now.  We should already have
+     * all the device states loaded and the ram load thread running
+     * when we reach here.
+     */
+    migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
+                                   MIGRATION_STATUS_POSTCOPY_ACTIVE);
+
     data = g_new(HandleRunBhData, 1);
     data->bh = qemu_bh_new(loadvm_postcopy_handle_run_bh, data);
     qemu_bh_schedule(data->bh);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread
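The migrate_set_state() call this patch moves is a compare-and-swap
transition: the switch to POSTCOPY_ACTIVE only takes effect if the
state still holds the expected old value, so a concurrent transition
(say, to FAILED) is never clobbered.  A minimal sketch of that
semantics, with made-up state values rather than QEMU's enum:

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative state values, not QEMU's MIGRATION_STATUS_* enum. */
enum { S_ACTIVE, S_POSTCOPY_ACTIVE, S_FAILED };

/* Only flip the state if it still equals old_state; otherwise this is
 * a no-op, so a racing transition wins and is preserved. */
static void set_state(atomic_int *state, int old_state, int new_state)
{
    atomic_compare_exchange_strong(state, &old_state, new_state);
}
```

This is why delaying the call is safe: if the state has already moved
on (e.g. the migration failed while loading device states), the
postponed switch simply does nothing.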

* [Qemu-devel] [PATCH v4 31/32] migration, qmp: new command "migrate-pause"
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (29 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 30/32] migration: delay the postcopy-active state switch Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-12-01 16:53   ` Dr. David Alan Gilbert
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 32/32] migration, hmp: new command "migrate_pause" Peter Xu
  2017-11-30 20:00 ` [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Dr. David Alan Gilbert
  32 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

It is used to manually trigger the postcopy pause state.  It works just
as if we had detected that the migration stream failed during postcopy,
but provides an explicit way for the user in case of mysterious socket
hangs.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 18 ++++++++++++++++++
 qapi/migration.json   | 22 ++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index 536a771803..30348a5e27 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1485,6 +1485,24 @@ void qmp_migrate_incoming(const char *uri, Error **errp)
     once = false;
 }
 
+void qmp_migrate_pause(Error **errp)
+{
+    int ret;
+    MigrationState *ms = migrate_get_current();
+
+    if (ms->state != MIGRATION_STATUS_POSTCOPY_ACTIVE) {
+        error_setg(errp, "Migration pause is currently only allowed during"
+                   " an active postcopy phase.");
+        return;
+    }
+
+    ret = qemu_file_shutdown(ms->to_dst_file);
+
+    if (ret) {
+        error_setg(errp, "Failed to pause migration stream.");
+    }
+}
+
 bool migration_is_blocked(Error **errp)
 {
     if (qemu_savevm_state_blocked(errp)) {
diff --git a/qapi/migration.json b/qapi/migration.json
index 4a3eff62f1..52901f7e2e 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1074,6 +1074,28 @@
 { 'command': 'migrate-incoming', 'data': {'uri': 'str' } }
 
 ##
+# @migrate-pause:
+#
+# Pause a migration.  Currently it can only pause a postcopy
+# migration.  Pausing a precopy migration is not supported yet.
+#
+# It is mostly used as a manual way to trigger the postcopy paused
+# state when the network socket hangs for some reason, so that we
+# can try a recovery afterward.
+#
+# Returns: nothing on success
+#
+# Since: 2.12
+#
+# Example:
+#
+# -> { "execute": "migrate-pause" }
+# <- { "return": {} }
+#
+##
+{ 'command': 'migrate-pause' }
+
+##
 # @xen-save-devices-state:
 #
 # Save the state of all devices to file. The RAM and the block devices
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [Qemu-devel] [PATCH v4 32/32] migration, hmp: new command "migrate_pause"
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (30 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 31/32] migration, qmp: new command "migrate-pause" Peter Xu
@ 2017-11-08  6:01 ` Peter Xu
  2017-11-30 20:00 ` [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Dr. David Alan Gilbert
  32 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-11-08  6:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli, Dr . David Alan Gilbert, peterx

HMP version of QMP "migrate-pause".

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 hmp-commands.hx | 14 ++++++++++++++
 hmp.c           |  9 +++++++++
 hmp.h           |  1 +
 3 files changed, 24 insertions(+)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index ffcdc34652..0c9f0cc4f2 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -992,6 +992,20 @@ as the -incoming option).
 ETEXI
 
     {
+        .name       = "migrate_pause",
+        .args_type  = "",
+        .params     = "",
+        .help       = "Pause a migration stream (only supported by postcopy)",
+        .cmd        = hmp_migrate_pause,
+    },
+
+STEXI
+@item migrate_pause
+@findex migrate_pause
Pause an existing migration manually.  Currently it only supports postcopy.
+ETEXI
+
+    {
         .name       = "migrate_set_cache_size",
         .args_type  = "value:o",
         .params     = "value",
diff --git a/hmp.c b/hmp.c
index c7e1022283..c1abba037f 100644
--- a/hmp.c
+++ b/hmp.c
@@ -1519,6 +1519,15 @@ void hmp_migrate_incoming(Monitor *mon, const QDict *qdict)
     hmp_handle_error(mon, &err);
 }
 
+void hmp_migrate_pause(Monitor *mon, const QDict *qdict)
+{
+    Error *err = NULL;
+
+    qmp_migrate_pause(&err);
+
+    hmp_handle_error(mon, &err);
+}
+
 /* Kept for backwards compatibility */
 void hmp_migrate_set_downtime(Monitor *mon, const QDict *qdict)
 {
diff --git a/hmp.h b/hmp.h
index a6f56b1f29..87d7c117eb 100644
--- a/hmp.h
+++ b/hmp.h
@@ -70,6 +70,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict);
 void hmp_migrate_cancel(Monitor *mon, const QDict *qdict);
 void hmp_migrate_continue(Monitor *mon, const QDict *qdict);
 void hmp_migrate_incoming(Monitor *mon, const QDict *qdict);
+void hmp_migrate_pause(Monitor *mon, const QDict *qdict);
 void hmp_migrate_set_downtime(Monitor *mon, const QDict *qdict);
 void hmp_migrate_set_speed(Monitor *mon, const QDict *qdict);
 void hmp_migrate_set_capability(Monitor *mon, const QDict *qdict);
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [Qemu-devel] [PATCH v4 01/32] migration: better error handling with QEMUFile
  2017-11-08  6:00 ` [Qemu-devel] [PATCH v4 01/32] migration: better error handling with QEMUFile Peter Xu
@ 2017-11-30 10:24   ` Dr. David Alan Gilbert
  2017-12-01  8:39     ` Peter Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-11-30 10:24 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> If the postcopy migration goes down for some reason, we always see this on dst:
> 
>   qemu-system-x86_64: RP: Received invalid message 0x0000 length 0x0000
> 
> However in most cases that's not the real issue. The problem is that
> qemu_get_be16() has no way to show whether the returned data is valid or
> not, and we are _always_ assuming it is valid. That's possibly not wise.
> 
> The best approach to solve this would be to refactor the QEMUFile
> interface so the APIs can return errors.  However, that needs quite a
> bit of work and testing.  For now, let's explicitly check the validity
> of the data before using it at all the qemu_get_*() call sites.
> 
> This patch tries to fix most of the cases I can see.  Only with this
> can we make sure we are processing valid data, and that we capture
> the channel-down events correctly.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration.c |  5 +++++
>  migration/ram.c       | 26 ++++++++++++++++++++++----
>  migration/savevm.c    | 40 ++++++++++++++++++++++++++++++++++++++--
>  3 files changed, 65 insertions(+), 6 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index c0206023d7..eae34d0524 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1708,6 +1708,11 @@ static void *source_return_path_thread(void *opaque)
>          header_type = qemu_get_be16(rp);
>          header_len = qemu_get_be16(rp);
>  
> +        if (qemu_file_get_error(rp)) {
> +            mark_source_rp_bad(ms);
> +            goto out;
> +        }
> +
>          if (header_type >= MIG_RP_MSG_MAX ||
>              header_type == MIG_RP_MSG_INVALID) {
>              error_report("RP: Received invalid message 0x%04x length 0x%04x",
> diff --git a/migration/ram.c b/migration/ram.c
> index 8620aa400a..960c726ff2 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2687,7 +2687,7 @@ static int ram_load_postcopy(QEMUFile *f)
>      void *last_host = NULL;
>      bool all_zero = false;
>  
> -    while (!ret && !(flags & RAM_SAVE_FLAG_EOS)) {
> +    while (!(flags & RAM_SAVE_FLAG_EOS)) {

I still think you need to keep the !ret && - see below;
anyway, there's no harm in keeping it!

>          ram_addr_t addr;
>          void *host = NULL;
>          void *page_buffer = NULL;
> @@ -2696,6 +2696,16 @@ static int ram_load_postcopy(QEMUFile *f)
>          uint8_t ch;
>  
>          addr = qemu_get_be64(f);
> +
> +        /*
> +         * If qemu file error, we should stop here, and then "addr"
> +         * may be invalid
> +         */
> +        ret = qemu_file_get_error(f);
> +        if (ret) {
> +            break;
> +        }
> +
>          flags = addr & ~TARGET_PAGE_MASK;
>          addr &= TARGET_PAGE_MASK;
>  
> @@ -2776,6 +2786,13 @@ static int ram_load_postcopy(QEMUFile *f)
>              error_report("Unknown combination of migration flags: %#x"
>                           " (postcopy mode)", flags);
>              ret = -EINVAL;
> +            break;

This 'break' breaks from the switch, but doesn't break the loop, and
because you removed the !ret && from the top, the loop keeps going when
it shouldn't.

> +        }
> +
> +        /* Detect for any possible file errors */
> +        if (qemu_file_get_error(f)) {
> +            ret = qemu_file_get_error(f);
> +            break;
>          }

This is all simpler if you just leave the !ret && at the top, and then
make this:
  if (!ret) {
      ret = qemu_file_get_error(f);
  }

>  
>          if (place_needed) {

Make that

      if (!ret && place_needed) {

> @@ -2789,9 +2806,10 @@ static int ram_load_postcopy(QEMUFile *f)
>                  ret = postcopy_place_page(mis, place_dest,
>                                            place_source, block);
>              }
> -        }
> -        if (!ret) {
> -            ret = qemu_file_get_error(f);
> +
> +            if (ret) {
> +                break;
> +            }

And with the !ret check at the top this goes again.

>          }
>      }
>  
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 4a88228614..1da0255cd7 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1765,6 +1765,11 @@ static int loadvm_process_command(QEMUFile *f)
>      cmd = qemu_get_be16(f);
>      len = qemu_get_be16(f);
>  
> +    /* Check validity before continue processing of cmds */
> +    if (qemu_file_get_error(f)) {
> +        return qemu_file_get_error(f);
> +    }
> +
>      trace_loadvm_process_command(cmd, len);
>      if (cmd >= MIG_CMD_MAX || cmd == MIG_CMD_INVALID) {
>          error_report("MIG_CMD 0x%x unknown (len 0x%x)", cmd, len);
> @@ -1830,6 +1835,7 @@ static int loadvm_process_command(QEMUFile *f)
>   */
>  static bool check_section_footer(QEMUFile *f, SaveStateEntry *se)
>  {
> +    int ret;
>      uint8_t read_mark;
>      uint32_t read_section_id;
>  
> @@ -1840,6 +1846,13 @@ static bool check_section_footer(QEMUFile *f, SaveStateEntry *se)
>  
>      read_mark = qemu_get_byte(f);
>  
> +    ret = qemu_file_get_error(f);
> +    if (ret) {
> +        error_report("%s: Read section footer failed: %d",
> +                     __func__, ret);
> +        return false;
> +    }
> +
>      if (read_mark != QEMU_VM_SECTION_FOOTER) {
>          error_report("Missing section footer for %s", se->idstr);
>          return false;
> @@ -1875,6 +1888,13 @@ qemu_loadvm_section_start_full(QEMUFile *f, MigrationIncomingState *mis)
>      instance_id = qemu_get_be32(f);
>      version_id = qemu_get_be32(f);
>  
> +    ret = qemu_file_get_error(f);
> +    if (ret) {
> +        error_report("%s: Failed to read instance/version ID: %d",
> +                     __func__, ret);
> +        return ret;
> +    }
> +
>      trace_qemu_loadvm_state_section_startfull(section_id, idstr,
>              instance_id, version_id);
>      /* Find savevm section */
> @@ -1922,6 +1942,13 @@ qemu_loadvm_section_part_end(QEMUFile *f, MigrationIncomingState *mis)
>  
>      section_id = qemu_get_be32(f);
>  
> +    ret = qemu_file_get_error(f);
> +    if (ret) {
> +        error_report("%s: Failed to read section ID: %d",
> +                     __func__, ret);
> +        return ret;
> +    }
> +
>      trace_qemu_loadvm_state_section_partend(section_id);
>      QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
>          if (se->load_section_id == section_id) {
> @@ -1989,8 +2016,14 @@ static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
>      uint8_t section_type;
>      int ret = 0;
>  
> -    while ((section_type = qemu_get_byte(f)) != QEMU_VM_EOF) {
> -        ret = 0;
> +    while (true) {
> +        section_type = qemu_get_byte(f);
> +
> +        if (qemu_file_get_error(f)) {
> +            ret = qemu_file_get_error(f);
> +            break;
> +        }
> +
>          trace_qemu_loadvm_state_section(section_type);
>          switch (section_type) {
>          case QEMU_VM_SECTION_START:
> @@ -2014,6 +2047,9 @@ static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
>                  goto out;
>              }
>              break;
> +        case QEMU_VM_EOF:
> +            /* This is the end of migration */
> +            goto out;
>          default:
>              error_report("Unknown savevm section type %d", section_type);
>              ret = -EINVAL;
> -- 
> 2.13.6

The rest of it looks OK.

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 53+ messages in thread
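The pattern this patch introduces - read first, then validate the
stream, and only then interpret the values - can be sketched with a
small mock (all names here are illustrative stand-ins, not QEMU's
QEMUFile API):

```c
#include <assert.h>
#include <stdint.h>

/* Mock of a QEMUFile-like stream: once exhausted, reads return 0 and
 * latch an error flag, mirroring how qemu_get_be16() by itself cannot
 * signal failure. */
typedef struct MockFile {
    const uint16_t *data;
    int pos, len;
    int last_error;
} MockFile;

static uint16_t mock_get_be16(MockFile *f)
{
    if (f->pos >= f->len) {
        f->last_error = -5;  /* stands in for -EIO */
        return 0;            /* indistinguishable from a valid 0x0000 */
    }
    return f->data[f->pos++];
}

static int mock_get_error(MockFile *f)
{
    return f->last_error;
}

/* The patched shape: read both header fields, then check the stream
 * before trusting either value. */
static int read_header(MockFile *f, uint16_t *type, uint16_t *len)
{
    *type = mock_get_be16(f);
    *len = mock_get_be16(f);
    if (mock_get_error(f)) {
        return mock_get_error(f);  /* *type / *len are garbage here */
    }
    return 0;
}
```

Without the error check, the second read of an exhausted stream yields
message 0x0000 length 0x0000 - exactly the bogus "Received invalid
message" report the commit message describes.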

* Re: [Qemu-devel] [PATCH v4 05/32] migration: implement "postcopy-pause" src logic
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 05/32] migration: implement "postcopy-pause" src logic Peter Xu
@ 2017-11-30 10:49   ` Dr. David Alan Gilbert
  2017-12-01  8:56     ` Peter Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-11-30 10:49 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> Now when the network goes down during postcopy, the source side will
> not fail the migration.  Instead we convert the status into this new
> paused state, and wait for a rescue in the future.
> 
> If a recovery is detected, migration_thread() will reset its local
> variables to prepare for that.
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

That's still OK; you might want to consider reusing the 'pause_sem' that I
added to MigrationStatus for the other pause case.

Dave

> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration.c  | 98 +++++++++++++++++++++++++++++++++++++++++++++++---
>  migration/migration.h  |  3 ++
>  migration/trace-events |  1 +
>  3 files changed, 98 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index dd270f8bc5..46e7ca36a4 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1111,6 +1111,8 @@ static void migrate_fd_cleanup(void *opaque)
>      }
>      notifier_list_notify(&migration_state_notifiers, s);
>      block_cleanup_parameters(s);
> +
> +    qemu_sem_destroy(&s->postcopy_pause_sem);
>  }
>  
>  void migrate_set_error(MigrationState *s, const Error *error)
> @@ -1267,6 +1269,7 @@ MigrationState *migrate_init(void)
>      s->migration_thread_running = false;
>      error_free(s->error);
>      s->error = NULL;
> +    qemu_sem_init(&s->postcopy_pause_sem, 0);
>  
>      migrate_set_state(&s->state, MIGRATION_STATUS_NONE, MIGRATION_STATUS_SETUP);
>  
> @@ -2159,6 +2162,80 @@ bool migrate_colo_enabled(void)
>      return s->enabled_capabilities[MIGRATION_CAPABILITY_X_COLO];
>  }
>  
> +typedef enum MigThrError {
> +    /* No error detected */
> +    MIG_THR_ERR_NONE = 0,
> +    /* Detected error, but resumed successfully */
> +    MIG_THR_ERR_RECOVERED = 1,
> +    /* Detected fatal error, need to exit */
> +    MIG_THR_ERR_FATAL = 2,
> +} MigThrError;
> +
> +/*
> + * We don't return until we are in a safe state to continue current
> + * postcopy migration.  Returns MIG_THR_ERR_RECOVERED if recovered, or
> + * MIG_THR_ERR_FATAL if unrecovery failure happened.
> + */
> +static MigThrError postcopy_pause(MigrationState *s)
> +{
> +    assert(s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE);
> +    migrate_set_state(&s->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
> +                      MIGRATION_STATUS_POSTCOPY_PAUSED);
> +
> +    /* Current channel is possibly broken. Release it. */
> +    assert(s->to_dst_file);
> +    qemu_file_shutdown(s->to_dst_file);
> +    qemu_fclose(s->to_dst_file);
> +    s->to_dst_file = NULL;
> +
> +    error_report("Detected IO failure for postcopy. "
> +                 "Migration paused.");
> +
> +    /*
> +     * We wait until things fixed up. Then someone will setup the
> +     * status back for us.
> +     */
> +    while (s->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
> +        qemu_sem_wait(&s->postcopy_pause_sem);
> +    }
> +
> +    trace_postcopy_pause_continued();
> +
> +    return MIG_THR_ERR_RECOVERED;
> +}
> +
> +static MigThrError migration_detect_error(MigrationState *s)
> +{
> +    int ret;
> +
> +    /* Try to detect any file errors */
> +    ret = qemu_file_get_error(s->to_dst_file);
> +
> +    if (!ret) {
> +        /* Everything is fine */
> +        return MIG_THR_ERR_NONE;
> +    }
> +
> +    if (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE && ret == -EIO) {
> +        /*
> +         * For postcopy, we allow the network to be down for a
> +         * while. After that, it can be continued by a
> +         * recovery phase.
> +         */
> +        return postcopy_pause(s);
> +    } else {
> +        /*
> +         * For precopy (or postcopy with error outside IO), we fail
> +         * with no time.
> +         */
> +        migrate_set_state(&s->state, s->state, MIGRATION_STATUS_FAILED);
> +        trace_migration_thread_file_err();
> +
> +        /* Time to stop the migration, now. */
> +        return MIG_THR_ERR_FATAL;
> +    }
> +}
> +
>  /*
>   * Master migration thread on the source VM.
>   * It drives the migration and pumps the data down the outgoing channel.
> @@ -2183,6 +2260,7 @@ static void *migration_thread(void *opaque)
>      /* The active state we expect to be in; ACTIVE or POSTCOPY_ACTIVE */
>      enum MigrationStatus current_active_state = MIGRATION_STATUS_ACTIVE;
>      bool enable_colo = migrate_colo_enabled();
> +    MigThrError thr_error;
>  
>      rcu_register_thread();
>  
> @@ -2255,12 +2333,24 @@ static void *migration_thread(void *opaque)
>              }
>          }
>  
> -        if (qemu_file_get_error(s->to_dst_file)) {
> -            migrate_set_state(&s->state, current_active_state,
> -                              MIGRATION_STATUS_FAILED);
> -            trace_migration_thread_file_err();
> +        /*
> +         * Try to detect any kind of failures, and see whether we
> +         * should stop the migration now.
> +         */
> +        thr_error = migration_detect_error(s);
> +        if (thr_error == MIG_THR_ERR_FATAL) {
> +            /* Stop migration */
>              break;
> +        } else if (thr_error == MIG_THR_ERR_RECOVERED) {
> +            /*
> +             * Just recovered from a e.g. network failure, reset all
> +             * the local variables. This is important to avoid
> +             * breaking transferred_bytes and bandwidth calculation
> +             */
> +            initial_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
> +            initial_bytes = 0;
>          }
> +
>          current_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>          if (current_time >= initial_time + BUFFER_DELAY) {
>              uint64_t transferred_bytes = qemu_ftell(s->to_dst_file) -
> diff --git a/migration/migration.h b/migration/migration.h
> index 6d36400975..36aaa13f50 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -156,6 +156,9 @@ struct MigrationState
>      bool send_configuration;
>      /* Whether we send section footer during migration */
>      bool send_section_footer;
> +
> +    /* Needed by postcopy-pause state */
> +    QemuSemaphore postcopy_pause_sem;
>  };
>  
>  void migrate_set_state(int *state, int old_state, int new_state);
> diff --git a/migration/trace-events b/migration/trace-events
> index 6f29fcc686..da1c63a933 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -99,6 +99,7 @@ migration_thread_setup_complete(void) ""
>  open_return_path_on_source(void) ""
>  open_return_path_on_source_continue(void) ""
>  postcopy_start(void) ""
> +postcopy_pause_continued(void) ""
>  postcopy_start_set_run(void) ""
>  source_return_path_thread_bad_end(void) ""
>  source_return_path_thread_end(void) ""
> -- 
> 2.13.6
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 53+ messages in thread
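The wait loop at the heart of postcopy_pause() - park on a semaphore
while the state stays PAUSED, and have the recovering side flip the
state before posting - can be modeled in portable C.  All names below
are illustrative, not QEMU's; this is a sketch of the handshake, not
the real implementation:

```c
#include <assert.h>
#include <pthread.h>
#include <sched.h>
#include <semaphore.h>
#include <stdatomic.h>

enum { MIG_POSTCOPY_ACTIVE, MIG_POSTCOPY_PAUSED };

typedef struct {
    atomic_int state;
    sem_t pause_sem;
} Mig;

static void mig_pause(Mig *s)
{
    atomic_store(&s->state, MIG_POSTCOPY_PAUSED);
    /* Re-check the state after every wakeup, exactly like the
     * while (s->state == ...PAUSED) loop in the patch: a spurious
     * post must not resume a still-paused thread. */
    while (atomic_load(&s->state) == MIG_POSTCOPY_PAUSED) {
        sem_wait(&s->pause_sem);
    }
}

static void mig_resume(Mig *s)
{
    atomic_store(&s->state, MIG_POSTCOPY_ACTIVE);  /* state first... */
    sem_post(&s->pause_sem);                       /* ...then wake */
}

static void *resumer_thread(void *opaque)
{
    Mig *s = opaque;

    /* Wait until the migration thread has actually paused, then
     * pretend the recovery path kicked in. */
    while (atomic_load(&s->state) != MIG_POSTCOPY_PAUSED) {
        sched_yield();
    }
    mig_resume(s);
    return NULL;
}
```

Setting the state before posting the semaphore is what makes the
waiter's re-check loop terminate; posting first would leave a window
where the waiter consumes the token and goes back to sleep.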

* Re: [Qemu-devel] [PATCH v4 06/32] migration: allow dst vm pause on postcopy
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 06/32] migration: allow dst vm pause on postcopy Peter Xu
@ 2017-11-30 11:17   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-11-30 11:17 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> When there is an IO error on the incoming channel (e.g., network
> down), instead of bailing out immediately, we allow the dst VM to
> switch to the new POSTCOPY_PAUSE state.  Currently it is still simple
> - it waits on the new semaphore until someone pokes it for another
> attempt.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

As noted last time we need one of the later patches (8/32) to stop a
race; but OK

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/migration.c  |  1 +
>  migration/migration.h  |  3 +++
>  migration/savevm.c     | 60 ++++++++++++++++++++++++++++++++++++++++++++++++--
>  migration/trace-events |  2 ++
>  4 files changed, 64 insertions(+), 2 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 46e7ca36a4..b166e19785 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -150,6 +150,7 @@ MigrationIncomingState *migration_incoming_get_current(void)
>          memset(&mis_current, 0, sizeof(MigrationIncomingState));
>          qemu_mutex_init(&mis_current.rp_mutex);
>          qemu_event_init(&mis_current.main_thread_load_event, false);
> +        qemu_sem_init(&mis_current.postcopy_pause_sem_dst, 0);
>          once = true;
>      }
>      return &mis_current;
> diff --git a/migration/migration.h b/migration/migration.h
> index 36aaa13f50..55894ecb79 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -61,6 +61,9 @@ struct MigrationIncomingState {
>      /* The coroutine we should enter (back) after failover */
>      Coroutine *migration_incoming_co;
>      QemuSemaphore colo_incoming_sem;
> +
> +    /* notify PAUSED postcopy incoming migrations to try to continue */
> +    QemuSemaphore postcopy_pause_sem_dst;
>  };
>  
>  MigrationIncomingState *migration_incoming_get_current(void);
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 1da0255cd7..93e308ebf0 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1529,8 +1529,8 @@ static int loadvm_postcopy_ram_handle_discard(MigrationIncomingState *mis,
>   */
>  static void *postcopy_ram_listen_thread(void *opaque)
>  {
> -    QEMUFile *f = opaque;
>      MigrationIncomingState *mis = migration_incoming_get_current();
> +    QEMUFile *f = mis->from_src_file;
>      int load_res;
>  
>      migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
> @@ -1544,6 +1544,14 @@ static void *postcopy_ram_listen_thread(void *opaque)
>       */
>      qemu_file_set_blocking(f, true);
>      load_res = qemu_loadvm_state_main(f, mis);
> +
> +    /*
> +     * This is tricky, but mis->from_src_file can change after it
> +     * returns, when postcopy recovery happens.  In the future, we
> +     * may want a wrapper for the QEMUFile handle.
> +     */
> +    f = mis->from_src_file;
> +
>      /* And non-blocking again so we don't block in any cleanup */
>      qemu_file_set_blocking(f, false);
>  
> @@ -1626,7 +1634,7 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
>      /* Start up the listening thread and wait for it to signal ready */
>      qemu_sem_init(&mis->listen_thread_sem, 0);
>      qemu_thread_create(&mis->listen_thread, "postcopy/listen",
> -                       postcopy_ram_listen_thread, mis->from_src_file,
> +                       postcopy_ram_listen_thread, NULL,
>                         QEMU_THREAD_DETACHED);
>      qemu_sem_wait(&mis->listen_thread_sem);
>      qemu_sem_destroy(&mis->listen_thread_sem);
> @@ -2011,11 +2019,44 @@ void qemu_loadvm_state_cleanup(void)
>      }
>  }
>  
> +/* Return true if we should continue the migration, or false. */
> +static bool postcopy_pause_incoming(MigrationIncomingState *mis)
> +{
> +    trace_postcopy_pause_incoming();
> +
> +    migrate_set_state(&mis->state, MIGRATION_STATUS_POSTCOPY_ACTIVE,
> +                      MIGRATION_STATUS_POSTCOPY_PAUSED);
> +
> +    assert(mis->from_src_file);
> +    qemu_file_shutdown(mis->from_src_file);
> +    qemu_fclose(mis->from_src_file);
> +    mis->from_src_file = NULL;
> +
> +    assert(mis->to_src_file);
> +    qemu_file_shutdown(mis->to_src_file);
> +    qemu_mutex_lock(&mis->rp_mutex);
> +    qemu_fclose(mis->to_src_file);
> +    mis->to_src_file = NULL;
> +    qemu_mutex_unlock(&mis->rp_mutex);
> +
> +    error_report("Detected IO failure for postcopy. "
> +                 "Migration paused.");
> +
> +    while (mis->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
> +        qemu_sem_wait(&mis->postcopy_pause_sem_dst);
> +    }
> +
> +    trace_postcopy_pause_incoming_continued();
> +
> +    return true;
> +}
> +
>  static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
>  {
>      uint8_t section_type;
>      int ret = 0;
>  
> +retry:
>      while (true) {
>          section_type = qemu_get_byte(f);
>  
> @@ -2060,6 +2101,21 @@ static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
>  out:
>      if (ret < 0) {
>          qemu_file_set_error(f, ret);
> +
> +        /*
> +         * Detect whether it is:
> +         *
> +         * 1. postcopy running
> +         * 2. network failure (-EIO)
> +         *
> +         * If so, we try to wait for a recovery.
> +         */
> +        if (mis->state == MIGRATION_STATUS_POSTCOPY_ACTIVE &&
> +            ret == -EIO && postcopy_pause_incoming(mis)) {
> +            /* Reset f to point to the newly created channel */
> +            f = mis->from_src_file;
> +            goto retry;
> +        }
>      }
>      return ret;
>  }
> diff --git a/migration/trace-events b/migration/trace-events
> index da1c63a933..bed1646cd6 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -100,6 +100,8 @@ open_return_path_on_source(void) ""
>  open_return_path_on_source_continue(void) ""
>  postcopy_start(void) ""
>  postcopy_pause_continued(void) ""
> +postcopy_pause_incoming(void) ""
> +postcopy_pause_incoming_continued(void) ""
>  postcopy_start_set_run(void) ""
>  source_return_path_thread_bad_end(void) ""
>  source_return_path_thread_end(void) ""
> -- 
> 2.13.6
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Qemu-devel] [PATCH v4 08/32] migration: allow send_rq to fail
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 08/32] migration: allow send_rq to fail Peter Xu
@ 2017-11-30 12:13   ` Dr. David Alan Gilbert
  2017-12-01  9:30     ` Peter Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-11-30 12:13 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> Currently we do not allow failures when sending data from the
> destination to the source via the return path.  However, errors can
> happen along the way.  This patch allows migrate_send_rp_message() to
> return an error when one happens, and further extends that to
> migrate_send_rp_req_pages().
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/migration.c | 38 ++++++++++++++++++++++++++++++--------
>  migration/migration.h |  2 +-
>  2 files changed, 31 insertions(+), 9 deletions(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 8d93b891e3..db896233f6 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -199,17 +199,35 @@ static void deferred_incoming_migration(Error **errp)
>   * Send a message on the return channel back to the source
>   * of the migration.
>   */
> -static void migrate_send_rp_message(MigrationIncomingState *mis,
> -                                    enum mig_rp_message_type message_type,
> -                                    uint16_t len, void *data)
> +static int migrate_send_rp_message(MigrationIncomingState *mis,
> +                                   enum mig_rp_message_type message_type,
> +                                   uint16_t len, void *data)
>  {
> +    int ret = 0;
> +
>      trace_migrate_send_rp_message((int)message_type, len);
>      qemu_mutex_lock(&mis->rp_mutex);
> +
> +    /*
> +     * It's possible that the file handle got lost due to network
> +     * failures.
> +     */
> +    if (!mis->to_src_file) {
> +        ret = -EIO;
> +        goto error;
> +    }
> +
>      qemu_put_be16(mis->to_src_file, (unsigned int)message_type);
>      qemu_put_be16(mis->to_src_file, len);
>      qemu_put_buffer(mis->to_src_file, data, len);
>      qemu_fflush(mis->to_src_file);
> +
> +    /* It's possible that qemu file got error during sending */
> +    ret = qemu_file_get_error(mis->to_src_file);
> +
> +error:
>      qemu_mutex_unlock(&mis->rp_mutex);
> +    return ret;
>  }
>  
>  /* Request a range of pages from the source VM at the given
> @@ -219,26 +237,30 @@ static void migrate_send_rp_message(MigrationIncomingState *mis,
>   *   Start: Address offset within the RB
>   *   Len: Length in bytes required - must be a multiple of pagesize
>   */
> -void migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
> -                               ram_addr_t start, size_t len)
> +int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
> +                              ram_addr_t start, size_t len)
>  {
>      uint8_t bufc[12 + 1 + 255]; /* start (8), len (4), rbname up to 256 */
>      size_t msglen = 12; /* start + len */
> +    int rbname_len;
> +    enum mig_rp_message_type msg_type;
>  
>      *(uint64_t *)bufc = cpu_to_be64((uint64_t)start);
>      *(uint32_t *)(bufc + 8) = cpu_to_be32((uint32_t)len);
>  
>      if (rbname) {
> -        int rbname_len = strlen(rbname);
> +        rbname_len = strlen(rbname);

I don't think that move of the declaration of rbname_len is necessary;
it's only msglen that you need to keep for longer.

Dave

>          assert(rbname_len < 256);
>  
>          bufc[msglen++] = rbname_len;
>          memcpy(bufc + msglen, rbname, rbname_len);
>          msglen += rbname_len;
> -        migrate_send_rp_message(mis, MIG_RP_MSG_REQ_PAGES_ID, msglen, bufc);
> +        msg_type = MIG_RP_MSG_REQ_PAGES_ID;
>      } else {
> -        migrate_send_rp_message(mis, MIG_RP_MSG_REQ_PAGES, msglen, bufc);
> +        msg_type = MIG_RP_MSG_REQ_PAGES;
>      }
> +
> +    return migrate_send_rp_message(mis, msg_type, msglen, bufc);
>  }
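
The control flow the hunk above introduces (take the lock, fail fast with -EIO when the channel is gone, otherwise send and report whatever error the stream accumulated) can be sketched in isolation. `IncomingState`, `send_rp_message()` and the `FILE`-based fields below are illustrative stand-ins, not the QEMU types:

```c
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

/* Illustrative stand-in for MigrationIncomingState (not the QEMU type). */
typedef struct {
    pthread_mutex_t rp_mutex;
    FILE *to_src_file;   /* NULL once the channel was torn down */
    int last_error;      /* what qemu_file_get_error() would report */
} IncomingState;

/* Same shape as the hunk above: lock, bail out with -EIO if the file
 * handle got lost, otherwise send and return the stream's error state. */
static int send_rp_message(IncomingState *mis, const void *data, size_t len)
{
    int ret;

    pthread_mutex_lock(&mis->rp_mutex);
    if (!mis->to_src_file) {
        ret = -EIO;
        goto error;
    }
    fwrite(data, 1, len, mis->to_src_file);
    fflush(mis->to_src_file);
    ret = mis->last_error;   /* 0 while the stream is still healthy */
error:
    pthread_mutex_unlock(&mis->rp_mutex);
    return ret;
}
```

With this shape, callers such as the page-request path can simply propagate the return value instead of ignoring failures.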
>  
>  void qemu_start_incoming_migration(const char *uri, Error **errp)
> diff --git a/migration/migration.h b/migration/migration.h
> index ebb049f692..b63cdfbfdb 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -216,7 +216,7 @@ void migrate_send_rp_shut(MigrationIncomingState *mis,
>                            uint32_t value);
>  void migrate_send_rp_pong(MigrationIncomingState *mis,
>                            uint32_t value);
> -void migrate_send_rp_req_pages(MigrationIncomingState *mis, const char* rbname,
> +int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char* rbname,
>                                ram_addr_t start, size_t len);
>  
>  #endif
> -- 
> 2.13.6
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Qemu-devel] [PATCH v4 16/32] migration: new message MIG_RP_MSG_RECV_BITMAP
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 16/32] migration: new message MIG_RP_MSG_RECV_BITMAP Peter Xu
@ 2017-11-30 17:21   ` Dr. David Alan Gilbert
  2017-12-01  9:37     ` Peter Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-11-30 17:21 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> Introduce a new return path message, MIG_RP_MSG_RECV_BITMAP, to send
> the received bitmap of a ramblock back to the source.
> 
> This is the reply message to MIG_CMD_RECV_BITMAP: it contains not only
> the header (including the ramblock name), but is also appended with the
> whole received bitmap of that ramblock on the destination side.
> 
> When the source receives such a reply message (MIG_RP_MSG_RECV_BITMAP),
> it parses it and converts it to the dirty bitmap by inverting the bits.
> 
> One thing to mention is that, when we send the recv bitmap, we
> additionally do these things:
> 
> - convert the bitmap to little endian, to support hosts that are
>   using different endianness on src/dst.
> 
> - align the bitmap to 8 bytes, to support hosts that are using
>   different word sizes (32/64 bits) on src/dst.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

(The comment on the receive side 'Add addings' is a bit odd!
The send side is much better); other than that:

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/migration.c  |  68 +++++++++++++++++++++++
>  migration/migration.h  |   2 +
>  migration/ram.c        | 144 +++++++++++++++++++++++++++++++++++++++++++++++++
>  migration/ram.h        |   3 ++
>  migration/savevm.c     |   2 +-
>  migration/trace-events |   3 ++
>  6 files changed, 221 insertions(+), 1 deletion(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 32c036fa82..5592975d33 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -93,6 +93,7 @@ enum mig_rp_message_type {
>  
>      MIG_RP_MSG_REQ_PAGES_ID, /* data (start: be64, len: be32, id: string) */
>      MIG_RP_MSG_REQ_PAGES,    /* data (start: be64, len: be32) */
> +    MIG_RP_MSG_RECV_BITMAP,  /* send recved_bitmap back to source */
>  
>      MIG_RP_MSG_MAX
>  };
> @@ -502,6 +503,45 @@ void migrate_send_rp_pong(MigrationIncomingState *mis,
>      migrate_send_rp_message(mis, MIG_RP_MSG_PONG, sizeof(buf), &buf);
>  }
>  
> +void migrate_send_rp_recv_bitmap(MigrationIncomingState *mis,
> +                                 char *block_name)
> +{
> +    char buf[512];
> +    int len;
> +    int64_t res;
> +
> +    /*
> +     * First, we send the header part. It contains only the len of
> +     * idstr, and the idstr itself.
> +     */
> +    len = strlen(block_name);
> +    buf[0] = len;
> +    memcpy(buf + 1, block_name, len);
> +
> +    if (mis->state != MIGRATION_STATUS_POSTCOPY_RECOVER) {
> +        error_report("%s: MIG_RP_MSG_RECV_BITMAP only used for recovery",
> +                     __func__);
> +        return;
> +    }
> +
> +    migrate_send_rp_message(mis, MIG_RP_MSG_RECV_BITMAP, len + 1, buf);
> +
> +    /*
> +     * Next, we dump the received bitmap to the stream.
> +     *
> +     * TODO: currently we are safe since we are the only one that is
> +     * using the to_src_file handle (fault thread is still paused),
> +     * and it's OK even without taking the mutex. However the best way is
> +     * to take the lock before sending the message header, and release
> +     * the lock after sending the bitmap.
> +     */
> +    qemu_mutex_lock(&mis->rp_mutex);
> +    res = ramblock_recv_bitmap_send(mis->to_src_file, block_name);
> +    qemu_mutex_unlock(&mis->rp_mutex);
> +
> +    trace_migrate_send_rp_recv_bitmap(block_name, res);
> +}
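
The header layout used above (one length byte followed by the ramblock idstr, hence the 256-byte limit) is simple enough to show standalone. `encode_recv_bitmap_header()` below is an illustrative helper, not QEMU code:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the MIG_RP_MSG_RECV_BITMAP header: one length byte
 * followed by the ramblock idstr (so idstr must fit in 255 bytes). */
static size_t encode_recv_bitmap_header(char *buf, const char *block_name)
{
    size_t len = strlen(block_name);

    assert(len < 256);          /* the length must fit in a single byte */
    buf[0] = (char)len;
    memcpy(buf + 1, block_name, len);
    return len + 1;             /* bytes of header written into buf */
}
```

The bitmap payload itself then follows the header on the same stream, under the return-path mutex.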
> +
>  MigrationCapabilityStatusList *qmp_query_migrate_capabilities(Error **errp)
>  {
>      MigrationCapabilityStatusList *head = NULL;
> @@ -1736,6 +1776,7 @@ static struct rp_cmd_args {
>      [MIG_RP_MSG_PONG]           = { .len =  4, .name = "PONG" },
>      [MIG_RP_MSG_REQ_PAGES]      = { .len = 12, .name = "REQ_PAGES" },
>      [MIG_RP_MSG_REQ_PAGES_ID]   = { .len = -1, .name = "REQ_PAGES_ID" },
> +    [MIG_RP_MSG_RECV_BITMAP]    = { .len = -1, .name = "RECV_BITMAP" },
>      [MIG_RP_MSG_MAX]            = { .len = -1, .name = "MAX" },
>  };
>  
> @@ -1780,6 +1821,19 @@ static bool postcopy_pause_return_path_thread(MigrationState *s)
>      return true;
>  }
>  
> +static int migrate_handle_rp_recv_bitmap(MigrationState *s, char *block_name)
> +{
> +    RAMBlock *block = qemu_ram_block_by_name(block_name);
> +
> +    if (!block) {
> +        error_report("%s: invalid block name '%s'", __func__, block_name);
> +        return -EINVAL;
> +    }
> +
> +    /* Fetch the received bitmap and refresh the dirty bitmap */
> +    return ram_dirty_bitmap_reload(s, block);
> +}
> +
>  /*
>   * Handles messages sent on the return path towards the source VM
>   *
> @@ -1885,6 +1939,20 @@ retry:
>              migrate_handle_rp_req_pages(ms, (char *)&buf[13], start, len);
>              break;
>  
> +        case MIG_RP_MSG_RECV_BITMAP:
> +            if (header_len < 1) {
> +                error_report("%s: missing block name", __func__);
> +                mark_source_rp_bad(ms);
> +                goto out;
> +            }
> +            /* Format: len (1B) + idstr (<255B). This ends the idstr. */
> +            buf[buf[0] + 1] = '\0';
> +            if (migrate_handle_rp_recv_bitmap(ms, (char *)(buf + 1))) {
> +                mark_source_rp_bad(ms);
> +                goto out;
> +            }
> +            break;
> +
>          default:
>              break;
>          }
> diff --git a/migration/migration.h b/migration/migration.h
> index d052669e1c..f879c93542 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -219,5 +219,7 @@ void migrate_send_rp_pong(MigrationIncomingState *mis,
>                            uint32_t value);
>  int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char* rbname,
>                                ram_addr_t start, size_t len);
> +void migrate_send_rp_recv_bitmap(MigrationIncomingState *mis,
> +                                 char *block_name);
>  
>  #endif
> diff --git a/migration/ram.c b/migration/ram.c
> index 960c726ff2..b30c669476 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -180,6 +180,70 @@ void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr,
>                        nr);
>  }
>  
> +#define  RAMBLOCK_RECV_BITMAP_ENDING  (0x0123456789abcdefULL)
> +
> +/*
> + * Format: bitmap_size (8 bytes) + whole_bitmap (N bytes).
> + *
> + * Returns >0 if success with sent bytes, or <0 if error.
> + */
> +int64_t ramblock_recv_bitmap_send(QEMUFile *file,
> +                                  const char *block_name)
> +{
> +    RAMBlock *block = qemu_ram_block_by_name(block_name);
> +    unsigned long *le_bitmap, nbits;
> +    uint64_t size;
> +
> +    if (!block) {
> +        error_report("%s: invalid block name: %s", __func__, block_name);
> +        return -1;
> +    }
> +
> +    nbits = block->used_length >> TARGET_PAGE_BITS;
> +
> +    /*
> +     * Make sure the tmp bitmap buffer is big enough, e.g., on 32bit
> +     * machines we may need 4 more bytes for padding (see below
> +     * comment). So extend it a bit beforehand.
> +     */
> +    le_bitmap = bitmap_new(nbits + BITS_PER_LONG);
> +
> +    /*
> +     * Always use little endian when sending the bitmap. This is
> +     * required when source and destination VMs are not using the
> +     * same endianness. (Note: big endian won't work.)
> +     */
> +    bitmap_to_le(le_bitmap, block->receivedmap, nbits);
> +
> +    /* Size of the bitmap, in bytes */
> +    size = nbits / 8;
> +
> +    /*
> +     * size is always aligned to 8 bytes for 64bit machines, but it
> +     * may not be true for 32bit machines. We need this padding to
> +     * make sure the migration can survive even between 32bit and
> +     * 64bit machines.
> +     */
> +    size = ROUND_UP(size, 8);
> +
> +    qemu_put_be64(file, size);
> +    qemu_put_buffer(file, (const uint8_t *)le_bitmap, size);
> +    /*
> +     * Mark as an end, in case the middle part is screwed up due to
> +     * some "mysterious" reason.
> +     */
> +    qemu_put_be64(file, RAMBLOCK_RECV_BITMAP_ENDING);
> +    qemu_fflush(file);
> +
> +    free(le_bitmap);
> +
> +    if (qemu_file_get_error(file)) {
> +        return qemu_file_get_error(file);
> +    }
> +
> +    return size + sizeof(size);
> +}
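
The padding rule in the function above is easy to check on its own. `recv_bitmap_wire_size()` below is an illustrative helper mirroring the `size`/`ROUND_UP` computation, under the assumption that `nbits` is the block length in target pages:

```c
#include <stdint.h>

#define ROUND_UP_8(n) (((n) + 7u) & ~(uint64_t)7)

/* Bytes of bitmap payload put on the wire for a block of nbits pages:
 * nbits/8, rounded up to an 8-byte boundary so that 32-bit and 64-bit
 * hosts agree on the length (the words themselves go out little endian). */
static uint64_t recv_bitmap_wire_size(uint64_t nbits)
{
    return ROUND_UP_8(nbits / 8);
}
```

Both sides computing the same padded size is what lets the destination stream the bitmap and the source read exactly that many bytes back before checking the end mark.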
> +
>  /*
>   * An outstanding page request, on the source, having been received
>   * and queued
> @@ -2985,6 +3049,86 @@ static bool ram_has_postcopy(void *opaque)
>      return migrate_postcopy_ram();
>  }
>  
> +/*
> + * Read the received bitmap, revert it as the initial dirty bitmap.
> + * This is only used when the postcopy migration is paused but wants
> + * to resume from a middle point.
> + */
> +int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *block)
> +{
> +    int ret = -EINVAL;
> +    QEMUFile *file = s->rp_state.from_dst_file;
> +    unsigned long *le_bitmap, nbits = block->used_length >> TARGET_PAGE_BITS;
> +    uint64_t local_size = nbits / 8;
> +    uint64_t size, end_mark;
> +
> +    trace_ram_dirty_bitmap_reload_begin(block->idstr);
> +
> +    if (s->state != MIGRATION_STATUS_POSTCOPY_RECOVER) {
> +        error_report("%s: incorrect state %s", __func__,
> +                     MigrationStatus_str(s->state));
> +        return -EINVAL;
> +    }
> +
> +    /*
> +     * Note: see comments in ramblock_recv_bitmap_send() on why we
> +     * need the endianness conversion, and the padding.
> +     */
> +    local_size = ROUND_UP(local_size, 8);
> +
> +    /* Add addings */
> +    le_bitmap = bitmap_new(nbits + BITS_PER_LONG);
> +
> +    size = qemu_get_be64(file);
> +
> +    /* The size of the bitmap should match our ramblock */
> +    if (size != local_size) {
> +        error_report("%s: ramblock '%s' bitmap size mismatch "
> +                     "(0x%"PRIx64" != 0x%"PRIx64")", __func__,
> +                     block->idstr, size, local_size);
> +        ret = -EINVAL;
> +        goto out;
> +    }
> +
> +    size = qemu_get_buffer(file, (uint8_t *)le_bitmap, local_size);
> +    end_mark = qemu_get_be64(file);
> +
> +    ret = qemu_file_get_error(file);
> +    if (ret || size != local_size) {
> +        error_report("%s: read bitmap failed for ramblock '%s': %d"
> +                     " (size 0x%"PRIx64", got: 0x%"PRIx64")",
> +                     __func__, block->idstr, ret, local_size, size);
> +        ret = -EIO;
> +        goto out;
> +    }
> +
> +    if (end_mark != RAMBLOCK_RECV_BITMAP_ENDING) {
> +        error_report("%s: ramblock '%s' end mark incorrect: 0x%"PRIx64,
> +                     __func__, block->idstr, end_mark);
> +        ret = -EINVAL;
> +        goto out;
> +    }
> +
> +    /*
> +     * Endianness conversion. We are in postcopy (though paused).
> +     * The dirty bitmap won't change. We can directly modify it.
> +     */
> +    bitmap_from_le(block->bmap, le_bitmap, nbits);
> +
> +    /*
> +     * What we received is "received bitmap". Revert it as the initial
> +     * dirty bitmap for this ramblock.
> +     */
> +    bitmap_complement(block->bmap, block->bmap, nbits);
> +
> +    trace_ram_dirty_bitmap_reload_complete(block->idstr);
> +
> +    ret = 0;
> +out:
> +    free(le_bitmap);
> +    return ret;
> +}
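
The final inversion step can also be demonstrated standalone. This is a simplified, hypothetical version of `bitmap_complement()` working over whole words:

```c
#include <limits.h>

#define BITS_PER_ULONG (sizeof(unsigned long) * CHAR_BIT)

/* Turn a "received" bitmap into an initial dirty bitmap: every page
 * that was NOT received yet becomes dirty and will be re-sent.
 * (Simplified: trailing bits past nbits are inverted too, which is
 * harmless when the caller only reads the first nbits bits.) */
static void complement_bitmap(unsigned long *dst, const unsigned long *src,
                              unsigned long nbits)
{
    unsigned long i, words = (nbits + BITS_PER_ULONG - 1) / BITS_PER_ULONG;

    for (i = 0; i < words; i++) {
        dst[i] = ~src[i];
    }
}
```

The inversion is what makes the recovery conservative: any page whose receipt cannot be proven is treated as dirty and sent again.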
> +
>  static SaveVMHandlers savevm_ram_handlers = {
>      .save_setup = ram_save_setup,
>      .save_live_iterate = ram_save_iterate,
> diff --git a/migration/ram.h b/migration/ram.h
> index 64d81e9f1d..10a459cc89 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -61,5 +61,8 @@ void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
>  int ramblock_recv_bitmap_test(RAMBlock *rb, void *host_addr);
>  void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
>  void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr, size_t nr);
> +int64_t ramblock_recv_bitmap_send(QEMUFile *file,
> +                                  const char *block_name);
> +int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *rb);
>  
>  #endif
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 0f61da3ebb..2148b198c7 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1811,7 +1811,7 @@ static int loadvm_handle_recv_bitmap(MigrationIncomingState *mis,
>          return -EINVAL;
>      }
>  
> -    /* TODO: send the bitmap back to source */
> +    migrate_send_rp_recv_bitmap(mis, block_name);
>  
>      trace_loadvm_handle_recv_bitmap(block_name);
>  
> diff --git a/migration/trace-events b/migration/trace-events
> index 55c0412aaa..3dcf8a93d9 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -79,6 +79,8 @@ ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
>  ram_postcopy_send_discard_bitmap(void) ""
>  ram_save_page(const char *rbname, uint64_t offset, void *host) "%s: offset: 0x%" PRIx64 " host: %p"
>  ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: 0x%zx len: 0x%zx"
> +ram_dirty_bitmap_reload_begin(char *str) "%s"
> +ram_dirty_bitmap_reload_complete(char *str) "%s"
>  
>  # migration/migration.c
>  await_return_path_close_on_source_close(void) ""
> @@ -90,6 +92,7 @@ migrate_fd_cancel(void) ""
>  migrate_handle_rp_req_pages(const char *rbname, size_t start, size_t len) "in %s at 0x%zx len 0x%zx"
>  migrate_pending(uint64_t size, uint64_t max, uint64_t post, uint64_t nonpost) "pending size %" PRIu64 " max %" PRIu64 " (post=%" PRIu64 " nonpost=%" PRIu64 ")"
>  migrate_send_rp_message(int msg_type, uint16_t len) "%d: len %d"
> +migrate_send_rp_recv_bitmap(char *name, int64_t size) "block '%s' size 0x%"PRIi64
>  migration_completion_file_err(void) ""
>  migration_completion_postcopy_end(void) ""
>  migration_completion_postcopy_end_after_complete(void) ""
> -- 
> 2.13.6
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Qemu-devel] [PATCH v4 20/32] migration: synchronize dirty bitmap for resume
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 20/32] migration: synchronize dirty bitmap for resume Peter Xu
@ 2017-11-30 18:40   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-11-30 18:40 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> This patch implements the first part of core RAM resume logic for
> postcopy. ram_resume_prepare() is provided for the work.
> 
> When the migration is interrupted by a network failure, the dirty bitmap
> on the source side will be meaningless, because even if a dirty bit is
> cleared, it is still possible that the sent page was lost along the way
> to the destination. Here, instead of continuing the migration with the
> old dirty bitmap on the source, we ask the destination side to send back
> its received bitmap, then invert it to be our initial dirty bitmap.
> 
> The source side send thread will issue the MIG_CMD_RECV_BITMAP requests,
> once per ramblock, to ask for the received bitmap. On destination side,
> MIG_RP_MSG_RECV_BITMAP will be issued, along with the requested bitmap.
> Data will be received on the return-path thread of source, and the main
> migration thread will be notified when all the ramblock bitmaps are
> synchronized.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/migration.c  |  3 +++
>  migration/migration.h  |  1 +
>  migration/ram.c        | 47 +++++++++++++++++++++++++++++++++++++++++++++++
>  migration/trace-events |  4 ++++
>  4 files changed, 55 insertions(+)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 4dc34ed8ce..5b1fbe5b98 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -2843,6 +2843,7 @@ static void migration_instance_finalize(Object *obj)
>      g_free(params->tls_hostname);
>      g_free(params->tls_creds);
>      qemu_sem_destroy(&ms->pause_sem);
> +    qemu_sem_destroy(&ms->rp_state.rp_sem);
>  }
>  
>  static void migration_instance_init(Object *obj)
> @@ -2871,6 +2872,8 @@ static void migration_instance_init(Object *obj)
>      params->has_x_multifd_channels = true;
>      params->has_x_multifd_page_count = true;
>      params->has_xbzrle_cache_size = true;
> +
> +    qemu_sem_init(&ms->rp_state.rp_sem, 0);
>  }
>  
>  /*
> diff --git a/migration/migration.h b/migration/migration.h
> index 11fbfebba1..82dd7d9820 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -108,6 +108,7 @@ struct MigrationState
>          QEMUFile     *from_dst_file;
>          QemuThread    rp_thread;
>          bool          error;
> +        QemuSemaphore rp_sem;
>      } rp_state;
>  
>      double mbps;
> diff --git a/migration/ram.c b/migration/ram.c
> index b30c669476..49627ca9fc 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -49,6 +49,7 @@
>  #include "qemu/rcu_queue.h"
>  #include "migration/colo.h"
>  #include "migration/block.h"
> +#include "savevm.h"
>  
>  /***********************************************************/
>  /* ram save/restore */
> @@ -3049,6 +3050,38 @@ static bool ram_has_postcopy(void *opaque)
>      return migrate_postcopy_ram();
>  }
>  
> +/* Sync all the dirty bitmaps with the destination VM.  */
> +static int ram_dirty_bitmap_sync_all(MigrationState *s, RAMState *rs)
> +{
> +    RAMBlock *block;
> +    QEMUFile *file = s->to_dst_file;
> +    int ramblock_count = 0;
> +
> +    trace_ram_dirty_bitmap_sync_start();
> +
> +    RAMBLOCK_FOREACH(block) {
> +        qemu_savevm_send_recv_bitmap(file, block->idstr);
> +        trace_ram_dirty_bitmap_request(block->idstr);
> +        ramblock_count++;
> +    }
> +
> +    trace_ram_dirty_bitmap_sync_wait();
> +
> +    /* Wait until all the ramblocks' dirty bitmaps are synced */
> +    while (ramblock_count--) {
> +        qemu_sem_wait(&s->rp_state.rp_sem);
> +    }
> +
> +    trace_ram_dirty_bitmap_sync_complete();
> +
> +    return 0;
> +}
> +
> +static void ram_dirty_bitmap_reload_notify(MigrationState *s)
> +{
> +    qemu_sem_post(&s->rp_state.rp_sem);
> +}
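
The request/wait pairing above is a plain counting-semaphore pattern: issue one request per ramblock, then wait as many times on a single semaphore. A minimal sketch with POSIX semaphores follows; `sync_all_blocks()` is illustrative, assuming each completed bitmap reload posts the semaphore exactly once:

```c
#include <semaphore.h>

/* Source side: after sending one MIG_CMD_RECV_BITMAP per ramblock,
 * block until the return-path thread has posted once per reloaded
 * bitmap -- i.e. wait ramblock_count times on the same semaphore. */
static void sync_all_blocks(sem_t *rp_sem, int ramblock_count)
{
    while (ramblock_count--) {
        sem_wait(rp_sem);
    }
}
```

This is why the series can drop `RAMState.ramblock_to_sync` in favor of a local counter: the semaphore itself carries the count.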
> +
>  /*
>   * Read the received bitmap, revert it as the initial dirty bitmap.
>   * This is only used when the postcopy migration is paused but wants
> @@ -3123,12 +3156,25 @@ int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *block)
>  
>      trace_ram_dirty_bitmap_reload_complete(block->idstr);
>  
> +    /*
> +     * We succeeded in syncing the bitmap for the current ramblock.
> +     * If this is the last one to sync, notify the main send thread.
> +     */
> +    ram_dirty_bitmap_reload_notify(s);
> +
>      ret = 0;
>  out:
>      free(le_bitmap);
>      return ret;
>  }
>  
> +static int ram_resume_prepare(MigrationState *s, void *opaque)
> +{
> +    RAMState *rs = *(RAMState **)opaque;
> +
> +    return ram_dirty_bitmap_sync_all(s, rs);
> +}
> +
>  static SaveVMHandlers savevm_ram_handlers = {
>      .save_setup = ram_save_setup,
>      .save_live_iterate = ram_save_iterate,
> @@ -3140,6 +3186,7 @@ static SaveVMHandlers savevm_ram_handlers = {
>      .save_cleanup = ram_save_cleanup,
>      .load_setup = ram_load_setup,
>      .load_cleanup = ram_load_cleanup,
> +    .resume_prepare = ram_resume_prepare,
>  };
>  
>  void ram_mig_init(void)
> diff --git a/migration/trace-events b/migration/trace-events
> index eadabf03e8..804f18d492 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -82,8 +82,12 @@ ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
>  ram_postcopy_send_discard_bitmap(void) ""
>  ram_save_page(const char *rbname, uint64_t offset, void *host) "%s: offset: 0x%" PRIx64 " host: %p"
>  ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: 0x%zx len: 0x%zx"
> +ram_dirty_bitmap_request(char *str) "%s"
>  ram_dirty_bitmap_reload_begin(char *str) "%s"
>  ram_dirty_bitmap_reload_complete(char *str) "%s"
> +ram_dirty_bitmap_sync_start(void) ""
> +ram_dirty_bitmap_sync_wait(void) ""
> +ram_dirty_bitmap_sync_complete(void) ""
>  
>  # migration/migration.c
>  await_return_path_close_on_source_close(void) ""
> -- 
> 2.13.6
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery
  2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
                   ` (31 preceding siblings ...)
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 32/32] migration, hmp: new command "migrate_pause" Peter Xu
@ 2017-11-30 20:00 ` Dr. David Alan Gilbert
  2017-12-01 10:23   ` Peter Xu
  32 siblings, 1 reply; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-11-30 20:00 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> Tree is pushed here for better reference and testing:
>   github.com/xzpeter postcopy-recovery-support

Hi Peter,
  Do you have a git with this code + your OOB world in?
I'd like to play with doing recovery and see what happens;
I still worry a bit about whether the (potentially hung) main loop
is needed for the new incoming connection to be accepted by the
destination.

Dave

> Please review, thanks.
> 
> v4:
> - fix two compile errors that patchew reported
> - for QMP: do s/2.11/2.12/g
> - fix migrate-incoming logic to be more strict
> 
> v3:
> - add r-bs correspondingly
> - in ram_load_postcopy() capture error if postcopy_place_page() failed
>   [Dave]
> - remove "break" if there is a "goto" before that [Dave]
> - ram_dirty_bitmap_reload(): use PRIx64 where needed, add some more
>   print sizes [Dave]
> - remove RAMState.ramblock_to_sync, instead use local counter [Dave]
> - init tag in tcp_start_incoming_migration() [Dave]
> - more traces when transmiting the recv bitmap [Dave]
> - postcopy_pause_incoming(): do shutdown before taking rp lock [Dave]
> - add one more patch to postpone the state switch of postcopy-active [Dave]
> - refactor the migrate_incoming handling according to the email
>   discussion [Dave]
> - add manual trigger to pause postcopy (two new patches added to
>   introduce "migrate-pause" command for QMP/HMP). [Dave]
> 
> v2 note (the coarse-grained changelog):
> 
> - I appended the migrate-incoming re-use series into this one, since
>   that one depends on this one, and it's really for the recovery
> 
> - I haven't yet added (actually I just added them but removed) the
>   per-monitor thread related patches into this one, basically to setup
>   "need-bql"="false" patches - the solution for the monitor hang issue
>   is still during discussion in the other thread.  I'll add them in
>   when settled.
> 
> - Quite a lot of other changes and additions in response to v1 review
>   comments.  I think I settled all the comments, but God knows
>   better.
> 
> Feel free to skip this ugly longer changelog (it's too long to be
> meaningful I'm afraid).
> 
> Tree: github.com/xzpeter postcopy-recovery-support
> 
> v2:
> - rebased to alexey's received bitmap v9
> - add Dave's r-bs for patches: 2/5/6/8/9/13/14/15/16/20/21
> - patch 1: use target page size to calc bitmap [Dave]
> - patch 3: move trace_*() after EINTR check [Dave]
> - patch 4: dropped since I can use bitmap_complement() [Dave]
> - patch 7: check file error right after data is read in both
>   qemu_loadvm_section_start_full() and qemu_loadvm_section_part_end(),
>   meanwhile also check in check_section_footer() [Dave]
> - patch 8/9: fix error_report/commit message in both patches [Dave]
> - patch 10: dropped (new parameter "x-postcopy-fast")
> - patch 11: split the "postcopy-paused" patch into two, one to
>   introduce the new state, the other to implement the logic. Also,
>   print something when paused [Dave]
> - patch 17: removed do_resume label, introduced migration_prepare()
>   [Dave]
> - patch 18: removed do_pause label using a new loop [Dave]
> - patch 20: removed incorrect comment [Dave]
> - patch 21: use 256B buffer in qemu_savevm_send_recv_bitmap(), add
>   trace in loadvm_handle_recv_bitmap() [Dave]
> - patch 22: fix MIG_RP_MSG_RECV_BITMAP for (1) endianess (2) 32/64bit
>   machines. More info in the commit message update.
> - patch 23: add one check on migration state [Dave]
> - patch 24: use macro instead of magic 1 [Dave]
> - patch 26: use more trace_*() instead of one, and use one sem to
>   replace mutex+cond. [Dave]
> - move sem init/destroy into migration_instance_init() and
>   migration_instance_finalize (new function after rebase).
> - patch 29: squashed this patch most into:
>   "migration: implement "postcopy-pause" src logic" [Dave]
> - split the two fix patches out of the series
> - fixed two places where I misused "wake/woke/woken". [Dave]
> - add new patch "bitmap: provide to_le/from_le helpers" to solve the
>   bitmap endianess issue [Dave]
> - appended migrate_incoming series to this series, since that one is
>   depending on the paused state.  Using explicit g_source_remove() for
>   listening ports [Dan]
> 
> FUTURE TODO LIST
> - support migrate_cancel during PAUSED/RECOVER state
> - when anything wrong happens during PAUSED/RECOVER, switching back to
>   PAUSED state on both sides
> 
> As we all know, postcopy migration carries a potential risk of losing
> the VM if the network breaks during the migration. This series
> tries to solve the problem by allowing the migration to pause at the
> failure point, and to recover after the link is reconnected.
> 
> There was existing work on this issue from Md Haris Iqbal:
> 
> https://lists.nongnu.org/archive/html/qemu-devel/2016-08/msg03468.html
> 
> This series is a totally re-work of the issue, based on Alexey
> Perevalov's recved bitmap v8 series:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg06401.html
> 
> Two new statuses are added to support the migration (used on both
> sides):
> 
>   MIGRATION_STATUS_POSTCOPY_PAUSED
>   MIGRATION_STATUS_POSTCOPY_RECOVER
> 
> The MIGRATION_STATUS_POSTCOPY_PAUSED state will be set when the
> network failure is detected. It is a phase that we may stay in for a
> long time once the failure is detected, until a recovery is
> triggered.  In this state, all the threads (on source:
> send thread, return-path thread; destination: ram-load thread,
> page-fault thread) will be halted.
> 
> The MIGRATION_STATUS_POSTCOPY_RECOVER state is short. If we trigger
> a recovery, both the source and destination VMs will jump into this
> stage and do whatever is needed to prepare the recovery (e.g.,
> currently the most important thing is to synchronize the dirty bitmap;
> please see the commit messages for more information). After the
> preparation is ready, the source will do the final handshake with the
> destination, then both sides will switch back to
> MIGRATION_STATUS_POSTCOPY_ACTIVE again.
> 
> New commands/messages are defined as well to satisfy the need:
> 
> MIG_CMD_RECV_BITMAP & MIG_RP_MSG_RECV_BITMAP are introduced for
> delivering received bitmaps
> 
> MIG_CMD_RESUME & MIG_RP_MSG_RESUME_ACK are introduced to do the final
> handshake of postcopy recovery.
> 
> Here's some more details on how the whole failure/recovery routine is
> happened:
> 
> - start migration
> - ... (switch from precopy to postcopy)
> - both sides are in "postcopy-active" state
> - ... (failure happened, e.g., network unplugged)
> - both sides switch to "postcopy-paused" state
>   - all the migration threads are stopped on both sides
> - ... (both VMs hanged)
> - ... (user triggers recovery using "migrate -r -d tcp:HOST:PORT" on
>   source side, "-r" means "recover")
> - both sides switch to "postcopy-recover" state
>   - on source: send-thread, return-path-thread will be woken up
>   - on dest: ram-load-thread woken up, fault-thread still paused
> - source calls new savevmhandler hook resume_prepare() (currently,
>   only ram is providing the hook):
>   - ram_resume_prepare(): for each ramblock, fetch recved bitmap by:
>     - src sends MIG_CMD_RECV_BITMAP to dst
>     - dst replies MIG_RP_MSG_RECV_BITMAP to src, with bitmap data
>       - src uses the recved bitmap to rebuild dirty bitmap
> - source do final handshake with destination
>   - src sends MIG_CMD_RESUME to dst, telling "src is ready"
>     - when dst receives the command, fault thread will be woken up,
>       meanwhile, dst switch back to "postcopy-active"
>   - dst sends MIG_RP_MSG_RESUME_ACK to src, telling "dst is ready"
>     - when src receives the ack, state switch to "postcopy-active"
> - postcopy migration continued
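
The walk-through above amounts to a small state machine. The enum names below mirror the MIGRATION_STATUS_* values, but `on_event()` and its event strings are purely illustrative, not the QEMU implementation:

```c
#include <string.h>

typedef enum {
    POSTCOPY_ACTIVE,
    POSTCOPY_PAUSED,
    POSTCOPY_RECOVER,
} PostcopyState;

/* Transitions described above: a failure pauses both sides,
 * "migrate -r" enters recovery, and the final RESUME/RESUME_ACK
 * handshake returns both sides to active. Unknown events are ignored. */
static PostcopyState on_event(PostcopyState s, const char *ev)
{
    if (s == POSTCOPY_ACTIVE && !strcmp(ev, "network-failure")) {
        return POSTCOPY_PAUSED;
    }
    if (s == POSTCOPY_PAUSED && !strcmp(ev, "migrate-resume")) {
        return POSTCOPY_RECOVER;
    }
    if (s == POSTCOPY_RECOVER && !strcmp(ev, "resume-handshake-done")) {
        return POSTCOPY_ACTIVE;
    }
    return s;
}
```

Note how recovery is only reachable from the paused state, matching the stricter migrate-incoming checks mentioned in the v4 changelog.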
> 
> Testing:
> 
> As I said, it's still an extremely simple test. I used socat to create
> a socket bridge:
> 
>   socat tcp-listen:6666 tcp-connect:localhost:5555 &
> 
> Then do the migration via the bridge. I emulated the network failure
> by killing the socat process (bridge down), then tried to recover the
> migration using the other channel (default dst channel). It looks
> like:
> 
>         port:6666    +------------------+
>         +----------> | socat bridge [1] |-------+
>         |            +------------------+       |
>         |         (Original channel)            |
>         |                                       | port: 5555
>      +---------+  (Recovery channel)            +--->+---------+
>      | src VM  |------------------------------------>| dst VM  |
>      +---------+                                     +---------+
> 
> Known issues/notes:
> 
> - currently destination listening port still cannot change. E.g., the
>   recovery should be using the same port on destination for
>   simplicity. (on source, we can specify new URL)
> 
> - the patch: "migration: let dst listen on port always" is still
>   hacky, it just kept the incoming accept open forever for now...
> 
> - some migration numbers might still be inaccurate, like total
>   migration time, etc. (But I don't really think that matters much
>   now)
> 
> - the patches are very lightly tested.
> 
> - Dave reported one problem that may hang destination main loop thread
>   (one vcpu thread holds the BQL) and the rest. I haven't encountered
>   it yet, but that does not mean this series is immune to it.
> 
> - other potential issues that I may have forgotten or unnoticed...
> 
> Anyway, the work is still in preliminary stage. Any suggestions and
> comments are greatly welcomed.  Thanks.
> 
> Peter Xu (32):
>   migration: better error handling with QEMUFile
>   migration: reuse mis->userfault_quit_fd
>   migration: provide postcopy_fault_thread_notify()
>   migration: new postcopy-pause state
>   migration: implement "postcopy-pause" src logic
>   migration: allow dst vm pause on postcopy
>   migration: allow src return path to pause
>   migration: allow send_rq to fail
>   migration: allow fault thread to pause
>   qmp: hmp: add migrate "resume" option
>   migration: pass MigrationState to migrate_init()
>   migration: rebuild channel on source
>   migration: new state "postcopy-recover"
>   migration: wakeup dst ram-load-thread for recover
>   migration: new cmd MIG_CMD_RECV_BITMAP
>   migration: new message MIG_RP_MSG_RECV_BITMAP
>   migration: new cmd MIG_CMD_POSTCOPY_RESUME
>   migration: new message MIG_RP_MSG_RESUME_ACK
>   migration: introduce SaveVMHandlers.resume_prepare
>   migration: synchronize dirty bitmap for resume
>   migration: setup ramstate for resume
>   migration: final handshake for the resume
>   migration: free SocketAddress where allocated
>   migration: return incoming task tag for sockets
>   migration: return incoming task tag for exec
>   migration: return incoming task tag for fd
>   migration: store listen task tag
>   migration: allow migrate_incoming for paused VM
>   migration: init dst in migration_object_init too
>   migration: delay the postcopy-active state switch
>   migration, qmp: new command "migrate-pause"
>   migration, hmp: new command "migrate_pause"
> 
>  hmp-commands.hx              |  21 +-
>  hmp.c                        |  13 +-
>  hmp.h                        |   1 +
>  include/migration/register.h |   2 +
>  migration/exec.c             |  20 +-
>  migration/exec.h             |   2 +-
>  migration/fd.c               |  20 +-
>  migration/fd.h               |   2 +-
>  migration/migration.c        | 609 ++++++++++++++++++++++++++++++++++++++-----
>  migration/migration.h        |  26 +-
>  migration/postcopy-ram.c     | 110 ++++++--
>  migration/postcopy-ram.h     |   2 +
>  migration/ram.c              | 252 +++++++++++++++++-
>  migration/ram.h              |   3 +
>  migration/savevm.c           | 240 ++++++++++++++++-
>  migration/savevm.h           |   3 +
>  migration/socket.c           |  44 ++--
>  migration/socket.h           |   4 +-
>  migration/trace-events       |  23 ++
>  qapi/migration.json          |  34 ++-
>  20 files changed, 1283 insertions(+), 148 deletions(-)
> 
> -- 
> 2.13.6
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Qemu-devel] [PATCH v4 01/32] migration: better error handling with QEMUFile
  2017-11-30 10:24   ` Dr. David Alan Gilbert
@ 2017-12-01  8:39     ` Peter Xu
  0 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-12-01  8:39 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

On Thu, Nov 30, 2017 at 10:24:38AM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > If postcopy goes down for some reason, we can always see this on dst:
> > 
> >   qemu-system-x86_64: RP: Received invalid message 0x0000 length 0x0000
> > 
> > However in most cases that's not the real issue. The problem is that
> > qemu_get_be16() has no way to show whether the returned data is valid or
> > not, and we are _always_ assuming it is valid. That's possibly not wise.
> > 
> > The best approach would be to refactor the QEMUFile interface so the
> > APIs can return errors when they occur. However that needs quite a bit
> > of work and testing. For now, let's explicitly check the validity of
> > the data before using it at all the qemu_get_*() call sites.
> > 
> > This patch tries to fix most of the cases I can see. Only with this
> > can we make sure we are processing valid data, and that we can capture
> > channel-down events correctly.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  migration/migration.c |  5 +++++
> >  migration/ram.c       | 26 ++++++++++++++++++++++----
> >  migration/savevm.c    | 40 ++++++++++++++++++++++++++++++++++++++--
> >  3 files changed, 65 insertions(+), 6 deletions(-)
> > 
> > diff --git a/migration/migration.c b/migration/migration.c
> > index c0206023d7..eae34d0524 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -1708,6 +1708,11 @@ static void *source_return_path_thread(void *opaque)
> >          header_type = qemu_get_be16(rp);
> >          header_len = qemu_get_be16(rp);
> >  
> > +        if (qemu_file_get_error(rp)) {
> > +            mark_source_rp_bad(ms);
> > +            goto out;
> > +        }
> > +
> >          if (header_type >= MIG_RP_MSG_MAX ||
> >              header_type == MIG_RP_MSG_INVALID) {
> >              error_report("RP: Received invalid message 0x%04x length 0x%04x",
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 8620aa400a..960c726ff2 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -2687,7 +2687,7 @@ static int ram_load_postcopy(QEMUFile *f)
> >      void *last_host = NULL;
> >      bool all_zero = false;
> >  
> > -    while (!ret && !(flags & RAM_SAVE_FLAG_EOS)) {
> > +    while (!(flags & RAM_SAVE_FLAG_EOS)) {
> 
> I still think you need to keep the !ret && - see below;
> anyway, there's no harm in keeping it!

Fair enough; I'll keep it no matter what. :-)

> 
> >          ram_addr_t addr;
> >          void *host = NULL;
> >          void *page_buffer = NULL;
> > @@ -2696,6 +2696,16 @@ static int ram_load_postcopy(QEMUFile *f)
> >          uint8_t ch;
> >  
> >          addr = qemu_get_be64(f);
> > +
> > +        /*
> > +         * If qemu file error, we should stop here, and then "addr"
> > +         * may be invalid
> > +         */
> > +        ret = qemu_file_get_error(f);
> > +        if (ret) {
> > +            break;
> > +        }
> > +
> >          flags = addr & ~TARGET_PAGE_MASK;
> >          addr &= TARGET_PAGE_MASK;
> >  
> > @@ -2776,6 +2786,13 @@ static int ram_load_postcopy(QEMUFile *f)
> >              error_report("Unknown combination of migration flags: %#x"
> >                           " (postcopy mode)", flags);
> >              ret = -EINVAL;

[1]

> > +            break;
> 
> This 'break' breaks from the switch, but doesn't break the loop and
> because you removed the !ret && from the top, the loop keeps going when
> it shouldn't.

Ah yes I missed this one, thanks.

What I should have written here is a "goto out", and I should also add
an "out" label at the end.  I think after this single change the
current patch should be fine.

However I understand that you would prefer me to check ret every
time.  IMHO it's a matter of taste.  I would prefer the current way of
doing things, since it seems awkward to keep checking (!ret) possibly
multiple times even when we already know it's non-zero (especially when
the failure happens at the beginning of the loop block).  But for this
patch, I can follow your way (since you have asked twice already :).

> 
> > +        }
> > +
> > +        /* Detect for any possible file errors */
> > +        if (qemu_file_get_error(f)) {
> > +            ret = qemu_file_get_error(f);
> > +            break;
> >          }
> 
> This is all simpler if you just leave the !ret && at the top, and then
> make this:
>   if (!ret) {
>       ret = qemu_file_get_error(f);
>   }

Sure.

(So to show what I meant: if we failed at [1] above we still need to
 check this, which is unnecessary imho)

> 
> >  
> >          if (place_needed) {
> 
> Make that
> 
>       if (!ret && place_needed) {

Will do.

(same here: if we failed at [1], we don't actually need to check the
 ret value so many times)

> 
> > @@ -2789,9 +2806,10 @@ static int ram_load_postcopy(QEMUFile *f)
> >                  ret = postcopy_place_page(mis, place_dest,
> >                                            place_source, block);
> >              }
> > -        }
> > -        if (!ret) {
> > -            ret = qemu_file_get_error(f);
> > +
> > +            if (ret) {
> > +                break;
> > +            }
> 
> And with the !ret check at the top this goes again.

Yes, will remove it.  Thanks!

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH v4 05/32] migration: implement "postcopy-pause" src logic
  2017-11-30 10:49   ` Dr. David Alan Gilbert
@ 2017-12-01  8:56     ` Peter Xu
  2017-12-01 10:49       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-12-01  8:56 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

On Thu, Nov 30, 2017 at 10:49:45AM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > Now when the network goes down during postcopy, the source side will
> > not fail the migration. Instead we convert the status into this new
> > paused state, and wait for a rescue in the future.
> > 
> > If a recovery is detected, migration_thread() will reset its local
> > variables to prepare for that.
> > 
> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> 
> That's still OK; you might want to consider reusing the 'pause_sem' that I
> added to MigrationStatus for the other pause case.

Yes I can.  I am just a bit worried about how these two different
features cross-affect each other.  Say, what if something tries to
execute "migrate-continue" during a postcopy network failure?  IMHO it
should not be allowed, but we don't have any protection against that yet.

So I would prefer to still separate these two semaphores.

Though I found that I can move init/destroy of the two new semaphores
(postcopy_pause_sem, postcopy_pause_rp_sem) into object init/destroy
just like what we did for pause_sem, which seems to be cleaner.  I
hope I can still keep your r-b if I do that small change.  Thanks,

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH v4 08/32] migration: allow send_rq to fail
  2017-11-30 12:13   ` Dr. David Alan Gilbert
@ 2017-12-01  9:30     ` Peter Xu
  0 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-12-01  9:30 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

On Thu, Nov 30, 2017 at 12:13:57PM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > Previously we did not allow failures when sending data from the
> > destination to the source via the return path. However it is possible
> > that errors occur along the way.  This patch allows
> > migrate_send_rp_message() to return an error when that happens, and
> > further extends this to migrate_send_rp_req_pages().
> > 
> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  migration/migration.c | 38 ++++++++++++++++++++++++++++++--------
> >  migration/migration.h |  2 +-
> >  2 files changed, 31 insertions(+), 9 deletions(-)
> > 
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 8d93b891e3..db896233f6 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -199,17 +199,35 @@ static void deferred_incoming_migration(Error **errp)
> >   * Send a message on the return channel back to the source
> >   * of the migration.
> >   */
> > -static void migrate_send_rp_message(MigrationIncomingState *mis,
> > -                                    enum mig_rp_message_type message_type,
> > -                                    uint16_t len, void *data)
> > +static int migrate_send_rp_message(MigrationIncomingState *mis,
> > +                                   enum mig_rp_message_type message_type,
> > +                                   uint16_t len, void *data)
> >  {
> > +    int ret = 0;
> > +
> >      trace_migrate_send_rp_message((int)message_type, len);
> >      qemu_mutex_lock(&mis->rp_mutex);
> > +
> > +    /*
> > +     * It's possible that the file handle got lost due to network
> > +     * failures.
> > +     */
> > +    if (!mis->to_src_file) {
> > +        ret = -EIO;
> > +        goto error;
> > +    }
> > +
> >      qemu_put_be16(mis->to_src_file, (unsigned int)message_type);
> >      qemu_put_be16(mis->to_src_file, len);
> >      qemu_put_buffer(mis->to_src_file, data, len);
> >      qemu_fflush(mis->to_src_file);
> > +
> > +    /* It's possible that qemu file got error during sending */
> > +    ret = qemu_file_get_error(mis->to_src_file);
> > +
> > +error:
> >      qemu_mutex_unlock(&mis->rp_mutex);
> > +    return ret;
> >  }
> >  
> >  /* Request a range of pages from the source VM at the given
> > @@ -219,26 +237,30 @@ static void migrate_send_rp_message(MigrationIncomingState *mis,
> >   *   Start: Address offset within the RB
> >   *   Len: Length in bytes required - must be a multiple of pagesize
> >   */
> > -void migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
> > -                               ram_addr_t start, size_t len)
> > +int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
> > +                              ram_addr_t start, size_t len)
> >  {
> >      uint8_t bufc[12 + 1 + 255]; /* start (8), len (4), rbname up to 256 */
> >      size_t msglen = 12; /* start + len */
> > +    int rbname_len;
> > +    enum mig_rp_message_type msg_type;
> >  
> >      *(uint64_t *)bufc = cpu_to_be64((uint64_t)start);
> >      *(uint32_t *)(bufc + 8) = cpu_to_be32((uint32_t)len);
> >  
> >      if (rbname) {
> > -        int rbname_len = strlen(rbname);
> > +        rbname_len = strlen(rbname);
> 
> I don't think that move of the declaration of rbname_len is necessary;
> it's only msglen that you need to keep for longer.

Yes it's not necessary.  I'll avoid touching it.  Thanks,

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH v4 16/32] migration: new message MIG_RP_MSG_RECV_BITMAP
  2017-11-30 17:21   ` Dr. David Alan Gilbert
@ 2017-12-01  9:37     ` Peter Xu
  0 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-12-01  9:37 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

On Thu, Nov 30, 2017 at 05:21:48PM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > Introducing new return path message MIG_RP_MSG_RECV_BITMAP to send
> > received bitmap of ramblock back to source.
> > 
> > This is the reply message to MIG_CMD_RECV_BITMAP: it contains not only
> > the header (including the ramblock name), but is also appended with the
> > whole received bitmap of the ramblock on the destination side.
> > 
> > When the source receives such a reply message (MIG_RP_MSG_RECV_BITMAP),
> > it parses it and converts it to the dirty bitmap by inverting the bits.
> > 
> > One thing to mention is that, when we send the recv bitmap, we do
> > these extra things:
> > 
> > - converting the bitmap to little endian, to support hosts with
> >   different endianness on src/dst.
> > 
> > - doing proper alignment to 8 bytes, to support hosts with different
> >   word sizes (32/64 bits) on src/dst.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> 
> (The comment on the receive side 'Add addings' is a bit odd!
> The send side is much better); other than that:
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Ouch.  It was meant to be "Add paddings". :)

I'll keep the r-b, though, with the fix.  Thanks,

> > +    /* Add addings */
> > +    le_bitmap = bitmap_new(nbits + BITS_PER_LONG);

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery
  2017-11-30 20:00 ` [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Dr. David Alan Gilbert
@ 2017-12-01 10:23   ` Peter Xu
  0 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-12-01 10:23 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Stefan Hajnoczi, Paolo Bonzini,
	Daniel P. Berrange, Fam Zheng, Eric Blake
  Cc: qemu-devel, Alexey Perevalov, Juan Quintela, Andrea Arcangeli

On Thu, Nov 30, 2017 at 08:00:54PM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > Tree is pushed here for better reference and testing:
> >   github.com/xzpeter postcopy-recovery-support
> 
> Hi Peter,
>   Do you have a git with this code + your OOB world in?
> I'd like to play with doing recovery and see what happens;
> I still worry a bit about whether the (potentially hung) main loop
> is needed for the new incoming connection to be accepted by the
> destination.

Good question...

I'd say I thought it was okay.  The reason is that as long as we run
the migrate-incoming command with run-oob=true, it'll run in the
iothread, and our iothread implementation has this in iothread_run():

    g_main_context_push_thread_default(iothread->worker_context);

This _should_ mean that from now on a NULL context will mostly be
replaced with iothread->worker_context (which is the monitor context,
not the main thread any more).  I say "mostly" because there are corner
cases where glib won't use this thread-local variable but still the
global one, though I guessed that should not be our case.

I tried to confirm this by breaking at the entry of
socket_accept_incoming_migration() on the destination side.  Sadly, I
was wrong: it's still running in main().

I found that the problem is that the g_source_attach() implementation
still uses g_main_context_default() rather than
g_main_context_get_thread_default() when context=NULL is passed in.
I don't know whether this is a glib bug:

g_source_attach (GSource      *source,
		 GMainContext *context)
{
  guint result = 0;
  ...
  if (!context)
    context = g_main_context_default ();
  ...
}

I'm CCing some more people who may know better on glib than me.

For now, a simple solution could be to call
g_main_context_get_thread_default() explicitly in the QIO code.  But
I'd also like to hear what other people think.

I'll prepare one branch soon, including the two series (postcopy
recovery + oob), after the solution is settled down.  Thanks,

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH v4 05/32] migration: implement "postcopy-pause" src logic
  2017-12-01  8:56     ` Peter Xu
@ 2017-12-01 10:49       ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-12-01 10:49 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> On Thu, Nov 30, 2017 at 10:49:45AM +0000, Dr. David Alan Gilbert wrote:
> > * Peter Xu (peterx@redhat.com) wrote:
> > > Now when the network goes down during postcopy, the source side will
> > > not fail the migration. Instead we convert the status into this new
> > > paused state, and wait for a rescue in the future.
> > > 
> > > If a recovery is detected, migration_thread() will reset its local
> > > variables to prepare for that.
> > > 
> > > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > 
> > That's still OK; you might want to consider reusing the 'pause_sem' that I
> > added to MigrationStatus for the other pause case.
> 
> Yes I can.  I am just a bit worried about how these two different
> features cross-affect each other.  Say, what if something tries to
> execute "migrate-continue" during a postcopy network failure?  IMHO it
> should not be allowed, but we don't have any protection against that yet.
> 
> So I would prefer to still separate these two semaphores.

Yes, that's fair enough; the semantics might be different enough that
they don't quite fit - but worth keeping in mind.

> Though I found that I can move init/destroy of the two new semaphores
> (postcopy_pause_sem, postcopy_pause_rp_sem) into object init/destroy
> just like what we did for pause_sem, which seems to be cleaner.  I
> hope I can still keep your r-b if I do that small change.  Thanks,

Yes, I think that's OK.

Dave

> -- 
> Peter Xu
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v4 30/32] migration: delay the postcopy-active state switch
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 30/32] migration: delay the postcopy-active state switch Peter Xu
@ 2017-12-01 12:34   ` Dr. David Alan Gilbert
  2017-12-04  4:14     ` Peter Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-12-01 12:34 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> Postpone the state switch until we try to start the VM on the
> destination side.  The problem is that without doing this we may have
> a very small window in which we'll be in such a state:
> 
> - dst VM is in postcopy-active state,
> - main thread is handling MIG_CMD_PACKAGED message, which loads all the
>   device states,
> - ram load thread is reading memory data from source.
> 
> Then if we failed at this point when reading the migration stream we'll
> also switch to postcopy-paused state, but that is not what we want.  If
> device states failed to load, we should fail the migration directly
> instead of pause.
> 
> So postpone the state switch to the point where we have already loaded
> the devices' states and are ready to start running the destination VM.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

If it's the only way, then this is OK; but I'd prefer you use a separate
flag somewhere to let you know this, because this means that
POSTCOPY_ACTIVE on the destination happens at a different point than it
does on the source (and changing it on the source I think will break
lots of things).
Can't you use the PostcopyState value and check if it's in
POSTCOPY_INCOMING_RUNNING?

Dave

> ---
>  migration/savevm.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/migration/savevm.c b/migration/savevm.c
> index bc87b0e5b1..3bc792e320 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1584,8 +1584,6 @@ static void *postcopy_ram_listen_thread(void *opaque)
>      QEMUFile *f = mis->from_src_file;
>      int load_res;
>  
> -    migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
> -                                   MIGRATION_STATUS_POSTCOPY_ACTIVE);
>      qemu_sem_post(&mis->listen_thread_sem);
>      trace_postcopy_ram_listen_thread_start();
>  
> @@ -1748,6 +1746,14 @@ static int loadvm_postcopy_handle_run(MigrationIncomingState *mis)
>          return -1;
>      }
>  
> +    /*
> +     * Declare that we are in postcopy now.  We should already have
> +     * all the device states loaded ready when reach here, and also
> +     * the ram load thread running.
> +     */
> +    migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
> +                                   MIGRATION_STATUS_POSTCOPY_ACTIVE);
> +
>      data = g_new(HandleRunBhData, 1);
>      data->bh = qemu_bh_new(loadvm_postcopy_handle_run_bh, data);
>      qemu_bh_schedule(data->bh);
> -- 
> 2.13.6
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v4 31/32] migration, qmp: new command "migrate-pause"
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 31/32] migration, qmp: new command "migrate-pause" Peter Xu
@ 2017-12-01 16:53   ` Dr. David Alan Gilbert
  2017-12-04  4:48     ` Peter Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-12-01 16:53 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> > It is used to manually trigger the postcopy pause state.  It works just
> > as when we find the migration stream has failed during postcopy, but
> > provides an explicit way for the user in case of mysterious socket hangs.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

Can we change the name to something like 'migrate-disconnect' - pause
is a bit easy to confuse with other things and this is really more
an explicit network disconnect (Is it worth just making it a flag to
migrate-cancel?)


> ---
>  migration/migration.c | 18 ++++++++++++++++++
>  qapi/migration.json   | 22 ++++++++++++++++++++++
>  2 files changed, 40 insertions(+)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index 536a771803..30348a5e27 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1485,6 +1485,24 @@ void qmp_migrate_incoming(const char *uri, Error **errp)
>      once = false;
>  }
>  
> +void qmp_migrate_pause(Error **errp)
> +{
> +    int ret;
> +    MigrationState *ms = migrate_get_current();
> +
> +    if (ms->state != MIGRATION_STATUS_POSTCOPY_ACTIVE) {
> +        error_setg(errp, "Migration pause is currently only allowed during"
> +                   " an active postcopy phase.");
> +        return;
> +    }
> +
> +    ret = qemu_file_shutdown(ms->to_dst_file);
> +
> +    if (ret) {
> +        error_setg(errp, "Failed to pause migration stream.");
> +    }
> +}
> +
>  bool migration_is_blocked(Error **errp)
>  {
>      if (qemu_savevm_state_blocked(errp)) {
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 4a3eff62f1..52901f7e2e 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -1074,6 +1074,28 @@
>  { 'command': 'migrate-incoming', 'data': {'uri': 'str' } }
>  
>  ##
> +# @migrate-pause:
> +#
> > +# Pause a migration.  Currently it can only pause a postcopy
> +# migration.  Pausing a precopy migration is not supported yet.
> +#
> +# It is mostly used as a manual way to trigger the postcopy paused
> +# state when the network sockets hang due to some reason, so that we
> +# can try a recovery afterward.

Can we say this explicitly;
'Force closes the migration connection to trigger the postcopy paused
 state when the network sockets hang due to some reason, so that we
can try a recovery afterwards'

Dave

> +# Returns: nothing on success
> +#
> +# Since: 2.12
> +#
> +# Example:
> +#
> +# -> { "execute": "migrate-pause" }
> +# <- { "return": {} }
> +#
> +##
> +{ 'command': 'migrate-pause' }
> +
> +##
>  # @xen-save-devices-state:
>  #
>  # Save the state of all devices to file. The RAM and the block devices
> -- 
> 2.13.6
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v4 28/32] migration: allow migrate_incoming for paused VM
  2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 28/32] migration: allow migrate_incoming for paused VM Peter Xu
@ 2017-12-01 17:21   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-12-01 17:21 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> > The migrate_incoming command was previously only used when we provided
> > "-incoming defer" on the command line, to defer the incoming migration
> > channel creation.
> 
> > However there is a similar requirement when we are paused during
> > postcopy migration: the old incoming channel might have been destroyed
> > already, and we may need a new channel for the recovery to happen.
> 
> > This patch leverages the same interface, but allows the user to specify
> > an incoming migration channel even for paused postcopy.
> 
> Meanwhile, now migration listening ports are always detached manually
> using the tag, rather than using return values of dispatchers.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

I think this patch is OK now, except for that top level question I've asked
against 00 about how the new incoming ever gets to start up.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/exec.c       |  2 +-
>  migration/fd.c         |  2 +-
>  migration/migration.c  | 58 +++++++++++++++++++++++++++++++++++++++++++-------
>  migration/socket.c     |  4 +---
>  migration/trace-events |  2 ++
>  5 files changed, 55 insertions(+), 13 deletions(-)
> 
> diff --git a/migration/exec.c b/migration/exec.c
> index a0796c2c70..9d20d10899 100644
> --- a/migration/exec.c
> +++ b/migration/exec.c
> @@ -49,7 +49,7 @@ static gboolean exec_accept_incoming_migration(QIOChannel *ioc,
>  {
>      migration_channel_process_incoming(ioc);
>      object_unref(OBJECT(ioc));
> -    return G_SOURCE_REMOVE;
> +    return G_SOURCE_CONTINUE;
>  }
>  
>  /*
> diff --git a/migration/fd.c b/migration/fd.c
> index 7ead2f26cc..54b36888e2 100644
> --- a/migration/fd.c
> +++ b/migration/fd.c
> @@ -49,7 +49,7 @@ static gboolean fd_accept_incoming_migration(QIOChannel *ioc,
>  {
>      migration_channel_process_incoming(ioc);
>      object_unref(OBJECT(ioc));
> -    return G_SOURCE_REMOVE;
> +    return G_SOURCE_CONTINUE;
>  }
>  
>  /*
> diff --git a/migration/migration.c b/migration/migration.c
> index a4cdedcde8..9b7fc56ed8 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -179,6 +179,17 @@ void migration_incoming_state_destroy(void)
>      qemu_event_reset(&mis->main_thread_load_event);
>  }
>  
> +static bool migrate_incoming_detach_listen(MigrationIncomingState *mis)
> +{
> +    if (mis->listen_task_tag) {
> +        /* Never fail */
> +        g_source_remove(mis->listen_task_tag);
> +        mis->listen_task_tag = 0;
> +        return true;
> +    }
> +    return false;
> +}
> +
>  static void migrate_generate_event(int new_state)
>  {
>      if (migrate_use_events()) {
> @@ -463,10 +474,9 @@ void migration_fd_process_incoming(QEMUFile *f)
>  
>      /*
>       * When reach here, we should not need the listening port any
> -     * more. We'll detach the listening task soon, let's reset the
> -     * listen task tag.
> +     * more.  Detach the listening port explicitly.
>       */
> -    mis->listen_task_tag = 0;
> +    migrate_incoming_detach_listen(mis);
>  }
>  
>  void migration_ioc_process_incoming(QIOChannel *ioc)
> @@ -1422,14 +1432,46 @@ void qmp_migrate_incoming(const char *uri, Error **errp)
>  {
>      Error *local_err = NULL;
>      static bool once = true;
> +    MigrationIncomingState *mis = migration_incoming_get_current();
> +
>  
> -    if (!deferred_incoming) {
> -        error_setg(errp, "For use with '-incoming defer'");
> +    if (mis->state == MIGRATION_STATUS_POSTCOPY_PAUSED) {
> +        if (mis->listen_task_tag) {
> +            error_setg(errp, "We already have a listening port!");
> +            return;
> +        } else {
> +            /*
> +            * We are in postcopy-paused state, and we don't have
> +            * listening port.  It's very possible that the old
> +            * listening port is already gone, so we allow to create a
> +            * new one.
> +            *
> +            * NOTE: RDMA migration currently does not really use
> +            * listen_task_tag for now, so even if listen_task_tag is
> +            * zero, RDMA can still have its accept port listening.
> +            * However, RDMA is not supported by postcopy at all (yet), so
> +            * we are safe here.
> +            */
> +            trace_migrate_incoming_recover();
> +        }
> +    } else if (deferred_incoming) {
> +        /*
> +         * We don't need recovery, but we possibly has a deferred
> +         * incoming parameter, this allows us to manually specify
> +         * incoming port once.
> +         */
> +        if (!once) {
> +            error_setg(errp, "The incoming migration has already been started");
> +            return;
> +        } else {
> +            /* PASS */
> +            trace_migrate_incoming_deferred();
> +        }
> +    } else {
> +        error_setg(errp, "Migrate-incoming is only allowed for either "
> +                   "deferred incoming, or postcopy paused stage.");
>          return;
>      }
> -    if (!once) {
> -        error_setg(errp, "The incoming migration has already been started");
> -    }
>  
>      qemu_start_incoming_migration(uri, &local_err);
>  
> diff --git a/migration/socket.c b/migration/socket.c
> index e8f3325155..54095a80a0 100644
> --- a/migration/socket.c
> +++ b/migration/socket.c
> @@ -155,10 +155,8 @@ out:
>      if (migration_has_all_channels()) {
>          /* Close listening socket as its no longer needed */
>          qio_channel_close(ioc, NULL);
> -        return G_SOURCE_REMOVE;
> -    } else {
> -        return G_SOURCE_CONTINUE;
>      }
> +    return G_SOURCE_CONTINUE;
>  }
>  
>  
> diff --git a/migration/trace-events b/migration/trace-events
> index 98c2e4de58..65b1c7e459 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -136,6 +136,8 @@ process_incoming_migration_co_end(int ret, int ps) "ret=%d postcopy-state=%d"
>  process_incoming_migration_co_postcopy_end_main(void) ""
>  migration_set_incoming_channel(void *ioc, const char *ioctype) "ioc=%p ioctype=%s"
>  migration_set_outgoing_channel(void *ioc, const char *ioctype, const char *hostname)  "ioc=%p ioctype=%s hostname=%s"
> +migrate_incoming_deferred(void) ""
> +migrate_incoming_recover(void) ""
>  
>  # migration/rdma.c
>  qemu_rdma_accept_incoming_migration(void) ""
> -- 
> 2.13.6
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [Qemu-devel] [PATCH v4 30/32] migration: delay the postcopy-active state switch
  2017-12-01 12:34   ` Dr. David Alan Gilbert
@ 2017-12-04  4:14     ` Peter Xu
  0 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-12-04  4:14 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

On Fri, Dec 01, 2017 at 12:34:32PM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > Don't switch the state until we try to start the VM on the destination
> > side.  The problem is that without doing this we may have a very small
> > window in which we'll be in such a state:
> > 
> > - dst VM is in postcopy-active state,
> > - main thread is handling MIG_CMD_PACKAGED message, which loads all the
> >   device states,
> > - ram load thread is reading memory data from source.
> > 
> > Then if we fail at this point while reading the migration stream we'll
> > also switch to postcopy-paused state, but that is not what we want.  If
> > the device states failed to load, we should fail the migration directly
> > instead of pausing.
> > 
> > Postpone the state switch to the point when we have already loaded the
> > devices' states and are ready to start running the destination VM.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> 
> If it's the only way, then this is OK; but I'd prefer you use a separate
> flag somewhere to let you know this, because this means that
> POSTCOPY_ACTIVE on the destination happens at a different point than it
> does on the source (and changing it on the source I think will break
> lots of things).

Yes, I thought it was fine to postpone it a bit, even to the point
after receiving the packaged data, but I fully understand your point.

> Can't you use the PostcopyState value and check if it's in
> POSTCOPY_INCOMING_RUNNING?

I think, yes. :)

Let me drop this patch; instead I'll check explicitly for the
POSTCOPY_INCOMING_RUNNING state in patch 6.  Thanks,

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH v4 31/32] migration, qmp: new command "migrate-pause"
  2017-12-01 16:53   ` Dr. David Alan Gilbert
@ 2017-12-04  4:48     ` Peter Xu
  2017-12-04 17:10       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 53+ messages in thread
From: Peter Xu @ 2017-12-04  4:48 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

On Fri, Dec 01, 2017 at 04:53:28PM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > It is used to manually trigger the postcopy pause state.  It works just
> > like when we find the migration stream has failed during postcopy, but
> > provides an explicit way for the user in case of mysterious socket hangs.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> 
> Can we change the name to something like 'migrate-disconnect' - pause
> is a bit easy to confuse with other things and this is really more
> an explicit network disconnect (Is it worth just making it a flag to
> migrate-cancel?)

Then I would prefer to reuse the migrate_cancel command.  

Actually this reminded me of what would happen now if someone on the
src VM sends a "migrate_cancel" during postcopy-active.  It should
crash the VM, right?

Considering the above, I'm wondering whether we should just make it
the default behavior that a migrate_cancel during postcopy-active does
a pause instead of a real cancel.  After all, the source cannot
restart the VM any more, so IMHO a real cancel does not mean much
here.  More importantly, what if someone wants to manually trigger
this pause but accidentally forgets to type that new flag (say,
-D[isconnect])?  It'll crash the VM directly.

What do you think?

> 
> 
> > ---
> >  migration/migration.c | 18 ++++++++++++++++++
> >  qapi/migration.json   | 22 ++++++++++++++++++++++
> >  2 files changed, 40 insertions(+)
> > 
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 536a771803..30348a5e27 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -1485,6 +1485,24 @@ void qmp_migrate_incoming(const char *uri, Error **errp)
> >      once = false;
> >  }
> >  
> > +void qmp_migrate_pause(Error **errp)
> > +{
> > +    int ret;
> > +    MigrationState *ms = migrate_get_current();
> > +
> > +    if (ms->state != MIGRATION_STATUS_POSTCOPY_ACTIVE) {
> > +        error_setg(errp, "Migration pause is currently only allowed during"
> > +                   " an active postcopy phase.");
> > +        return;
> > +    }
> > +
> > +    ret = qemu_file_shutdown(ms->to_dst_file);
> > +
> > +    if (ret) {
> > +        error_setg(errp, "Failed to pause migration stream.");
> > +    }
> > +}
> > +
> >  bool migration_is_blocked(Error **errp)
> >  {
> >      if (qemu_savevm_state_blocked(errp)) {
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index 4a3eff62f1..52901f7e2e 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -1074,6 +1074,28 @@
> >  { 'command': 'migrate-incoming', 'data': {'uri': 'str' } }
> >  
> >  ##
> > +# @migrate-pause:
> > +#
> > +# Pause a migration.  Currently it can only pause a postcopy
> > +# migration.  Pausing a precopy migration is not supported yet.
> > +#
> > +# It is mostly used as a manual way to trigger the postcopy paused
> > +# state when the network sockets hang due to some reason, so that we
> > +# can try a recovery afterward.
> 
> Can we say this explicitly;
> 'Force closes the migration connection to trigger the postcopy paused
>  state when the network sockets hang due to some reason, so that we
> can try a recovery afterwards'

Sure!  I'll just see where I should properly put these sentences.

Thanks,

-- 
Peter Xu


* Re: [Qemu-devel] [PATCH v4 31/32] migration, qmp: new command "migrate-pause"
  2017-12-04  4:48     ` Peter Xu
@ 2017-12-04 17:10       ` Dr. David Alan Gilbert
  2017-12-05  2:52         ` Peter Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Dr. David Alan Gilbert @ 2017-12-04 17:10 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

* Peter Xu (peterx@redhat.com) wrote:
> On Fri, Dec 01, 2017 at 04:53:28PM +0000, Dr. David Alan Gilbert wrote:
> > * Peter Xu (peterx@redhat.com) wrote:
> > > It is used to manually trigger the postcopy pause state.  It works just
> > > like when we find the migration stream has failed during postcopy, but
> > > provides an explicit way for the user in case of mysterious socket hangs.
> > > 
> > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > 
> > Can we change the name to something like 'migrate-disconnect' - pause
> > is a bit easy to confuse with other things and this is really more
> > an explicit network disconnect (Is it worth just making it a flag to
> > migrate-cancel?)
> 
> Then I would prefer to reuse the migrate_cancel command.  
> 
> Actually this reminded me of what would happen now if someone on the
> src VM sends a "migrate_cancel" during postcopy-active.  It should
> crash the VM, right?
> 
> Considering the above, I'm wondering whether we should just make it
> the default behavior that a migrate_cancel during postcopy-active does
> a pause instead of a real cancel.  After all, the source cannot
> restart the VM any more, so IMHO a real cancel does not mean much
> here.  More importantly, what if someone wants to manually trigger
> this pause but accidentally forgets to type that new flag (say,
> -D[isconnect])?  It'll crash the VM directly.
> 
> What do you think?

Yes, that's OK, just be careful about race conditions between the
states.  For example, what happens if you do a cancel and you enter
migrate_fd_cancel in postcopy-active, but before you can actually
cancel you end up completing?  Or the opposite, where you do a
migrate-start-postcopy almost immediately before migrate-cancel;
do you get to cancel in the active or postcopy-active state?


> 
> > 
> > 
> > > ---
> > >  migration/migration.c | 18 ++++++++++++++++++
> > >  qapi/migration.json   | 22 ++++++++++++++++++++++
> > >  2 files changed, 40 insertions(+)
> > > 
> > > diff --git a/migration/migration.c b/migration/migration.c
> > > index 536a771803..30348a5e27 100644
> > > --- a/migration/migration.c
> > > +++ b/migration/migration.c
> > > @@ -1485,6 +1485,24 @@ void qmp_migrate_incoming(const char *uri, Error **errp)
> > >      once = false;
> > >  }
> > >  
> > > +void qmp_migrate_pause(Error **errp)
> > > +{
> > > +    int ret;
> > > +    MigrationState *ms = migrate_get_current();
> > > +
> > > +    if (ms->state != MIGRATION_STATUS_POSTCOPY_ACTIVE) {
> > > +        error_setg(errp, "Migration pause is currently only allowed during"
> > > +                   " an active postcopy phase.");
> > > +        return;
> > > +    }
> > > +
> > > +    ret = qemu_file_shutdown(ms->to_dst_file);
> > > +
> > > +    if (ret) {
> > > +        error_setg(errp, "Failed to pause migration stream.");
> > > +    }
> > > +}
> > > +
> > >  bool migration_is_blocked(Error **errp)
> > >  {
> > >      if (qemu_savevm_state_blocked(errp)) {
> > > diff --git a/qapi/migration.json b/qapi/migration.json
> > > index 4a3eff62f1..52901f7e2e 100644
> > > --- a/qapi/migration.json
> > > +++ b/qapi/migration.json
> > > @@ -1074,6 +1074,28 @@
> > >  { 'command': 'migrate-incoming', 'data': {'uri': 'str' } }
> > >  
> > >  ##
> > > +# @migrate-pause:
> > > +#
> > > +# Pause a migration.  Currently it can only pause a postcopy
> > > +# migration.  Pausing a precopy migration is not supported yet.
> > > +#
> > > +# It is mostly used as a manual way to trigger the postcopy paused
> > > +# state when the network sockets hang due to some reason, so that we
> > > +# can try a recovery afterward.
> > 
> > Can we say this explicitly;
> > 'Force closes the migration connection to trigger the postcopy paused
> >  state when the network sockets hang due to some reason, so that we
> > can try a recovery afterwards'
> 
> Sure!  I'll just see where I should properly put these sentences.

Thanks.

Dave

> Thanks,
> 
> -- 
> Peter Xu
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v4 31/32] migration, qmp: new command "migrate-pause"
  2017-12-04 17:10       ` Dr. David Alan Gilbert
@ 2017-12-05  2:52         ` Peter Xu
  0 siblings, 0 replies; 53+ messages in thread
From: Peter Xu @ 2017-12-05  2:52 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: qemu-devel, Alexey Perevalov, Daniel P . Berrange, Juan Quintela,
	Andrea Arcangeli

On Mon, Dec 04, 2017 at 05:10:29PM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > On Fri, Dec 01, 2017 at 04:53:28PM +0000, Dr. David Alan Gilbert wrote:
> > > * Peter Xu (peterx@redhat.com) wrote:
> > > > It is used to manually trigger the postcopy pause state.  It works just
> > > > like when we find the migration stream has failed during postcopy, but
> > > > provides an explicit way for the user in case of mysterious socket hangs.
> > > > 
> > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > 
> > > Can we change the name to something like 'migrate-disconnect' - pause
> > > is a bit easy to confuse with other things and this is really more
> > > an explicit network disconnect (Is it worth just making it a flag to
> > > migrate-cancel?)
> > 
> > Then I would prefer to reuse the migrate_cancel command.  
> > 
> > Actually this reminded me of what would happen now if someone on the
> > src VM sends a "migrate_cancel" during postcopy-active.  It should
> > crash the VM, right?
> > 
> > Considering the above, I'm wondering whether we should just make it
> > the default behavior that a migrate_cancel during postcopy-active does
> > a pause instead of a real cancel.  After all, the source cannot
> > restart the VM any more, so IMHO a real cancel does not mean much
> > here.  More importantly, what if someone wants to manually trigger
> > this pause but accidentally forgets to type that new flag (say,
> > -D[isconnect])?  It'll crash the VM directly.
> > 
> > What do you think?
> 
> Yes, that's OK, just be careful about race conditions between the
> states,  for example what happens if you do a cancel and you enter
> migrate_fd_cancel in postcopy-active, but before you can actually
> cancel you end up completing,

If I am going to modify that, migrate_fd_cancel won't be called if we
are in postcopy-active state; instead we'll just do the disconnect.

For finally solving all the races between QMP commands and the
migration thread, I do think (again) that we need locks or some other
sync method.  I really hope we can have this fixed in QEMU 2.12.
Basically we will need to go over every migration command to see
whether it'll need to take the migration lock (to be added) or not.
With that, it'll save us a lot of future time IMHO when thinking
about races.

> or the opposite where you do a
> migrate-start-postcopy almost immediately before migrate-cancel;
> do you get to cancel in the active or postcopy-active state?

This is a good example of how migrate-start-postcopy is already
synchronized nicely with the migration thread using a single variable
(actually I think it can even be a non-atomic operation; anyway, there
is no race here as long as we deliver the message via a single
variable and the QMP command is the only writer).  For this command I
think it's pretty safe.  After all, the user should not run that
command too fast if he/she wants a paused postcopy; at least he/she
should run query-migrate before that to make sure the state is
postcopy-active.  So IMHO this is totally fine.

Thanks,

-- 
Peter Xu



Thread overview: 53+ messages
2017-11-08  6:00 [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Peter Xu
2017-11-08  6:00 ` [Qemu-devel] [PATCH v4 01/32] migration: better error handling with QEMUFile Peter Xu
2017-11-30 10:24   ` Dr. David Alan Gilbert
2017-12-01  8:39     ` Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 02/32] migration: reuse mis->userfault_quit_fd Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 03/32] migration: provide postcopy_fault_thread_notify() Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 04/32] migration: new postcopy-pause state Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 05/32] migration: implement "postcopy-pause" src logic Peter Xu
2017-11-30 10:49   ` Dr. David Alan Gilbert
2017-12-01  8:56     ` Peter Xu
2017-12-01 10:49       ` Dr. David Alan Gilbert
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 06/32] migration: allow dst vm pause on postcopy Peter Xu
2017-11-30 11:17   ` Dr. David Alan Gilbert
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 07/32] migration: allow src return path to pause Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 08/32] migration: allow send_rq to fail Peter Xu
2017-11-30 12:13   ` Dr. David Alan Gilbert
2017-12-01  9:30     ` Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 09/32] migration: allow fault thread to pause Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 10/32] qmp: hmp: add migrate "resume" option Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 11/32] migration: pass MigrationState to migrate_init() Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 12/32] migration: rebuild channel on source Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 13/32] migration: new state "postcopy-recover" Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 14/32] migration: wakeup dst ram-load-thread for recover Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 15/32] migration: new cmd MIG_CMD_RECV_BITMAP Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 16/32] migration: new message MIG_RP_MSG_RECV_BITMAP Peter Xu
2017-11-30 17:21   ` Dr. David Alan Gilbert
2017-12-01  9:37     ` Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 17/32] migration: new cmd MIG_CMD_POSTCOPY_RESUME Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 18/32] migration: new message MIG_RP_MSG_RESUME_ACK Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 19/32] migration: introduce SaveVMHandlers.resume_prepare Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 20/32] migration: synchronize dirty bitmap for resume Peter Xu
2017-11-30 18:40   ` Dr. David Alan Gilbert
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 21/32] migration: setup ramstate " Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 22/32] migration: final handshake for the resume Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 23/32] migration: free SocketAddress where allocated Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 24/32] migration: return incoming task tag for sockets Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 25/32] migration: return incoming task tag for exec Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 26/32] migration: return incoming task tag for fd Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 27/32] migration: store listen task tag Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 28/32] migration: allow migrate_incoming for paused VM Peter Xu
2017-12-01 17:21   ` Dr. David Alan Gilbert
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 29/32] migration: init dst in migration_object_init too Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 30/32] migration: delay the postcopy-active state switch Peter Xu
2017-12-01 12:34   ` Dr. David Alan Gilbert
2017-12-04  4:14     ` Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 31/32] migration, qmp: new command "migrate-pause" Peter Xu
2017-12-01 16:53   ` Dr. David Alan Gilbert
2017-12-04  4:48     ` Peter Xu
2017-12-04 17:10       ` Dr. David Alan Gilbert
2017-12-05  2:52         ` Peter Xu
2017-11-08  6:01 ` [Qemu-devel] [PATCH v4 32/32] migration, hmp: new command "migrate_pause" Peter Xu
2017-11-30 20:00 ` [Qemu-devel] [PATCH v4 00/32] Migration: postcopy failure recovery Dr. David Alan Gilbert
2017-12-01 10:23   ` Peter Xu
