* [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration
@ 2016-02-12 18:00 Vladimir Sementsov-Ogievskiy
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Hi all!

This series is derived from my 'Dirty bitmaps migration' series. The
core idea is to switch to postcopy migration and drop the use of meta
bitmaps.

These patches provide the dirty bitmap postcopy migration feature. Only
named dirty bitmaps are migrated. The migration can be enabled using
migration capabilities.

The overall method (thanks to John Snow):

1. migrate bitmap metadata in .save_live_setup
   - create/find the corresponding bitmaps on the target
   - disable them
   - create successors (anonymous children) only for enabled migrated
     bitmaps
2. do nothing in the precopy stage
3. just before target vm start: enable the successors created in (1)
4. migrate bitmap data
5. reclaim bitmaps (merge successors into their parents)
6. enable bitmaps (only those that were enabled on the source)


Some patches are unchanged from v7 of 'Dirty bitmaps migration'
(DBMv7). I've kept their Reviewed-by tags; if you object, tell me and
I'll drop them in the next version.

So, relative to the last DBMv7:

01-04: new patches, splitting common postcopy migration out of ram
       postcopy migration
   05: equal to DBMv7.05
   06: new
   07: equal to DBMv7.06
   08: new
   09: equal to DBMv7.07
   10: new
   11: derived from DBMv7.08, see below
12-15: equal to DBMv7.09-12
   16: derived from DBMv7.13
       - switch from fifo to socket, as postcopy doesn't work with fifo
         for now
       - change parameters: size, granularity, regions
       - add time.sleep to wait for the postcopy migration phase (a bad
         temporary solution)
       - drop Reviewed-by
   17: new

   11: the core patch of the series; it is derived from
       [DBMv7.08: migration: add migration_block-dirty-bitmap.c]
       There are a lot of changes related to switching from precopy to
       postcopy, but the functions related to the migration stream itself
       (structs, send/load sequences) are mostly unchanged.

       Changes made to switch from precopy to postcopy:
       - removed all the stuff related to meta bitmaps and the dirty
         phase!!!
       - add dirty_bitmap_mig_enable_successors, and call it before
         target vm start in loadvm_postcopy_handle_run
       - add an enabled_bitmaps list of bitmaps for
         dirty_bitmap_mig_enable_successors

       - the enabled flag is sent with the start bitmap chunk instead
         of the completion chunk
       - sectors_per_chunk is calculated directly from CHUNK_SIZE, not
         using meta bitmap granularity

       - dirty_bitmap_save_iterate: remove dirty_phase, move bulk_phase
         to postcopy stage
       - dirty_bitmap_save_pending: remove dirty phase related pending,
         switch pending to non-postcopyable
       - dirty_bitmap_load_start: get enabled flag and prepare
         successors for enabled bitmaps, also add them to
         enabled_bitmaps list
       - dirty_bitmap_load_complete: for enabled bitmaps: merge them
         with successors and enable

       - savevm handlers:
         * remove the separate savevm_dirty_bitmap_live_iterate_handlers
           state (it was a bad idea anyway), and move its
           save_live_iterate to savevm_dirty_bitmap_handlers
         * add an is_active_iterate savevm handler, which allows iterations
           only in the postcopy stage (after stopping the source vm)
         * add a has_postcopy savevm handler (of course, just returning true)
         * use save_live_complete_postcopy instead of
           save_live_complete_precopy

       Other changes:
       - some debug output changed
       - remove MIN_LIVE_SIZE, is_live_iterative and related stuff (it
         was needed to skip iterations when the bitmap data is small;
         possibly this should be reimplemented)

Vladimir Sementsov-Ogievskiy (17):
  migration: add has_postcopy savevm handler
  migration: fix ram_save_pending
  migration: split common postcopy out of ram postcopy
  migration: introduce postcopy-only pending
  block: add bdrv_next_dirty_bitmap()
  block: add bdrv_dirty_bitmap_enable_successor()
  qapi: add dirty-bitmaps migration capability
  migration: include migrate_dirty_bitmaps in migrate_postcopy
  migration/qemu-file: add qemu_put_counted_string()
  migration: add is_active_iterate handler
  migration: add postcopy migration of dirty bitmaps
  iotests: maintain several vms in test
  iotests: add add_incoming_migration to VM class
  qapi: add md5 checksum of last dirty bitmap level to query-block
  iotests: add default node-name
  iotests: add dirty bitmap migration test 117
  iotests: add dirty bitmap migration test 170

 block/dirty-bitmap.c           |  16 +
 include/block/dirty-bitmap.h   |   4 +
 include/migration/block.h      |   1 +
 include/migration/migration.h  |   5 +
 include/migration/qemu-file.h  |   2 +
 include/migration/vmstate.h    |   7 +-
 include/qemu/hbitmap.h         |   8 +
 include/sysemu/sysemu.h        |   5 +-
 migration/Makefile.objs        |   2 +-
 migration/block-dirty-bitmap.c | 646 +++++++++++++++++++++++++++++++++++++++++
 migration/block.c              |   7 +-
 migration/migration.c          |  63 ++--
 migration/postcopy-ram.c       |   4 +-
 migration/qemu-file.c          |  13 +
 migration/ram.c                |  19 +-
 migration/savevm.c             |  56 +++-
 qapi-schema.json               |   4 +-
 qapi/block-core.json           |   5 +-
 tests/qemu-iotests/117         |  82 ++++++
 tests/qemu-iotests/117.out     |   5 +
 tests/qemu-iotests/170         | 101 +++++++
 tests/qemu-iotests/170.out     |   5 +
 tests/qemu-iotests/group       |   2 +
 tests/qemu-iotests/iotests.py  |  17 +-
 trace-events                   |   2 +-
 util/hbitmap.c                 |   8 +
 vl.c                           |   1 +
 27 files changed, 1035 insertions(+), 55 deletions(-)
 create mode 100644 migration/block-dirty-bitmap.c
 create mode 100755 tests/qemu-iotests/117
 create mode 100644 tests/qemu-iotests/117.out
 create mode 100755 tests/qemu-iotests/170
 create mode 100644 tests/qemu-iotests/170.out

-- 
1.8.3.1


* [Qemu-devel] [PATCH 01/17] migration: add has_postcopy savevm handler
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Currently, postcopy-able states are recognized by a non-NULL
save_live_complete_postcopy handler. But this is not convenient when we
have several different postcopy-able states: RAM postcopy may be
disabled while some other postcopy is enabled, in which case the RAM
state should behave as if it is not postcopy-able.

This patch adds a separate has_postcopy handler to specify the
behaviour of a savevm state.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/migration/vmstate.h | 1 +
 migration/ram.c             | 6 ++++++
 migration/savevm.c          | 6 ++++--
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index a4b81bb..4605277 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -45,6 +45,7 @@ typedef struct SaveVMHandlers {
 
     /* This runs both outside and inside the iothread lock.  */
     bool (*is_active)(void *opaque);
+    bool (*has_postcopy)(void *opaque);
 
     /* This runs outside the iothread lock in the migration case, and
      * within the lock in the savevm case.  The callback had better only
diff --git a/migration/ram.c b/migration/ram.c
index e49749d..4dab756 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2528,10 +2528,16 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     return ret;
 }
 
+static bool ram_has_postcopy(void *opaque)
+{
+    return migrate_postcopy_ram();
+}
+
 static SaveVMHandlers savevm_ram_handlers = {
     .save_live_setup = ram_save_setup,
     .save_live_iterate = ram_save_iterate,
     .save_live_complete_postcopy = ram_save_complete,
+    .has_postcopy = ram_has_postcopy,
     .save_live_complete_precopy = ram_save_complete,
     .save_live_pending = ram_save_pending,
     .load_state = ram_load,
diff --git a/migration/savevm.c b/migration/savevm.c
index b9caf59..d059cce 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -952,7 +952,8 @@ int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy)
          * call that's already run, it might get confused if we call
          * iterate afterwards.
          */
-        if (postcopy && !se->ops->save_live_complete_postcopy) {
+        if (postcopy &&
+            !(se->ops->has_postcopy && se->ops->has_postcopy(se->opaque))) {
             continue;
         }
         if (qemu_file_rate_limit(f)) {
@@ -1040,7 +1041,8 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, bool iterable_only)
 
     QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
         if (!se->ops ||
-            (in_postcopy && se->ops->save_live_complete_postcopy) ||
+            (in_postcopy && se->ops->has_postcopy &&
+             se->ops->has_postcopy(se->opaque)) ||
             (in_postcopy && !iterable_only) ||
             !se->ops->save_live_complete_precopy) {
             continue;
-- 
1.8.3.1


* [Qemu-devel] [PATCH 02/17] migration: fix ram_save_pending
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Fill the postcopy-able pending only if RAM postcopy is enabled.
This is necessary because there will be other postcopy-able states, and
when RAM postcopy is disabled it should not spoil the common
postcopy-related pending.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 migration/ram.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 4dab756..b424e90 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2079,8 +2079,12 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
         remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
     }
 
-    /* We can do postcopy, and all the data is postcopiable */
-    *postcopiable_pending += remaining_size;
+    if (migrate_postcopy_ram()) {
+        /* We can do postcopy, and all the data is postcopiable */
+        *postcopiable_pending += remaining_size;
+    } else {
+        *non_postcopiable_pending += remaining_size;
+    }
 }
 
 static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
-- 
1.8.3.1


* [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Split the common postcopy code out of the RAM postcopy code.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/migration/migration.h |  1 +
 migration/migration.c         | 39 +++++++++++++++++++++++++++------------
 migration/postcopy-ram.c      |  4 +++-
 migration/savevm.c            | 31 ++++++++++++++++++++++---------
 4 files changed, 53 insertions(+), 22 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 292f397..3ec88ab 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -296,6 +296,7 @@ void migrate_add_blocker(Error *reason);
  */
 void migrate_del_blocker(Error *reason);
 
+bool migrate_postcopy(void);
 bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
 
diff --git a/migration/migration.c b/migration/migration.c
index aaca451..a4ed2a2 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1116,6 +1116,11 @@ bool migrate_postcopy_ram(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_X_POSTCOPY_RAM];
 }
 
+bool migrate_postcopy(void)
+{
+    return migrate_postcopy_ram();
+}
+
 bool migrate_auto_converge(void)
 {
     MigrationState *s;
@@ -1443,9 +1448,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * need to tell the destination to throw any pages it's already received
      * that are dirty
      */
-    if (ram_postcopy_send_discard_bitmap(ms)) {
-        error_report("postcopy send discard bitmap failed");
-        goto fail;
+    if (migrate_postcopy_ram()) {
+        if (ram_postcopy_send_discard_bitmap(ms)) {
+            error_report("postcopy send discard bitmap failed");
+            goto fail;
+        }
     }
 
     /*
@@ -1454,8 +1461,10 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * wrap their state up here
      */
     qemu_file_set_rate_limit(ms->file, INT64_MAX);
-    /* Ping just for debugging, helps line traces up */
-    qemu_savevm_send_ping(ms->file, 2);
+    if (migrate_postcopy_ram()) {
+        /* Ping just for debugging, helps line traces up */
+        qemu_savevm_send_ping(ms->file, 2);
+    }
 
     /*
      * While loading the device state we may trigger page transfer
@@ -1481,7 +1490,9 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
     qemu_savevm_send_postcopy_listen(fb);
 
     qemu_savevm_state_complete_precopy(fb, false);
-    qemu_savevm_send_ping(fb, 3);
+    if (migrate_postcopy_ram()) {
+        qemu_savevm_send_ping(fb, 3);
+    }
 
     qemu_savevm_send_postcopy_run(fb);
 
@@ -1497,11 +1508,13 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
 
     qemu_mutex_unlock_iothread();
 
-    /*
-     * Although this ping is just for debug, it could potentially be
-     * used for getting a better measurement of downtime at the source.
-     */
-    qemu_savevm_send_ping(ms->file, 4);
+    if (migrate_postcopy_ram()) {
+        /*
+         * Although this ping is just for debug, it could potentially be
+         * used for getting a better measurement of downtime at the source.
+         */
+        qemu_savevm_send_ping(ms->file, 4);
+    }
 
     ret = qemu_file_get_error(ms->file);
     if (ret) {
@@ -1625,7 +1638,9 @@ static void *migration_thread(void *opaque)
 
         /* And do a ping that will make stuff easier to debug */
         qemu_savevm_send_ping(s->file, 1);
+    }
 
+    if (migrate_postcopy()) {
         /*
          * Tell the destination that we *might* want to do postcopy later;
          * if the other end can't do postcopy it should fail now, nice and
@@ -1659,7 +1674,7 @@ static void *migration_thread(void *opaque)
             if (pending_size && pending_size >= max_size) {
                 /* Still a significant amount to transfer */
 
-                if (migrate_postcopy_ram() &&
+                if (migrate_postcopy() &&
                     s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
                     pend_nonpost <= max_size &&
                     atomic_read(&s->start_postcopy)) {
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 3946aa9..0fc6a2f 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -319,7 +319,9 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
     }
 
     postcopy_state_set(POSTCOPY_INCOMING_END);
-    migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    if (migrate_postcopy_ram()) {
+        migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    }
 
     if (mis->postcopy_tmp_page) {
         munmap(mis->postcopy_tmp_page, getpagesize());
diff --git a/migration/savevm.c b/migration/savevm.c
index d059cce..d47c11f 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -70,7 +70,7 @@ static struct mig_cmd_args {
     [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
     [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
     [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
-    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
+    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
     [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
     [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
     [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
@@ -796,12 +796,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const QEMUSizedBuffer *qsb)
 /* Send prior to any postcopy transfer */
 void qemu_savevm_send_postcopy_advise(QEMUFile *f)
 {
-    uint64_t tmp[2];
-    tmp[0] = cpu_to_be64(getpagesize());
-    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
+    if (migrate_postcopy_ram()) {
+        uint64_t tmp[2];
+        tmp[0] = cpu_to_be64(getpagesize());
+        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
 
-    trace_qemu_savevm_send_postcopy_advise();
-    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
+        trace_qemu_savevm_send_postcopy_advise();
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
+                                 16, (uint8_t *)tmp);
+    } else {
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
+    }
 }
 
 /* Sent prior to starting the destination running in postcopy, discard pages
@@ -1269,6 +1274,10 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
         return -1;
     }
 
+    if (!migrate_postcopy_ram()) {
+        return 0;
+    }
+
     if (!postcopy_ram_supported_by_host()) {
         return -1;
     }
@@ -1469,7 +1478,9 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
          * A rare case, we entered listen without having to do any discards,
          * so do the setup that's normally done at the time of the 1st discard.
          */
-        postcopy_ram_prepare_discard(mis);
+        if (migrate_postcopy_ram()) {
+            postcopy_ram_prepare_discard(mis);
+        }
     }
 
     /*
@@ -1477,8 +1488,10 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
      * However, at this point the CPU shouldn't be running, and the IO
      * shouldn't be doing anything yet so don't actually expect requests
      */
-    if (postcopy_ram_enable_notify(mis)) {
-        return -1;
+    if (migrate_postcopy_ram()) {
+        if (postcopy_ram_enable_notify(mis)) {
+            return -1;
+        }
     }
 
     if (mis->have_listen_thread) {
-- 
1.8.3.1


* [Qemu-devel] [PATCH 04/17] migration: introduce postcopy-only pending
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

There will be savevm states (dirty-bitmap) which can migrate only in
the postcopy stage. The corresponding pending counter is introduced here.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/migration/vmstate.h |  5 +++--
 include/sysemu/sysemu.h     |  5 +++--
 migration/block.c           |  7 ++++---
 migration/migration.c       | 15 ++++++++-------
 migration/ram.c             |  9 +++++----
 migration/savevm.c          | 12 +++++++-----
 trace-events                |  2 +-
 7 files changed, 31 insertions(+), 24 deletions(-)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index 4605277..94bc5fb 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -57,8 +57,9 @@ typedef struct SaveVMHandlers {
     /* This runs outside the iothread lock!  */
     int (*save_live_setup)(QEMUFile *f, void *opaque);
     void (*save_live_pending)(QEMUFile *f, void *opaque, uint64_t max_size,
-                              uint64_t *non_postcopiable_pending,
-                              uint64_t *postcopiable_pending);
+                              uint64_t *res_precopy_only,
+                              uint64_t *res_compatible,
+                              uint64_t *res_postcopy_only);
     LoadStateHandler *load_state;
 } SaveVMHandlers;
 
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index a1f5e3b..b02f2b4 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -133,8 +133,9 @@ void qemu_savevm_state_cleanup(void);
 void qemu_savevm_state_complete_postcopy(QEMUFile *f);
 void qemu_savevm_state_complete_precopy(QEMUFile *f, bool iterable_only);
 void qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size,
-                               uint64_t *res_non_postcopiable,
-                               uint64_t *res_postcopiable);
+                               uint64_t *res_precopy_only,
+                               uint64_t *res_compatible,
+                               uint64_t *res_postcopy_only);
 void qemu_savevm_command_send(QEMUFile *f, enum qemu_vm_cmd command,
                               uint16_t len, uint8_t *data);
 void qemu_savevm_send_ping(QEMUFile *f, uint32_t value);
diff --git a/migration/block.c b/migration/block.c
index 656f38f..84716a2 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -754,8 +754,9 @@ static int block_save_complete(QEMUFile *f, void *opaque)
 }
 
 static void block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
-                               uint64_t *non_postcopiable_pending,
-                               uint64_t *postcopiable_pending)
+                               uint64_t *res_precopy_only,
+                               uint64_t *res_compatible,
+                               uint64_t *res_postcopy_only)
 {
     /* Estimate pending number of bytes to send */
     uint64_t pending;
@@ -775,7 +776,7 @@ static void block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
 
     DPRINTF("Enter save live pending  %" PRIu64 "\n", pending);
     /* We don't do postcopy */
-    *non_postcopiable_pending += pending;
+    *res_precopy_only += pending;
 }
 
 static int block_load(QEMUFile *f, void *opaque, int version_id)
diff --git a/migration/migration.c b/migration/migration.c
index a4ed2a2..fb04780 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1664,20 +1664,21 @@ static void *migration_thread(void *opaque)
         uint64_t pending_size;
 
         if (!qemu_file_rate_limit(s->file)) {
-            uint64_t pend_post, pend_nonpost;
+            uint64_t pend_pre, pend_compat, pend_post;
 
-            qemu_savevm_state_pending(s->file, max_size, &pend_nonpost,
-                                      &pend_post);
-            pending_size = pend_nonpost + pend_post;
+            qemu_savevm_state_pending(s->file, max_size, &pend_pre,
+                                      &pend_compat, &pend_post);
+            pending_size = pend_pre + pend_compat + pend_post;
             trace_migrate_pending(pending_size, max_size,
-                                  pend_post, pend_nonpost);
+                                  pend_pre, pend_compat, pend_post);
             if (pending_size && pending_size >= max_size) {
                 /* Still a significant amount to transfer */
 
                 if (migrate_postcopy() &&
                     s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
-                    pend_nonpost <= max_size &&
-                    atomic_read(&s->start_postcopy)) {
+                    pend_pre <= max_size &&
+                    (atomic_read(&s->start_postcopy) ||
+                     (pend_pre + pend_compat <= max_size))) {
 
                     if (!postcopy_start(s, &old_vm_running)) {
                         current_active_state = MIGRATION_STATUS_POSTCOPY_ACTIVE;
diff --git a/migration/ram.c b/migration/ram.c
index b424e90..cbfc393 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2062,8 +2062,9 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
 }
 
 static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
-                             uint64_t *non_postcopiable_pending,
-                             uint64_t *postcopiable_pending)
+                             uint64_t *res_precopy_only,
+                             uint64_t *res_compatible,
+                             uint64_t *res_postcopy_only)
 {
     uint64_t remaining_size;
 
@@ -2081,9 +2082,9 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
 
     if (migrate_postcopy_ram()) {
         /* We can do postcopy, and all the data is postcopiable */
-        *postcopiable_pending += remaining_size;
+        *res_compatible += remaining_size;
     } else {
-        *non_postcopiable_pending += remaining_size;
+        *res_precopy_only += remaining_size;
     }
 }
 
diff --git a/migration/savevm.c b/migration/savevm.c
index d47c11f..0543edd 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1127,13 +1127,15 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, bool iterable_only)
  * for units that can't do postcopy.
  */
 void qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size,
-                               uint64_t *res_non_postcopiable,
-                               uint64_t *res_postcopiable)
+                               uint64_t *res_precopy_only,
+                               uint64_t *res_compatible,
+                               uint64_t *res_postcopy_only)
 {
     SaveStateEntry *se;
 
-    *res_non_postcopiable = 0;
-    *res_postcopiable = 0;
+    *res_precopy_only = 0;
+    *res_compatible = 0;
+    *res_postcopy_only = 0;
 
 
     QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
@@ -1146,7 +1148,7 @@ void qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size,
             }
         }
         se->ops->save_live_pending(f, se->opaque, max_size,
-                                   res_non_postcopiable, res_postcopiable);
+                res_precopy_only, res_compatible, res_postcopy_only);
     }
 }
 
diff --git a/trace-events b/trace-events
index c9ac144..1922848 100644
--- a/trace-events
+++ b/trace-events
@@ -1470,7 +1470,7 @@ migrate_fd_cleanup(void) ""
 migrate_fd_error(void) ""
 migrate_fd_cancel(void) ""
 migrate_handle_rp_req_pages(const char *rbname, size_t start, size_t len) "in %s at %zx len %zx"
-migrate_pending(uint64_t size, uint64_t max, uint64_t post, uint64_t nonpost) "pending size %" PRIu64 " max %" PRIu64 " (post=%" PRIu64 " nonpost=%" PRIu64 ")"
+migrate_pending(uint64_t size, uint64_t max, uint64_t pre, uint64_t compat, uint64_t post) "pending size %" PRIu64 " max %" PRIu64 " (pre = %" PRIu64 " compat=%" PRIu64 " post=%" PRIu64 ")"
 migrate_send_rp_message(int msg_type, uint16_t len) "%d: len %d"
 migration_completion_file_err(void) ""
 migration_completion_postcopy_end(void) ""
-- 
1.8.3.1


* [Qemu-devel] [PATCH 05/17] block: add bdrv_next_dirty_bitmap()
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Like bdrv_next(), bdrv_next_dirty_bitmap() is a function that provides
access to the private dirty bitmaps list.

Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/dirty-bitmap.c         | 10 ++++++++++
 include/block/dirty-bitmap.h |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index 919ce10..2656b8b 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -490,3 +490,13 @@ int64_t bdrv_get_dirty_count(BdrvDirtyBitmap *bitmap)
 {
     return hbitmap_count(bitmap->bitmap);
 }
+
+BdrvDirtyBitmap *bdrv_next_dirty_bitmap(BlockDriverState *bs,
+                                        BdrvDirtyBitmap *bitmap)
+{
+    if (bitmap == NULL) {
+        return QLIST_FIRST(&bs->dirty_bitmaps);
+    }
+
+    return QLIST_NEXT(bitmap, list);
+}
diff --git a/include/block/dirty-bitmap.h b/include/block/dirty-bitmap.h
index 6ba5bec..1ccab36 100644
--- a/include/block/dirty-bitmap.h
+++ b/include/block/dirty-bitmap.h
@@ -68,4 +68,7 @@ void bdrv_dirty_bitmap_deserialize_zeroes(BdrvDirtyBitmap *bitmap,
                                           bool finish);
 void bdrv_dirty_bitmap_deserialize_finish(BdrvDirtyBitmap *bitmap);
 
+BdrvDirtyBitmap *bdrv_next_dirty_bitmap(BlockDriverState *bs,
+                                        BdrvDirtyBitmap *bitmap);
+
 #endif
-- 
1.8.3.1


* [Qemu-devel] [PATCH 06/17] block: add bdrv_dirty_bitmap_enable_successor()
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Enabling a bitmap's successor is necessary so that the successors of
bitmaps being migrated can be enabled just before the target VM starts.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/dirty-bitmap.c         | 5 +++++
 include/block/dirty-bitmap.h | 1 +
 2 files changed, 6 insertions(+)

diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index 2656b8b..7d4cade 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -216,6 +216,11 @@ int bdrv_dirty_bitmap_create_successor(BlockDriverState *bs,
     return 0;
 }
 
+void bdrv_dirty_bitmap_enable_successor(BdrvDirtyBitmap *bitmap)
+{
+    bdrv_enable_dirty_bitmap(bitmap->successor);
+}
+
 /**
  * For a bitmap with a successor, yield our name to the successor,
  * delete the old bitmap, and return a handle to the new bitmap.
diff --git a/include/block/dirty-bitmap.h b/include/block/dirty-bitmap.h
index 1ccab36..2c32310 100644
--- a/include/block/dirty-bitmap.h
+++ b/include/block/dirty-bitmap.h
@@ -21,6 +21,7 @@ BdrvDirtyBitmap *bdrv_dirty_bitmap_abdicate(BlockDriverState *bs,
 BdrvDirtyBitmap *bdrv_reclaim_dirty_bitmap(BlockDriverState *bs,
                                            BdrvDirtyBitmap *bitmap,
                                            Error **errp);
+void bdrv_dirty_bitmap_enable_successor(BdrvDirtyBitmap *bitmap);
 BdrvDirtyBitmap *bdrv_find_dirty_bitmap(BlockDriverState *bs,
                                         const char *name);
 void bdrv_dirty_bitmap_make_anon(BdrvDirtyBitmap *bitmap);
-- 
1.8.3.1


* [Qemu-devel] [PATCH 07/17] qapi: add dirty-bitmaps migration capability
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (5 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 06/17] block: add bdrv_dirty_bitmap_enable_successor() Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 08/17] migration: include migrate_dirty_bitmaps in migrate_postcopy Vladimir Sementsov-Ogievskiy
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/migration/migration.h | 1 +
 migration/migration.c         | 9 +++++++++
 qapi-schema.json              | 4 +++-
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 3ec88ab..582a527 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -299,6 +299,7 @@ void migrate_del_blocker(Error *reason);
 bool migrate_postcopy(void);
 bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
+bool migrate_dirty_bitmaps(void);
 
 bool migrate_auto_converge(void);
 
diff --git a/migration/migration.c b/migration/migration.c
index fb04780..31993e9 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1175,6 +1175,15 @@ int migrate_decompress_threads(void)
     return s->parameters[MIGRATION_PARAMETER_DECOMPRESS_THREADS];
 }
 
+bool migrate_dirty_bitmaps(void)
+{
+    MigrationState *s;
+
+    s = migrate_get_current();
+
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_DIRTY_BITMAPS];
+}
+
 bool migrate_use_events(void)
 {
     MigrationState *s;
diff --git a/qapi-schema.json b/qapi-schema.json
index b3038b2..5aff3ac 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -544,11 +544,13 @@
 #          been migrated, pulling the remaining pages along as needed. NOTE: If
 #          the migration fails during postcopy the VM will fail.  (since 2.5)
 #
+# @dirty-bitmaps: If enabled, QEMU will migrate named dirty bitmaps. (since 2.6)
+#
 # Since: 1.2
 ##
 { 'enum': 'MigrationCapability',
   'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks',
-           'compress', 'events', 'x-postcopy-ram'] }
+           'compress', 'events', 'x-postcopy-ram', 'dirty-bitmaps'] }
 
 ##
 # @MigrationCapabilityStatus
-- 
1.8.3.1


* [Qemu-devel] [PATCH 08/17] migration: include migrate_dirty_bitmaps in migrate_postcopy
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (6 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 07/17] qapi: add dirty-bitmaps migration capability Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 09/17] migration/qemu-file: add qemu_put_counted_string() Vladimir Sementsov-Ogievskiy
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Enable postcopy if dirty bitmap migration is enabled.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 migration/migration.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/migration/migration.c b/migration/migration.c
index 31993e9..9ccbeee 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1118,7 +1118,7 @@ bool migrate_postcopy_ram(void)
 
 bool migrate_postcopy(void)
 {
-    return migrate_postcopy_ram();
+    return migrate_postcopy_ram() || migrate_dirty_bitmaps();
 }
 
 bool migrate_auto_converge(void)
-- 
1.8.3.1


* [Qemu-devel] [PATCH 09/17] migration/qemu-file: add qemu_put_counted_string()
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (7 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 08/17] migration: include migrate_dirty_bitmaps in migrate_postcopy Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 10/17] migration: add is_active_iterate handler Vladimir Sementsov-Ogievskiy
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Add the counterpart of qemu_get_counted_string():
qemu_put_counted_string() puts a one-byte length of the string (so the
string must not be longer than 255 characters) and then the string
itself, without the trailing zero byte.

Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/migration/qemu-file.h |  2 ++
 migration/qemu-file.c         | 13 +++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/migration/qemu-file.h b/include/migration/qemu-file.h
index b5d08d2..ea8aef8 100644
--- a/include/migration/qemu-file.h
+++ b/include/migration/qemu-file.h
@@ -329,4 +329,6 @@ static inline void qemu_get_sbe64s(QEMUFile *f, int64_t *pv)
 
 size_t qemu_get_counted_string(QEMUFile *f, char buf[256]);
 
+void qemu_put_counted_string(QEMUFile *f, const char *name);
+
 #endif
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 0bbd257..1e90810 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -662,6 +662,19 @@ size_t qemu_get_counted_string(QEMUFile *f, char buf[256])
 }
 
 /*
+ * Put a string with one preceding byte containing its length. The length of
+ * the string should be less than 256.
+ */
+void qemu_put_counted_string(QEMUFile *f, const char *name)
+{
+    size_t len = strlen(name);
+
+    assert(len < 256);
+    qemu_put_byte(f, len);
+    qemu_put_buffer(f, (const uint8_t *)name, len);
+}
+
+/*
  * Set the blocking state of the QEMUFile.
  * Note: On some transports the OS only keeps a single blocking state for
  *       both directions, and thus changing the blocking on the main
-- 
1.8.3.1


* [Qemu-devel] [PATCH 10/17] migration: add is_active_iterate handler
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (8 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 09/17] migration/qemu-file: add qemu_put_counted_string() Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 11/17] migration: add postcopy migration of dirty bitmaps Vladimir Sementsov-Ogievskiy
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Postcopy-only savevm states (dirty-bitmap) don't need live iteration, so
add a new savevm handler to disable iteration for them and stop
transferring empty sections.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/migration/vmstate.h | 1 +
 migration/savevm.c          | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index 94bc5fb..0f2d734 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -46,6 +46,7 @@ typedef struct SaveVMHandlers {
     /* This runs both outside and inside the iothread lock.  */
     bool (*is_active)(void *opaque);
     bool (*has_postcopy)(void *opaque);
+    bool (*is_active_iterate)(void *opaque);
 
     /* This runs outside the iothread lock in the migration case, and
      * within the lock in the savevm case.  The callback had better only
diff --git a/migration/savevm.c b/migration/savevm.c
index 0543edd..1cfb91c 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -951,6 +951,11 @@ int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy)
                 continue;
             }
         }
+        if (se->ops && se->ops->is_active_iterate) {
+            if (!se->ops->is_active_iterate(se->opaque)) {
+                continue;
+            }
+        }
         /*
          * In the postcopy phase, any device that doesn't know how to
          * do postcopy should have saved it's state in the _complete
-- 
1.8.3.1


* [Qemu-devel] [PATCH 11/17] migration: add postcopy migration of dirty bitmaps
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (9 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 10/17] migration: add is_active_iterate handler Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 12/17] iotests: maintain several vms in test Vladimir Sementsov-Ogievskiy
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Postcopy migration of dirty bitmaps. Only named dirty bitmaps associated
with root nodes and non-root named nodes are migrated.

If the destination QEMU already contains a dirty bitmap with the same name
as a migrated bitmap (for the same node), the migration proceeds if their
granularities match; otherwise an error is generated.

If the destination QEMU doesn't contain such a bitmap, it is created.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/migration/block.h      |   1 +
 include/migration/migration.h  |   3 +
 migration/Makefile.objs        |   2 +-
 migration/block-dirty-bitmap.c | 646 +++++++++++++++++++++++++++++++++++++++++
 migration/savevm.c             |   2 +
 vl.c                           |   1 +
 6 files changed, 654 insertions(+), 1 deletion(-)
 create mode 100644 migration/block-dirty-bitmap.c

diff --git a/include/migration/block.h b/include/migration/block.h
index ffa8ac0..566bb9f 100644
--- a/include/migration/block.h
+++ b/include/migration/block.h
@@ -14,6 +14,7 @@
 #ifndef BLOCK_MIGRATION_H
 #define BLOCK_MIGRATION_H
 
+void dirty_bitmap_mig_init(void);
 void blk_mig_init(void);
 int blk_mig_active(void);
 uint64_t blk_mig_bytes_transferred(void);
diff --git a/include/migration/migration.h b/include/migration/migration.h
index 582a527..a2def5f 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -362,4 +362,7 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
 PostcopyState postcopy_state_get(void);
 /* Set the state and return the old state */
 PostcopyState postcopy_state_set(PostcopyState new_state);
+
+void dirty_bitmap_mig_enable_successors(void);
+
 #endif
diff --git a/migration/Makefile.objs b/migration/Makefile.objs
index 0cac6d7..1432619 100644
--- a/migration/Makefile.objs
+++ b/migration/Makefile.objs
@@ -6,5 +6,5 @@ common-obj-y += xbzrle.o postcopy-ram.o
 common-obj-$(CONFIG_RDMA) += rdma.o
 common-obj-$(CONFIG_POSIX) += exec.o unix.o fd.o
 
-common-obj-y += block.o
+common-obj-y += block.o block-dirty-bitmap.o
 
diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
new file mode 100644
index 0000000..9f15a74
--- /dev/null
+++ b/migration/block-dirty-bitmap.c
@@ -0,0 +1,646 @@
+/*
+ * QEMU dirty bitmap migration
+ *
+ * Postcopy migration of dirty bitmaps. Only named dirty bitmaps associated
+ * with root nodes and non-root named nodes are migrated.
+ *
+ * If the destination QEMU already contains a dirty bitmap with the same name
+ * as a migrated bitmap (for the same node), the migration proceeds if their
+ * granularities match; otherwise an error is generated.
+ *
+ * If the destination QEMU doesn't contain such a bitmap, it is created.
+ *
+ * format of migration:
+ *
+ * # Header (shared for different chunk types)
+ * 1, 2 or 4 bytes: flags (see qemu_{get,put}_bitmap_flags)
+ * [ 1 byte: node name size ] \  flags & DEVICE_NAME
+ * [ n bytes: node name     ] /
+ * [ 1 byte: bitmap name size ] \  flags & BITMAP_NAME
+ * [ n bytes: bitmap name     ] /
+ *
+ * # Start of bitmap migration (flags & START)
+ * header
+ * be32: granularity
+ * 1 byte: bitmap enabled flag
+ *
+ * # Complete of bitmap migration (flags & COMPLETE)
+ * header
+ *
+ * # Data chunk of bitmap migration
+ * header
+ * be64: start sector
+ * be32: number of sectors
+ * [ be64: buffer size  ] \ ! (flags & ZEROES)
+ * [ n bytes: buffer    ] /
+ *
+ * The last chunk in stream should contain flags & EOS. The chunk may skip
+ * device and/or bitmap names, assuming them to be the same with the previous
+ * chunk.
+ *
+ *
+ * This file is derived from migration/block.c
+ *
+ * Author:
+ * Vladimir Sementsov-Ogievskiy <vsementsov@parallels.com>
+ *
+ * original copyright message:
+ * =====================================================================
+ * Copyright IBM, Corp. 2009
+ *
+ * Authors:
+ *  Liran Schour   <lirans@il.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ * =====================================================================
+ */
+
+#include "block/block.h"
+#include "block/block_int.h"
+#include "sysemu/block-backend.h"
+#include "qemu/main-loop.h"
+#include "qemu/error-report.h"
+#include "migration/block.h"
+#include "migration/migration.h"
+#include "qemu/hbitmap.h"
+#include "sysemu/sysemu.h"
+#include <assert.h>
+
+#define CHUNK_SIZE     (1 << 10)
+
+/* Flags occupy one, two or four bytes. The EXTRA_FLAGS bit in a chunk
+ * signals that more flag bytes follow. */
+#define DIRTY_BITMAP_MIG_FLAG_EOS           0x01
+#define DIRTY_BITMAP_MIG_FLAG_ZEROES        0x02
+#define DIRTY_BITMAP_MIG_FLAG_BITMAP_NAME   0x04
+#define DIRTY_BITMAP_MIG_FLAG_DEVICE_NAME   0x08
+#define DIRTY_BITMAP_MIG_FLAG_START         0x10
+#define DIRTY_BITMAP_MIG_FLAG_COMPLETE      0x20
+#define DIRTY_BITMAP_MIG_FLAG_BITS          0x40
+
+#define DIRTY_BITMAP_MIG_EXTRA_FLAGS        0x80
+#define DIRTY_BITMAP_MIG_FLAGS_SIZE_16      0x8000
+#define DIRTY_BITMAP_MIG_FLAGS_SIZE_32      0x8080
+
+#define DEBUG_DIRTY_BITMAP_MIGRATION 0
+
+#define DPRINTF(fmt, args...) \
+    do { \
+        if (DEBUG_DIRTY_BITMAP_MIGRATION) { \
+            printf("DMIG %s:%d ", __func__, __LINE__); \
+            printf(fmt, ##args); \
+        } \
+    } while (0)
+
+typedef struct DirtyBitmapMigBitmapState {
+    /* Written during setup phase. */
+    BlockDriverState *bs;
+    const char *node_name;
+    BdrvDirtyBitmap *bitmap;
+    uint64_t total_sectors;
+    uint64_t sectors_per_chunk;
+    QSIMPLEQ_ENTRY(DirtyBitmapMigBitmapState) entry;
+
+    /* For bulk phase. */
+    bool bulk_completed;
+    uint64_t cur_sector;
+} DirtyBitmapMigBitmapState;
+
+typedef struct DirtyBitmapMigState {
+    QSIMPLEQ_HEAD(dbms_list, DirtyBitmapMigBitmapState) dbms_list;
+
+    bool bulk_completed;
+
+    /* for send_bitmap_bits() */
+    BlockDriverState *prev_bs;
+    BdrvDirtyBitmap *prev_bitmap;
+} DirtyBitmapMigState;
+
+typedef struct DirtyBitmapLoadState {
+    uint32_t flags;
+    char node_name[256];
+    char bitmap_name[256];
+    BlockDriverState *bs;
+    BdrvDirtyBitmap *bitmap;
+} DirtyBitmapLoadState;
+
+static DirtyBitmapMigState dirty_bitmap_mig_state;
+
+static GSList *enabled_bitmaps;
+
+static uint32_t qemu_get_bitmap_flags(QEMUFile *f)
+{
+    uint8_t flags = qemu_get_byte(f);
+    if (flags & DIRTY_BITMAP_MIG_EXTRA_FLAGS) {
+        flags = flags << 8 | qemu_get_byte(f);
+        if (flags & DIRTY_BITMAP_MIG_EXTRA_FLAGS) {
+            flags = flags << 16 | qemu_get_be16(f);
+        }
+    }
+
+    return flags;
+}
+
+static void qemu_put_bitmap_flags(QEMUFile *f, uint32_t flags)
+{
+    if (!(flags & 0xffffff00)) {
+        qemu_put_byte(f, flags);
+        return;
+    }
+
+    if (!(flags & 0xffff0000)) {
+        qemu_put_be16(f, flags | DIRTY_BITMAP_MIG_FLAGS_SIZE_16);
+        return;
+    }
+
+    qemu_put_be32(f, flags | DIRTY_BITMAP_MIG_FLAGS_SIZE_32);
+}
+
+static void send_bitmap_header(QEMUFile *f, DirtyBitmapMigBitmapState *dbms,
+                               uint32_t additional_flags)
+{
+    BlockDriverState *bs = dbms->bs;
+    BdrvDirtyBitmap *bitmap = dbms->bitmap;
+    uint32_t flags = additional_flags;
+    DPRINTF("enter\n");
+
+    if (bs != dirty_bitmap_mig_state.prev_bs) {
+        dirty_bitmap_mig_state.prev_bs = bs;
+        flags |= DIRTY_BITMAP_MIG_FLAG_DEVICE_NAME;
+    }
+
+    if (bitmap != dirty_bitmap_mig_state.prev_bitmap) {
+        dirty_bitmap_mig_state.prev_bitmap = bitmap;
+        flags |= DIRTY_BITMAP_MIG_FLAG_BITMAP_NAME;
+    }
+
+    qemu_put_bitmap_flags(f, flags);
+
+    if (flags & DIRTY_BITMAP_MIG_FLAG_DEVICE_NAME) {
+        qemu_put_counted_string(f, dbms->node_name);
+    }
+
+    if (flags & DIRTY_BITMAP_MIG_FLAG_BITMAP_NAME) {
+        qemu_put_counted_string(f, bdrv_dirty_bitmap_name(bitmap));
+    }
+}
+
+static void send_bitmap_start(QEMUFile *f, DirtyBitmapMigBitmapState *dbms)
+{
+    send_bitmap_header(f, dbms, DIRTY_BITMAP_MIG_FLAG_START);
+    qemu_put_be32(f, bdrv_dirty_bitmap_granularity(dbms->bitmap));
+    qemu_put_byte(f, bdrv_dirty_bitmap_enabled(dbms->bitmap));
+}
+
+static void send_bitmap_complete(QEMUFile *f, DirtyBitmapMigBitmapState *dbms)
+{
+    send_bitmap_header(f, dbms, DIRTY_BITMAP_MIG_FLAG_COMPLETE);
+}
+
+static void send_bitmap_bits(QEMUFile *f, DirtyBitmapMigBitmapState *dbms,
+                             uint64_t start_sector, uint32_t nr_sectors)
+{
+    /* align for buffer_is_zero() */
+    uint64_t align = 4 * sizeof(long);
+    uint64_t unaligned_size =
+        bdrv_dirty_bitmap_serialization_size(dbms->bitmap,
+                                             start_sector, nr_sectors);
+    uint64_t buf_size = (unaligned_size + align - 1) & ~(align - 1);
+    uint8_t *buf = g_malloc0(buf_size);
+    uint32_t flags = DIRTY_BITMAP_MIG_FLAG_BITS;
+
+    bdrv_dirty_bitmap_serialize_part(dbms->bitmap, buf,
+                                     start_sector, nr_sectors);
+
+    if (buffer_is_zero(buf, buf_size)) {
+        g_free(buf);
+        buf = NULL;
+        flags |= DIRTY_BITMAP_MIG_FLAG_ZEROES;
+    }
+
+    DPRINTF("parameters:"
+            "\n   flags:        %x"
+            "\n   start_sector: %" PRIu64
+            "\n   nr_sectors:   %" PRIu32
+            "\n   data_size:    %" PRIu64 "\n",
+            flags, start_sector, nr_sectors, buf_size);
+
+    send_bitmap_header(f, dbms, flags);
+
+    qemu_put_be64(f, start_sector);
+    qemu_put_be32(f, nr_sectors);
+
+    /* If a chunk is all zeroes, flush here: network bandwidth is now a lot
+     * higher than storage bandwidth, so queueing zero chunks would slow the
+     * migration down. Also, skip writing the data itself when migrating
+     * only dirty bitmaps. */
+    if (flags & DIRTY_BITMAP_MIG_FLAG_ZEROES) {
+        qemu_fflush(f);
+        return;
+    }
+
+    qemu_put_be64(f, buf_size);
+    qemu_put_buffer(f, buf, buf_size);
+    g_free(buf);
+}
+
+
+/* Called with iothread lock taken.  */
+
+static void init_dirty_bitmap_migration(void)
+{
+    BlockDriverState *bs;
+    BdrvDirtyBitmap *bitmap;
+    DirtyBitmapMigBitmapState *dbms;
+    uint64_t total_bytes = 0;
+
+    dirty_bitmap_mig_state.bulk_completed = false;
+    dirty_bitmap_mig_state.prev_bs = NULL;
+    dirty_bitmap_mig_state.prev_bitmap = NULL;
+
+    for (bs = bdrv_next(NULL); bs; bs = bdrv_next(bs)) {
+        for (bitmap = bdrv_next_dirty_bitmap(bs, NULL); bitmap;
+             bitmap = bdrv_next_dirty_bitmap(bs, bitmap)) {
+            if (!bdrv_dirty_bitmap_name(bitmap)) {
+                continue;
+            }
+
+            if (!bdrv_get_node_name(bs) && blk_bs(bs->blk) != bs) {
+                /* not named non-root node */
+                continue;
+            }
+
+            dbms = g_new0(DirtyBitmapMigBitmapState, 1);
+            dbms->bs = bs;
+            dbms->node_name = bdrv_get_node_name(bs);
+            if (!dbms->node_name || dbms->node_name[0] == '\0') {
+                dbms->node_name = bdrv_get_device_name(bs);
+            }
+            dbms->bitmap = bitmap;
+            dbms->total_sectors = bdrv_nb_sectors(bs);
+            dbms->sectors_per_chunk = CHUNK_SIZE * 8 *
+                bdrv_dirty_bitmap_granularity(bitmap) >> BDRV_SECTOR_BITS;
+
+            total_bytes +=
+                bdrv_dirty_bitmap_serialization_size(bitmap,
+                                                     0, dbms->total_sectors);
+
+            QSIMPLEQ_INSERT_TAIL(&dirty_bitmap_mig_state.dbms_list,
+                                 dbms, entry);
+        }
+    }
+}
+
+/* Called with no lock taken.  */
+static void bulk_phase_send_chunk(QEMUFile *f, DirtyBitmapMigBitmapState *dbms)
+{
+    uint32_t nr_sectors = MIN(dbms->total_sectors - dbms->cur_sector,
+                             dbms->sectors_per_chunk);
+
+    send_bitmap_bits(f, dbms, dbms->cur_sector, nr_sectors);
+
+    dbms->cur_sector += nr_sectors;
+    if (dbms->cur_sector >= dbms->total_sectors) {
+        dbms->bulk_completed = true;
+    }
+}
+
+/* Called with no lock taken.  */
+static void bulk_phase(QEMUFile *f, bool limit)
+{
+    DirtyBitmapMigBitmapState *dbms;
+
+    QSIMPLEQ_FOREACH(dbms, &dirty_bitmap_mig_state.dbms_list, entry) {
+        while (!dbms->bulk_completed) {
+            bulk_phase_send_chunk(f, dbms);
+            if (limit && qemu_file_rate_limit(f)) {
+                return;
+            }
+        }
+    }
+
+    dirty_bitmap_mig_state.bulk_completed = true;
+}
+
+/* Called with iothread lock taken.  */
+static void dirty_bitmap_mig_cleanup(void)
+{
+    DirtyBitmapMigBitmapState *dbms;
+
+    while ((dbms = QSIMPLEQ_FIRST(&dirty_bitmap_mig_state.dbms_list)) != NULL) {
+        QSIMPLEQ_REMOVE_HEAD(&dirty_bitmap_mig_state.dbms_list, entry);
+        g_free(dbms);
+    }
+}
+
+/* for SaveVMHandlers */
+static void dirty_bitmap_migration_cleanup(void *opaque)
+{
+    dirty_bitmap_mig_cleanup();
+}
+
+static int dirty_bitmap_save_iterate(QEMUFile *f, void *opaque)
+{
+    DPRINTF("in postcopy: %s\n", migration_in_postcopy(migrate_get_current()) ?
+            "yes" : "no");
+
+    if (migration_in_postcopy(migrate_get_current()) &&
+        !dirty_bitmap_mig_state.bulk_completed) {
+        bulk_phase(f, true);
+    }
+
+    qemu_put_bitmap_flags(f, DIRTY_BITMAP_MIG_FLAG_EOS);
+
+    return dirty_bitmap_mig_state.bulk_completed;
+}
+
+/* Called with iothread lock taken.  */
+
+static int dirty_bitmap_save_complete(QEMUFile *f, void *opaque)
+{
+    DirtyBitmapMigBitmapState *dbms;
+    DPRINTF("enter\n");
+
+    if (!dirty_bitmap_mig_state.bulk_completed) {
+        bulk_phase(f, false);
+    }
+
+    QSIMPLEQ_FOREACH(dbms, &dirty_bitmap_mig_state.dbms_list, entry) {
+        send_bitmap_complete(f, dbms);
+    }
+
+    qemu_put_bitmap_flags(f, DIRTY_BITMAP_MIG_FLAG_EOS);
+
+    DPRINTF("Dirty bitmaps migration completed\n");
+
+    dirty_bitmap_mig_cleanup();
+    return 0;
+}
+
+static void dirty_bitmap_save_pending(QEMUFile *f, void *opaque,
+                                      uint64_t max_size,
+                                      uint64_t *res_precopy_only,
+                                      uint64_t *res_compatible,
+                                      uint64_t *res_postcopy_only)
+{
+    DirtyBitmapMigBitmapState *dbms;
+    uint64_t pending = 0;
+
+    qemu_mutex_lock_iothread();
+
+    QSIMPLEQ_FOREACH(dbms, &dirty_bitmap_mig_state.dbms_list, entry) {
+        uint64_t gran = bdrv_dirty_bitmap_granularity(dbms->bitmap);
+        uint64_t sectors = dbms->bulk_completed ? 0 :
+                           dbms->total_sectors - dbms->cur_sector;
+
+        pending += (sectors * BDRV_SECTOR_SIZE + gran - 1) / gran;
+    }
+
+    qemu_mutex_unlock_iothread();
+
+    DPRINTF("pending %" PRIu64 ", max: %" PRIu64 "\n",
+            pending, max_size);
+
+    *res_postcopy_only += pending;
+}
+
+/* First occurrence of this bitmap. It should be created if it doesn't exist */
+static int dirty_bitmap_load_start(QEMUFile *f, DirtyBitmapLoadState *s)
+{
+    Error *local_err = NULL;
+    uint32_t granularity = qemu_get_be32(f);
+    bool enabled = qemu_get_byte(f);
+
+    if (!s->bitmap) {
+        s->bitmap = bdrv_create_dirty_bitmap(s->bs, granularity,
+                                             s->bitmap_name, &local_err);
+        if (!s->bitmap) {
+            error_report("%s", error_get_pretty(local_err));
+            error_free(local_err);
+            return -EINVAL;
+        }
+    } else {
+        uint32_t dest_granularity =
+            bdrv_dirty_bitmap_granularity(s->bitmap);
+        if (dest_granularity != granularity) {
+            fprintf(stderr,
+                    "Error: "
+                    "Migrated bitmap granularity (%" PRIu32 ") "
+                    "doesn't match the destination bitmap '%s' "
+                    "granularity (%" PRIu32 ")\n",
+                    granularity,
+                    bdrv_dirty_bitmap_name(s->bitmap),
+                    dest_granularity);
+            return -EINVAL;
+        }
+    }
+
+    bdrv_disable_dirty_bitmap(s->bitmap);
+    if (enabled) {
+        bdrv_dirty_bitmap_create_successor(s->bs, s->bitmap, &local_err);
+        if (local_err) {
+            error_report("%s", error_get_pretty(local_err));
+            error_free(local_err);
+            return -EINVAL;
+        }
+
+        enabled_bitmaps = g_slist_prepend(enabled_bitmaps, s->bitmap);
+    }
+
+    return 0;
+}
+
+void dirty_bitmap_mig_enable_successors(void)
+{
+    GSList *item;
+    for (item = enabled_bitmaps; item; item = g_slist_next(item)) {
+        bdrv_dirty_bitmap_enable_successor(item->data);
+    }
+
+    g_slist_free(enabled_bitmaps);
+    enabled_bitmaps = NULL;
+}
+
+static void dirty_bitmap_load_complete(QEMUFile *f, DirtyBitmapLoadState *s)
+{
+    bdrv_dirty_bitmap_deserialize_finish(s->bitmap);
+
+    if (bdrv_dirty_bitmap_frozen(s->bitmap)) {
+        AioContext *aio_context = bdrv_get_aio_context(s->bs);
+        aio_context_acquire(aio_context);
+
+        bdrv_reclaim_dirty_bitmap(s->bs, s->bitmap, &error_abort);
+        bdrv_enable_dirty_bitmap(s->bitmap);
+
+        aio_context_release(aio_context);
+    }
+}
+
+static int dirty_bitmap_load_bits(QEMUFile *f, DirtyBitmapLoadState *s)
+{
+    uint64_t first_sector = qemu_get_be64(f);
+    uint32_t nr_sectors = qemu_get_be32(f);
+    DPRINTF("chunk: %" PRIu64 " %" PRIu32 "\n", first_sector, nr_sectors);
+
+
+    if (s->flags & DIRTY_BITMAP_MIG_FLAG_ZEROES) {
+        DPRINTF("   - zeroes\n");
+        bdrv_dirty_bitmap_deserialize_zeroes(s->bitmap, first_sector,
+                                             nr_sectors, false);
+    } else {
+        uint8_t *buf;
+        uint64_t buf_size = qemu_get_be64(f);
+        uint64_t needed_size =
+            bdrv_dirty_bitmap_serialization_size(s->bitmap,
+                                                 first_sector, nr_sectors);
+
+        if (needed_size > buf_size) {
+            fprintf(stderr,
+                    "Error: Migrated bitmap granularity doesn't "
+                    "match the destination bitmap '%s' granularity\n",
+                    bdrv_dirty_bitmap_name(s->bitmap));
+            return -EINVAL;
+        }
+
+        buf = g_malloc(buf_size);
+        qemu_get_buffer(f, buf, buf_size);
+        bdrv_dirty_bitmap_deserialize_part(s->bitmap, buf,
+                                           first_sector,
+                                           nr_sectors, false);
+        g_free(buf);
+    }
+
+    return 0;
+}
+
+static int dirty_bitmap_load_header(QEMUFile *f, DirtyBitmapLoadState *s)
+{
+    Error *local_err = NULL;
+    s->flags = qemu_get_bitmap_flags(f);
+    DPRINTF("flags: %x\n", s->flags);
+
+    if (s->flags & DIRTY_BITMAP_MIG_FLAG_DEVICE_NAME) {
+        if (!qemu_get_counted_string(f, s->node_name)) {
+            fprintf(stderr, "Unable to read node name string\n");
+            return -EINVAL;
+        }
+        s->bs = bdrv_lookup_bs(s->node_name, s->node_name, &local_err);
+        if (!s->bs) {
+            error_report("%s", error_get_pretty(local_err));
+            error_free(local_err);
+            return -EINVAL;
+        }
+    } else if (!s->bs) {
+        fprintf(stderr, "Error: block device name is not set\n");
+        return -EINVAL;
+    }
+
+    if (s->flags & DIRTY_BITMAP_MIG_FLAG_BITMAP_NAME) {
+        if (!qemu_get_counted_string(f, s->bitmap_name)) {
+            fprintf(stderr, "Unable to read bitmap name string\n");
+            return -EINVAL;
+        }
+        s->bitmap = bdrv_find_dirty_bitmap(s->bs, s->bitmap_name);
+
+        /* bitmap may be NULL here: it is not an error if this is the
+         * first occurrence of the bitmap */
+        if (!s->bitmap && !(s->flags & DIRTY_BITMAP_MIG_FLAG_START)) {
+            fprintf(stderr, "Error: unknown dirty bitmap "
+                    "'%s' for block device '%s'\n",
+                    s->bitmap_name, s->node_name);
+            return -EINVAL;
+        }
+    } else if (!s->bitmap) {
+        fprintf(stderr, "Error: dirty bitmap name is not set\n");
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int dirty_bitmap_load(QEMUFile *f, void *opaque, int version_id)
+{
+    static DirtyBitmapLoadState s;
+
+    int ret = 0;
+
+    DPRINTF("load start\n");
+
+    do {
+        ret = dirty_bitmap_load_header(f, &s);
+        if (ret) {
+            return ret;
+        }
+
+        if (s.flags & DIRTY_BITMAP_MIG_FLAG_START) {
+            ret = dirty_bitmap_load_start(f, &s);
+        } else if (s.flags & DIRTY_BITMAP_MIG_FLAG_COMPLETE) {
+            dirty_bitmap_load_complete(f, &s);
+        } else if (s.flags & DIRTY_BITMAP_MIG_FLAG_BITS) {
+            ret = dirty_bitmap_load_bits(f, &s);
+        }
+
+        DPRINTF("ret: %d\n", ret);
+        if (!ret) {
+            ret = qemu_file_get_error(f);
+        }
+
+        DPRINTF("ret: %d\n", ret);
+        if (ret) {
+            return ret;
+        }
+    } while (!(s.flags & DIRTY_BITMAP_MIG_FLAG_EOS));
+
+    DPRINTF("load finish\n");
+    return 0;
+}
+
+static int dirty_bitmap_save_setup(QEMUFile *f, void *opaque)
+{
+    DirtyBitmapMigBitmapState *dbms = NULL;
+    init_dirty_bitmap_migration();
+
+    QSIMPLEQ_FOREACH(dbms, &dirty_bitmap_mig_state.dbms_list, entry) {
+        send_bitmap_start(f, dbms);
+    }
+    qemu_put_bitmap_flags(f, DIRTY_BITMAP_MIG_FLAG_EOS);
+
+    return 0;
+}
+
+static bool dirty_bitmap_is_active(void *opaque)
+{
+    return migrate_dirty_bitmaps();
+}
+
+static bool dirty_bitmap_is_active_iterate(void *opaque)
+{
+    return dirty_bitmap_is_active(opaque) && !runstate_is_running();
+}
+
+static bool dirty_bitmap_has_postcopy(void *opaque)
+{
+    return true;
+}
+
+static SaveVMHandlers savevm_dirty_bitmap_handlers = {
+    .save_live_setup = dirty_bitmap_save_setup,
+    .save_live_complete_postcopy = dirty_bitmap_save_complete,
+    .has_postcopy = dirty_bitmap_has_postcopy,
+    .save_live_pending = dirty_bitmap_save_pending,
+    .save_live_iterate = dirty_bitmap_save_iterate,
+    .is_active_iterate = dirty_bitmap_is_active_iterate,
+    .load_state = dirty_bitmap_load,
+    .cleanup = dirty_bitmap_migration_cleanup,
+    .is_active = dirty_bitmap_is_active,
+};
+
+void dirty_bitmap_mig_init(void)
+{
+    QSIMPLEQ_INIT(&dirty_bitmap_mig_state.dbms_list);
+
+    register_savevm_live(NULL, "dirty-bitmap", 0, 1,
+                         &savevm_dirty_bitmap_handlers,
+                         &dirty_bitmap_mig_state);
+}
diff --git a/migration/savevm.c b/migration/savevm.c
index 1cfb91c..8664953 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1549,6 +1549,8 @@ static int loadvm_postcopy_handle_run(MigrationIncomingState *mis)
 
     trace_loadvm_postcopy_handle_run_vmstart();
 
+    dirty_bitmap_mig_enable_successors();
+
     if (autostart) {
         /* Hold onto your hats, starting the CPU */
         vm_start();
diff --git a/vl.c b/vl.c
index 9ee1c62..9b6bc00 100644
--- a/vl.c
+++ b/vl.c
@@ -4431,6 +4431,7 @@ int main(int argc, char **argv, char **envp)
 
     blk_mig_init();
     ram_mig_init();
+    dirty_bitmap_mig_init();
 
     /* If the currently selected machine wishes to override the units-per-bus
      * property of its default HBA interface type, do so now. */
-- 
1.8.3.1

* [Qemu-devel] [PATCH 12/17] iotests: maintain several vms in test
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (10 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 11/17] migration: add postcopy migration of dirty bitmaps Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 13/17] iotests: add add_incoming_migration to VM class Vladimir Sementsov-Ogievskiy
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

The only problem is that all VMs share the same QMP socket name (which is
vm._monitor_path), so a second VM could not be launched (vm.launch() fails
because the socket is already in use).
This patch adds a per-VM number to vm._monitor_path.
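
The naming scheme can be sketched as a small stand-alone helper (a
hypothetical illustration for this commit message; make_socket_paths and
its module-level counter are not the actual iotests code):

```python
import os

_nb_vms = 0  # module-level counter, incremented once per created VM

def make_socket_paths(test_dir):
    """Build per-VM socket/log paths. The pid alone is shared by every
    VM of one test process, so a per-VM counter is appended as well to
    keep the paths unique."""
    global _nb_vms
    suffix = '%d.%d' % (os.getpid(), _nb_vms)
    _nb_vms += 1
    return {
        'monitor': os.path.join(test_dir, 'qemu-mon.' + suffix),
        'log': os.path.join(test_dir, 'qemu-log.' + suffix),
        'qtest': os.path.join(test_dir, 'qemu-qtest.' + suffix),
    }
```

With this, two VMs created in the same process get distinct monitor
sockets and both can be launched.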

Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/iotests.py | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 37e6124..ab652ea 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -120,11 +120,12 @@ def event_match(event, match=None):
 
 class VM(object):
     '''A QEMU VM'''
+    nb_vms = 0
 
     def __init__(self):
-        self._monitor_path = os.path.join(test_dir, 'qemu-mon.%d' % os.getpid())
-        self._qemu_log_path = os.path.join(test_dir, 'qemu-log.%d' % os.getpid())
-        self._qtest_path = os.path.join(test_dir, 'qemu-qtest.%d' % os.getpid())
+        self._monitor_path = os.path.join(test_dir, 'qemu-mon.%d.%d' % (os.getpid(), VM.nb_vms))
+        self._qemu_log_path = os.path.join(test_dir, 'qemu-log.%d.%d' % (os.getpid(), VM.nb_vms))
+        self._qtest_path = os.path.join(test_dir, 'qemu-qtest.%d.%d' % (os.getpid(), VM.nb_vms))
         self._args = qemu_args + ['-chardev',
                      'socket,id=mon,path=' + self._monitor_path,
                      '-mon', 'chardev=mon,mode=control',
@@ -133,6 +134,7 @@ class VM(object):
                      '-display', 'none', '-vga', 'none']
         self._num_drives = 0
         self._events = []
+        VM.nb_vms += 1
 
     # This can be used to add an unused monitor instance.
     def add_monitor_telnet(self, ip, port):
-- 
1.8.3.1

* [Qemu-devel] [PATCH 13/17] iotests: add add_incoming_migration to VM class
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (11 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 12/17] iotests: maintain several vms in test Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 14/17] qapi: add md5 checksum of last dirty bitmap level to query-block Vladimir Sementsov-Ogievskiy
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/iotests.py | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index ab652ea..dc45dd9 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -165,6 +165,12 @@ class VM(object):
         self._num_drives += 1
         return self
 
+    def add_incoming_migration(self, desc):
+        '''Add an incoming migration to the VM'''
+        self._args.append('-incoming')
+        self._args.append(desc)
+        return self
+
     def pause_drive(self, drive, event=None):
         '''Pause drive r/w operations'''
         if not event:
-- 
1.8.3.1

* [Qemu-devel] [PATCH 14/17] qapi: add md5 checksum of last dirty bitmap level to query-block
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (12 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 13/17] iotests: add add_incoming_migration to VM class Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 15/17] iotests: add default node-name Vladimir Sementsov-Ogievskiy
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 block/dirty-bitmap.c   | 1 +
 include/qemu/hbitmap.h | 8 ++++++++
 qapi/block-core.json   | 5 ++++-
 util/hbitmap.c         | 8 ++++++++
 4 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index 7d4cade..c0e7e51 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -332,6 +332,7 @@ BlockDirtyInfoList *bdrv_query_dirty_bitmaps(BlockDriverState *bs)
         info->has_name = !!bm->name;
         info->name = g_strdup(bm->name);
         info->status = bdrv_dirty_bitmap_status(bm);
+        info->md5 = hbitmap_md5(bm->bitmap);
         entry->value = info;
         *plist = entry;
         plist = &entry->next;
diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
index 00dbb60..6a57902 100644
--- a/include/qemu/hbitmap.h
+++ b/include/qemu/hbitmap.h
@@ -227,6 +227,14 @@ void hbitmap_deserialize_zeroes(HBitmap *hb, uint64_t start, uint64_t count,
 void hbitmap_deserialize_finish(HBitmap *hb);
 
 /**
+ * hbitmap_md5:
+ * @bitmap: HBitmap to operate on.
+ *
+ * Returns md5 checksum of the last level.
+ */
+char *hbitmap_md5(const HBitmap *bitmap);
+
+/**
  * hbitmap_free:
  * @hb: HBitmap to operate on.
  *
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 0a915ed..9745687 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -360,11 +360,14 @@
 #
 # @status: current status of the dirty bitmap (since 2.4)
 #
+# @md5: md5 checksum (as a hexadecimal string) of the last bitmap level
+#       (since 2.6)
+#
 # Since: 1.3
 ##
 { 'struct': 'BlockDirtyInfo',
   'data': {'*name': 'str', 'count': 'int', 'granularity': 'uint32',
-           'status': 'DirtyBitmapStatus'} }
+           'status': 'DirtyBitmapStatus', 'md5': 'str'} }
 
 ##
 # @BlockInfo:
diff --git a/util/hbitmap.c b/util/hbitmap.c
index 1e49ab7..caf4af6 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -664,3 +664,11 @@ void hbitmap_free_meta(HBitmap *hb)
     hbitmap_free(hb->meta);
     hb->meta = NULL;
 }
+
+char *hbitmap_md5(const HBitmap *bitmap)
+{
+    uint64_t size =
+        MAX((bitmap->size + BITS_PER_LONG - 1) >> BITS_PER_LEVEL, 1);
+    const guchar *data = (const guchar *)bitmap->levels[HBITMAP_LEVELS - 1];
+    return g_compute_checksum_for_data(G_CHECKSUM_MD5, data, size);
+}
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Qemu-devel] [PATCH 15/17] iotests: add default node-name
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (13 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 14/17] qapi: add md5 checksum of last dirty bitmap level to query-block Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 16/17] iotests: add dirty bitmap migration test 117 Vladimir Sementsov-Ogievskiy
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

When testing migration, node-names auto-generated by qemu differ between
the source and destination instances, and migration fails. After this
patch, node-names auto-generated by iotests will be the same on both
sides.
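
For illustration, a sketch of the option list that add_drive() builds
after this change (a hypothetical free function mirroring the patch; the
node-name is derived from the drive index, so two QEMU instances created
the same way get identical node-names):

```python
def drive_options(num_drives, path, interface='virtio'):
    """Build the -drive option string for the num_drives-th drive with a
    deterministic node-name instead of a qemu-auto-generated one."""
    options = ['if=%s' % interface,
               'id=drive%d' % num_drives,
               'node-name=drivenode%d' % num_drives]
    if path is not None:
        options.append('file=%s' % path)
    return ','.join(options)
```

Source and destination then both see e.g. node-name=drivenode0 for the
first drive, and migration no longer fails on mismatched node-names.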

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/iotests.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index dc45dd9..cdd752e 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -150,7 +150,8 @@ class VM(object):
     def add_drive(self, path, opts='', interface='virtio'):
         '''Add a virtio-blk drive to the VM'''
         options = ['if=%s' % interface,
-                   'id=drive%d' % self._num_drives]
+                   'id=drive%d' % self._num_drives,
+                   'node-name=drivenode%d' % self._num_drives]
 
         if path is not None:
             options.append('file=%s' % path)
-- 
1.8.3.1

* [Qemu-devel] [PATCH 16/17] iotests: add dirty bitmap migration test 117
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (14 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 15/17] iotests: add default node-name Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 17/17] iotests: add dirty bitmap migration test 170 Vladimir Sementsov-Ogievskiy
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

The test starts two VMs (vm_a, vm_b), creates a dirty bitmap in
the first one, does several writes to the corresponding device, and
then migrates vm_a to vm_b with dirty bitmaps.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/117     | 82 ++++++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/117.out |  5 +++
 tests/qemu-iotests/group   |  1 +
 3 files changed, 88 insertions(+)
 create mode 100755 tests/qemu-iotests/117
 create mode 100644 tests/qemu-iotests/117.out

diff --git a/tests/qemu-iotests/117 b/tests/qemu-iotests/117
new file mode 100755
index 0000000..bcdc1f8
--- /dev/null
+++ b/tests/qemu-iotests/117
@@ -0,0 +1,82 @@
+#!/usr/bin/env python
+#
+# Tests for dirty bitmaps migration.
+#
+# (C) Vladimir Sementsov-Ogievskiy 2015
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import os
+import iotests
+import time
+from iotests import qemu_img
+
+disk_a = os.path.join(iotests.test_dir, 'disk_a')
+disk_b = os.path.join(iotests.test_dir, 'disk_b')
+
+size   = 0x400000000 # 16G
+sector_size = 512
+granularity = 512
+regions = [
+    { 'start': 0,          'count': 0x100000 },
+    { 'start': 0x100000000, 'count': 0x200000  },
+    { 'start': 0x399900000, 'count': 0x100000  }
+    ]
+
+class TestDirtyBitmapMigration(iotests.QMPTestCase):
+
+    def setUp(self):
+        qemu_img('create', '-f', iotests.imgfmt, disk_a, str(size))
+        qemu_img('create', '-f', iotests.imgfmt, disk_b, str(size))
+        self.vm_a = iotests.VM().add_drive(disk_a)
+        self.vm_b = iotests.VM().add_drive(disk_b)
+        self.vm_b.add_incoming_migration("tcp:0:4444")
+        self.vm_a.launch()
+        self.vm_b.launch()
+
+    def tearDown(self):
+        self.vm_a.shutdown()
+        self.vm_b.shutdown()
+        os.remove(disk_a)
+        os.remove(disk_b)
+
+    def test_migration(self):
+        result = self.vm_a.qmp('block-dirty-bitmap-add', node='drive0',
+                               name='bitmap', granularity=granularity)
+        self.assert_qmp(result, 'return', {})
+
+        for r in regions:
+            self.vm_a.hmp_qemu_io('drive0',
+                                  'write %d %d' % (r['start'], r['count']))
+
+        result = self.vm_a.qmp('query-block')
+        md5 = result['return'][0]['dirty-bitmaps'][0]['md5']
+
+        result = self.vm_a.qmp('migrate-set-capabilities',
+                               capabilities=[{'capability': 'dirty-bitmaps',
+                                              'state': True}])
+        self.assert_qmp(result, 'return', {})
+
+        result = self.vm_a.qmp('migrate', uri='tcp:0:4444')
+        self.assertNotEqual(self.vm_a.event_wait("STOP"), None)
+        self.assertNotEqual(self.vm_b.event_wait("RESUME"), None)
+        time.sleep(2)
+
+        result = self.vm_b.qmp('query-block')
+        self.assert_qmp(result, 'return[0]/dirty-bitmaps[0]/md5', md5)
+
+
+if __name__ == '__main__':
+    iotests.main()
diff --git a/tests/qemu-iotests/117.out b/tests/qemu-iotests/117.out
new file mode 100644
index 0000000..ae1213e
--- /dev/null
+++ b/tests/qemu-iotests/117.out
@@ -0,0 +1,5 @@
+.
+----------------------------------------------------------------------
+Ran 1 tests
+
+OK
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index d6e9219..b2d6ab6 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -122,6 +122,7 @@
 114 rw auto quick
 115 rw auto
 116 rw auto quick
+117 rw auto quick
 118 rw auto
 119 rw auto quick
 120 rw auto quick
-- 
1.8.3.1

* [Qemu-devel] [PATCH 17/17] iotests: add dirty bitmap migration test 170
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (15 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 16/17] iotests: add dirty bitmap migration test 117 Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:00 ` Vladimir Sementsov-Ogievskiy
  2016-02-12 18:10 ` [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Test:
- start two vms (vm_a, vm_b)

- in vm_a:
    - do writes from set A
    - do writes from set B
    - remember the bitmap md5
    - clear the bitmap
    - do writes from set A
    - start migration
- then, in vm_b:
    - wait for vm start (postcopy should start)
    - do writes from set B
    - check the bitmap md5

The test verifies postcopy migration and then merging with the delta
(changes made on the target during the postcopy process).
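
The two write sets interleave at a 64 KiB stride (set A at offsets that
are 0 mod 0x10000, set B at 0x8000 mod 0x10000). A minimal sketch of the
offset generation (write_offsets is a hypothetical helper, not part of
the test itself):

```python
def write_offsets(size, start, stride=0x10000):
    """Yield the offsets of one write set: every `stride` bytes
    beginning at `start` (set A uses start=0, set B uses start=0x8000)."""
    s = start
    while s < size:
        yield s
        s += stride

# For a small 256K region: the two sets never overlap, so writes from
# set B on the target produce dirty bits disjoint from set A's delta.
set_a = list(write_offsets(0x40000, 0))
set_b = list(write_offsets(0x40000, 0x8000))
```

Because both sets touch the whole disk at the same stride, replaying
set A on the source and set B on the target should reproduce the md5
remembered after the initial A+B pass.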

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 tests/qemu-iotests/170     | 101 +++++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/170.out |   5 +++
 tests/qemu-iotests/group   |   1 +
 3 files changed, 107 insertions(+)
 create mode 100755 tests/qemu-iotests/170
 create mode 100644 tests/qemu-iotests/170.out

diff --git a/tests/qemu-iotests/170 b/tests/qemu-iotests/170
new file mode 100755
index 0000000..896ebe2
--- /dev/null
+++ b/tests/qemu-iotests/170
@@ -0,0 +1,101 @@
+#!/usr/bin/env python
+#
+# Tests for dirty bitmaps migration.
+#
+# (C) Vladimir Sementsov-Ogievskiy 2015
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import os
+import iotests
+import time
+from iotests import qemu_img
+
+disk_a = os.path.join(iotests.test_dir, 'disk_a')
+disk_b = os.path.join(iotests.test_dir, 'disk_b')
+
+size   = 0x400000000 # 16G
+sector_size = 512
+granularity = 512
+regions = [
+    { 'start': 0,          'count': 0x100000 },
+    { 'start': 0x100000000, 'count': 0x200000  },
+    { 'start': 0x399900000, 'count': 0x100000  }
+    ]
+
+class TestDirtyBitmapMigration(iotests.QMPTestCase):
+
+    def setUp(self):
+        qemu_img('create', '-f', iotests.imgfmt, disk_a, str(size))
+        qemu_img('create', '-f', iotests.imgfmt, disk_b, str(size))
+        self.vm_a = iotests.VM().add_drive(disk_a)
+        self.vm_b = iotests.VM().add_drive(disk_b)
+        self.vm_b.add_incoming_migration("tcp:0:4444")
+        self.vm_a.launch()
+        self.vm_b.launch()
+
+    def tearDown(self):
+        self.vm_a.shutdown()
+        self.vm_b.shutdown()
+        os.remove(disk_a)
+        os.remove(disk_b)
+
+    def test_migration(self):
+        result = self.vm_a.qmp('block-dirty-bitmap-add', node='drive0',
+                               name='bitmap', granularity=granularity)
+        self.assert_qmp(result, 'return', {})
+
+        s = 0
+        while s < size:
+            self.vm_a.hmp_qemu_io('drive0', 'write %d %d' % (s, granularity))
+            s += 0x10000
+        s = 0x8000
+        while s < size:
+            self.vm_a.hmp_qemu_io('drive0', 'write %d %d' % (s, granularity))
+            s += 0x10000
+
+        result = self.vm_a.qmp('query-block')
+        md5 = result['return'][0]['dirty-bitmaps'][0]['md5']
+
+        result = self.vm_a.qmp('block-dirty-bitmap-clear', node='drive0',
+                               name='bitmap')
+        self.assert_qmp(result, 'return', {})
+        s = 0
+        while s < size:
+            self.vm_a.hmp_qemu_io('drive0', 'write %d %d' % (s, granularity))
+            s += 0x10000
+
+        result = self.vm_a.qmp('migrate-set-capabilities',
+                               capabilities=[{'capability': 'dirty-bitmaps',
+                                              'state': True}])
+        self.assert_qmp(result, 'return', {})
+
+        result = self.vm_a.qmp('migrate', uri='tcp:0:4444')
+        self.assertNotEqual(self.vm_a.event_wait("STOP"), None)
+        self.assertNotEqual(self.vm_b.event_wait("RESUME"), None)
+
+        s = 0x8000
+        while s < size:
+            self.vm_b.hmp_qemu_io('drive0', 'write %d %d' % (s, granularity))
+            s += 0x10000
+
+        time.sleep(5)
+
+        result = self.vm_b.qmp('query-block')
+        self.assert_qmp(result, 'return[0]/dirty-bitmaps[0]/md5', md5)
+
+
+if __name__ == '__main__':
+    iotests.main()
diff --git a/tests/qemu-iotests/170.out b/tests/qemu-iotests/170.out
new file mode 100644
index 0000000..ae1213e
--- /dev/null
+++ b/tests/qemu-iotests/170.out
@@ -0,0 +1,5 @@
+.
+----------------------------------------------------------------------
+Ran 1 tests
+
+OK
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index b2d6ab6..95854c5 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -143,3 +143,4 @@
 138 rw auto quick
 139 rw auto quick
 142 auto
+170 rw auto quick
-- 
1.8.3.1

* Re: [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (16 preceding siblings ...)
  2016-02-12 18:00 ` [Qemu-devel] [PATCH 17/17] iotests: add dirty bitmap migration test 170 Vladimir Sementsov-Ogievskiy
@ 2016-02-12 18:10 ` Vladimir Sementsov-Ogievskiy
  2016-02-29 15:32 ` [Qemu-devel] [PATCH RFC] docs: add dirty bitmap postcopy migration docs Vladimir Sementsov-Ogievskiy
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-12 18:10 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, famz, lirans, quintela, dgilbert, stefanha,
	pbonzini, amit.shah, den, jsnow

Oh, sorry, one important note:

These patches are based on Fam's series "[PATCH v2 00/13] Dirty bitmap
changes for migration/persistence work".


On 12.02.2016 21:00, Vladimir Sementsov-Ogievskiy wrote:
> Hi all!
>
> These series are derived from my 'Dirty bitmaps migration' series. The
> core idea is switch to postcopy migration and drop usage of meta
> bitmaps.
>
> These patches provide dirty bitmap postcopy migration feature. Only
> named dirty bitmaps are to be migrated. Migration may be enabled using
> migration capabilities.
>
> The overall method (thanks to John Snow):
>
> 1. migrate bitmaps meta data in .save_live_setup
>     - create/find related bitmaps on target
>     - disable them
> >     - create successors (anonymous children) only for enabled migrated
>       bitmaps
> 2. do nothing in precopy stage
> 3. just before target vm start: enable successors, created in (1)
> 4. migrate bitmap data
> > 5. reclaim bitmaps (merge successors into their parents)
> > 6. enable bitmaps (only those that were enabled in the source)
>
>
> > Some patches are unchanged from (v7) of 'Dirty bitmaps migration'
> > (DBMv7). I've left Reviewed-by's for them; if you don't like that, tell
> > me and I'll drop them in the following version.
>
> So, relatively to last DBMv7:
>
> 01-04: new patches, splitting common postcopy migration out of ram
>         postcopy migration
>     05: equal to DBMv7.05
>     06: new
>     07: equal to DBMv7.06
>     08: new
>     09: equal to DBMv7.07
>     10: new
>     11: derived from DBMv7.08, see below
> 12-15: equal to DBMv7.09-12
>     16: derived from DBMv7.13
> >         - switch from fifo to socket, as postcopy doesn't work with fifo
>           for now
>         - change parameters: size, granularity, regions
> >         - add time.sleep, to wait for the postcopy migration phase (a bad
> >           temporary solution)
>         - drop Reviewed-by
>     17: new
>
>     11: the core patch of the series, it is derived from
>         [DBMv7.08: migration: add migration_block-dirty-bitmap.c]
>         There are a lot of changes related to switching from precopy to
>         postcopy, but functions related to migration stream itself
> >         (structs, send/load sequences) are mostly unchanged.
>
>         So, changes, to switch from precopy to postcopy:
> >         - removed all stuff related to meta bitmaps and dirty phase!!!
>         - add dirty_bitmap_mig_enable_successors, and call it before
>           target vm start in loadvm_postcopy_handle_run
>         - add enabled_bitmaps list of bitmaps for
>           dirty_bitmap_mig_enable_successors
>
>         - enabled flag is send with start bitmap chunk instead of
>           completion chunk
>         - sectors_per_chunk is calculated directly from CHUNK_SIZE, not
>           using meta bitmap granularity
>
>         - dirty_bitmap_save_iterate: remove dirty_phase, move bulk_phase
>           to postcopy stage
>         - dirty_bitmap_save_pending: remove dirty phase related pending,
>           switch pending to non-postcopyable
>         - dirty_bitmap_load_start: get enabled flag and prepare
>           successors for enabled bitmaps, also add them to
>           enabled_bitmaps list
>         - dirty_bitmap_load_complete: for enabled bitmaps: merge them
>           with successors and enable
>
>         - savevm handlers:
>           * remove separate savevm_dirty_bitmap_live_iterate_handlers state
> >             (it was a bad idea anyway), and move its save_live_iterate to
>             savevm_dirty_bitmap_handlers
>           * add is_active_iterate savevm handler, which allows iterations
>             only in postcopy stage (after stopping source vm)
> >           * add has_postcopy savevm handler (of course, just returning true)
>           * use save_live_complete_postcopy instead of
>             save_live_complete_precopy
>
>         Other changes:
>         - some debug output changed
> >         - remove MIN_LIVE_SIZE, is_live_iterative and related stuff (it
> >           was needed to omit iterations if bitmap data is small; possibly
> >           this should be reimplemented)
>
> Vladimir Sementsov-Ogievskiy (17):
>    migration: add has_postcopy savevm handler
>    migration: fix ram_save_pending
>    migration: split common postcopy out of ram postcopy
>    migration: introduce postcopy-only pending
>    block: add bdrv_next_dirty_bitmap()
>    block: add bdrv_dirty_bitmap_enable_successor()
>    qapi: add dirty-bitmaps migration capability
>    migration: include migrate_dirty_bitmaps in migrate_postcopy
>    migration/qemu-file: add qemu_put_counted_string()
>    migration: add is_active_iterate handler
>    migration: add postcopy migration of dirty bitmaps
>    iotests: maintain several vms in test
>    iotests: add add_incoming_migration to VM class
>    qapi: add md5 checksum of last dirty bitmap level to query-block
>    iotests: add default node-name
>    iotests: add dirty bitmap migration test 117
>    iotests: add dirty bitmap migration test 170
>
>   block/dirty-bitmap.c           |  16 +
>   include/block/dirty-bitmap.h   |   4 +
>   include/migration/block.h      |   1 +
>   include/migration/migration.h  |   5 +
>   include/migration/qemu-file.h  |   2 +
>   include/migration/vmstate.h    |   7 +-
>   include/qemu/hbitmap.h         |   8 +
>   include/sysemu/sysemu.h        |   5 +-
>   migration/Makefile.objs        |   2 +-
>   migration/block-dirty-bitmap.c | 646 +++++++++++++++++++++++++++++++++++++++++
>   migration/block.c              |   7 +-
>   migration/migration.c          |  63 ++--
>   migration/postcopy-ram.c       |   4 +-
>   migration/qemu-file.c          |  13 +
>   migration/ram.c                |  19 +-
>   migration/savevm.c             |  56 +++-
>   qapi-schema.json               |   4 +-
>   qapi/block-core.json           |   5 +-
>   tests/qemu-iotests/117         |  82 ++++++
>   tests/qemu-iotests/117.out     |   5 +
>   tests/qemu-iotests/170         | 101 +++++++
>   tests/qemu-iotests/170.out     |   5 +
>   tests/qemu-iotests/group       |   2 +
>   tests/qemu-iotests/iotests.py  |  17 +-
>   trace-events                   |   2 +-
>   util/hbitmap.c                 |   8 +
>   vl.c                           |   1 +
>   27 files changed, 1035 insertions(+), 55 deletions(-)
>   create mode 100644 migration/block-dirty-bitmap.c
>   create mode 100755 tests/qemu-iotests/117
>   create mode 100644 tests/qemu-iotests/117.out
>   create mode 100755 tests/qemu-iotests/170
>   create mode 100644 tests/qemu-iotests/170.out
>


-- 
Best regards,
Vladimir

* [Qemu-devel] [PATCH RFC] docs: add dirty bitmap postcopy migration docs
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (17 preceding siblings ...)
  2016-02-12 18:10 ` [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
@ 2016-02-29 15:32 ` Vladimir Sementsov-Ogievskiy
  2016-03-14 11:37 ` [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
  2016-03-25 10:43 ` Vladimir Sementsov-Ogievskiy
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-02-29 15:32 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, vsementsov, famz, lirans, quintela,
	dgilbert, stefanha, pbonzini, amit.shah, den, jsnow

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---

  This is a documentation draft for the feature.

  There is nothing here about the bitmap migration protocol; it is
  documented in comments in migration/block-dirty-bitmap.c.

  The capability name differs from the other patches: here it is
  postcopy-bitmaps, while in the patches it is dirty-bitmaps. In the
  following series it will be fixed to postcopy-bitmaps or
  x-postcopy-bitmaps (more like RAM).


 docs/migration.txt | 90 +++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 72 insertions(+), 18 deletions(-)

diff --git a/docs/migration.txt b/docs/migration.txt
index fda8d61..1d94b32 100644
--- a/docs/migration.txt
+++ b/docs/migration.txt
@@ -298,7 +298,7 @@ In most migration scenarios there is only a single data path that runs
 from the source VM to the destination, typically along a single fd (although
 possibly with another fd or similar for some fast way of throwing pages across).
 
-However, some uses need two way communication; in particular the Postcopy
+However, some uses need two way communication; in particular the RAM postcopy
 destination needs to be able to request pages on demand from the source.
 
 For these scenarios there is a 'return path' from the destination to the source;
@@ -321,32 +321,62 @@ the amount of migration traffic and time it takes, the down side is that during
 the postcopy phase, a failure of *either* side or the network connection causes
 the guest to be lost.
 
-In postcopy the destination CPUs are started before all the memory has been
+== Sorts of state data ==
+The state data to be migrated may be divided into three groups:
+    1. Precopy only - data which _must_ be transferred before the destination
+       CPUs are started.
+    2. Compatible - data which may be transferred both in the precopy and
+       postcopy phases (RAM).
+    3. Postcopy only - data which _must_ be transferred after the destination
+       CPUs are started (dirty bitmaps).
+
+Note: in addition, any type of data may be transferred in the stopped state,
+when both the source and the destination are stopped.
+
+The postcopy phase starts after the destination CPUs are started (and after the
+stopped phase, of course), if the following conditions are met:
+    1. Some postcopy migration capabilities are turned on.
+    2. The current state data to be transferred is too large to be transferred
+       in the stopped state.
+    3. The current precopy-only data is small enough to be transferred in the
+       stopped state.
+    4. One of the following (or both):
+        4a. Postcopy is forced by migrate_start_postcopy
+        4b. The state data which _may_ be transferred as precopy
+            ( = precopy-only + compatible ) is small enough to be transferred
+            in the stopped state.
+
+== migrate_start_postcopy ==
+
+Issuing the 'migrate_start_postcopy' command during precopy migration causes
+the transition from precopy to postcopy. It can be issued immediately after
+migration is started, or at any time later on. Issuing it after the end of a
+migration is harmless. This command is not guaranteed to cause an immediate
+start of the destination and switch to postcopy (see above).
+
+Note: During the postcopy phase, the bandwidth limits set using
+migrate_set_speed are ignored.
+
+Most postcopy-related things are explained in the 'RAM Postcopy' section, as
+RAM postcopy was the first postcopy mechanism in QEMU and it dictated the
+overall architecture.
+
+== RAM Postcopy ==
+In RAM postcopy the destination CPUs are started before all the memory has been
 transferred, and accesses to pages that are yet to be transferred cause
 a fault that's translated by QEMU into a request to the source QEMU.
 
-Postcopy can be combined with precopy (i.e. normal migration) so that if precopy
-doesn't finish in a given time the switch is made to postcopy.
+RAM postcopy can be combined with precopy (i.e. normal migration) so that if
+precopy doesn't finish in a given time the switch is made to postcopy.
 
-=== Enabling postcopy ===
+=== Enabling RAM postcopy ===
 
-To enable postcopy, issue this command on the monitor prior to the
+To enable RAM postcopy, issue this command on the monitor prior to the
 start of migration:
 
 migrate_set_capability x-postcopy-ram on
 
-The normal commands are then used to start a migration, which is still
-started in precopy mode.  Issuing:
-
-migrate_start_postcopy
-
-will now cause the transition from precopy to postcopy.
-It can be issued immediately after migration is started or any
-time later on.  Issuing it after the end of a migration is harmless.
-
-Note: During the postcopy phase, the bandwidth limits set using
-migrate_set_speed is ignored (to avoid delaying requested pages that
-the destination is waiting for).
+Then, to switch to postcopy, the 'migrate_start_postcopy' command may be used.
 
 === Postcopy device transfer ===
 
@@ -482,3 +512,27 @@ request for a page that has already been sent is ignored.  Duplicate requests
 such as this can happen as a page is sent at about the same time the
 destination accesses it.
 
+
+== Block dirty bitmaps postcopy ==
+
+Postcopy is a good place to migrate dirty bitmaps, as they are not critical
+data: if postcopy fails, we just drop the bitmaps and do a full backup instead
+of the next incremental one, and nothing worse happens.
+
+The good thing here is that bitmap postcopy doesn't imply RAM postcopy, so if
+only the postcopy-bitmaps migration capability is on, RAM is migrated as usual
+in precopy. Also, block dirty bitmap migration doesn't use the return path,
+unlike RAM postcopy.
+
+Dirty bitmap migration state data is postcopy-only (see above), so it is
+migrated only in the stopped state or in the postcopy phase.
+
+Only named dirty bitmaps associated with root nodes and non-root named nodes
+are migrated. If the destination QEMU already contains a dirty bitmap with the
+same name as a migrated bitmap (for the same node), then the migration is
+performed if their granularities are the same; otherwise an error is
+generated. If the destination QEMU doesn't contain such a bitmap, it will be
+created.
+
+The migration protocol is specified (and implemented) in
+migration/block-dirty-bitmap.c.
-- 
1.8.3.1
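
The four conditions for starting the postcopy phase, listed in the
documentation above, can be sketched as a small decision function. This is a
simplified model, not QEMU code; all names (should_start_postcopy,
max_stopped_bytes, ...) are hypothetical:

```python
def should_start_postcopy(caps_on, precopy_only, compatible, postcopy_only,
                          max_stopped_bytes, forced):
    """Model of when migration may switch to the postcopy phase.

    max_stopped_bytes models 'small enough to be transferred in the
    stopped state'; forced models migrate_start_postcopy having been
    issued by the user.
    """
    total = precopy_only + compatible + postcopy_only
    if not caps_on:                       # 1. a postcopy capability is on
        return False
    if total <= max_stopped_bytes:        # 2. otherwise just finish in the
        return False                      #    stopped state, no postcopy
    if precopy_only > max_stopped_bytes:  # 3. precopy-only data must fit
        return False
    # 4. forced by migrate_start_postcopy, or the precopyable part
    #    ( = precopy-only + compatible ) already fits the stopped state
    return forced or (precopy_only + compatible) <= max_stopped_bytes
```

With only the bitmap capability on, postcopy_only is the bitmap data and RAM
counts as compatible data, so a large-RAM guest naturally keeps precopying
until either the user forces the switch or the precopyable remainder shrinks.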

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (18 preceding siblings ...)
  2016-02-29 15:32 ` [Qemu-devel] [PATCH RFC] docs: add dirty bitmap postcopy migration docs Vladimir Sementsov-Ogievskiy
@ 2016-03-14 11:37 ` Vladimir Sementsov-Ogievskiy
  2016-03-14 21:15   ` John Snow
  2016-03-25 10:43 ` Vladimir Sementsov-Ogievskiy
  20 siblings, 1 reply; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-03-14 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, famz, lirans, quintela, dgilbert, stefanha,
	pbonzini, amit.shah, den, jsnow

ping

On 12.02.2016 21:00, Vladimir Sementsov-Ogievskiy wrote:
> Hi all!
>
> This series is derived from my 'Dirty bitmaps migration' series. The
> core idea is to switch to postcopy migration and drop the usage of meta
> bitmaps.
>
> These patches provide the dirty bitmap postcopy migration feature. Only
> named dirty bitmaps are migrated. Migration may be enabled using
> migration capabilities.
>
> The overall method (thanks to John Snow):
>
> 1. migrate bitmap metadata in .save_live_setup
>     - create/find related bitmaps on the target
>     - disable them
>     - create successors (anonymous children) only for enabled migrated
>       bitmaps
> 2. do nothing in the precopy stage
> 3. just before target vm start: enable the successors created in (1)
> 4. migrate bitmap data
> 5. reclaim bitmaps (merge successors into their parents)
> 6. enable bitmaps (only those which were enabled in the source)
>
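
The six steps quoted above can be modeled with a toy sketch; DirtyBitmap and
migrate_bitmap are illustrative names only, not QEMU's real API:

```python
class DirtyBitmap:
    """Toy model of a block dirty bitmap with successor support."""
    def __init__(self, name, enabled=True):
        self.name = name
        self.enabled = enabled
        self.bits = set()          # dirty sector numbers
        self.successor = None      # anonymous child tracking new writes

    def mark(self, sector):
        # writes land in the successor while the parent is frozen
        if self.successor is not None and self.successor.enabled:
            self.successor.bits.add(sector)
        elif self.enabled:
            self.bits.add(sector)

def migrate_bitmap(src_bits, src_enabled, writes_during_postcopy):
    """Target side of the migration, following the six steps above."""
    # step 1: create the bitmap disabled, with an anonymous successor
    #         only if it was enabled on the source
    b = DirtyBitmap("bitmap0", enabled=False)
    if src_enabled:
        b.successor = DirtyBitmap(None, enabled=False)
    # step 3: just before the target VM starts, enable the successor
    if b.successor:
        b.successor.enabled = True
    # the guest runs and dirties sectors; the successor records them
    for s in writes_during_postcopy:
        b.mark(s)
    # step 4: bitmap data arrives from the source
    b.bits |= src_bits
    # step 5: reclaim - merge the successor back into its parent
    if b.successor:
        b.bits |= b.successor.bits
        b.successor = None
    # step 6: re-enable only if it was enabled on the source
    b.enabled = src_enabled
    return b
```

A bitmap that was enabled on the source thus ends up containing both the
migrated dirty bits and the sectors the target guest dirtied during postcopy.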
>
> Some patches are unchanged from (v7) of 'Dirty bitmaps migration'
> (DBMv7). I've left the Reviewed-by's for them; if you don't like that,
> tell me and I'll drop them in the following version.
>
> So, relative to the last DBMv7:
>
> 01-04: new patches, splitting common postcopy migration out of ram
>         postcopy migration
>     05: equal to DBMv7.05
>     06: new
>     07: equal to DBMv7.06
>     08: new
>     09: equal to DBMv7.07
>     10: new
>     11: derived from DBMv7.08, see below
> 12-15: equal to DBMv7.09-12
>     16: derived from DBMv7.13
>         - switch from fifo to socket, as postcopy doesn't work with fifo
>           for now
>         - change parameters: size, granularity, regions
>         - add time.sleep, to wait for the postcopy migration phase (a bad
>           temporary solution)
>         - drop Reviewed-by
>     17: new
>
>     11: the core patch of the series, it is derived from
>         [DBMv7.08: migration: add migration_block-dirty-bitmap.c]
>         There are a lot of changes related to switching from precopy to
>         postcopy, but functions related to the migration stream itself
>         (structs, send/load sequences) are mostly unchanged.
>
>         So, the changes to switch from precopy to postcopy:
>         - removed all stuff related to meta bitmaps and the dirty phase!!!
>         - add dirty_bitmap_mig_enable_successors, and call it before
>           target vm start in loadvm_postcopy_handle_run
>         - add enabled_bitmaps list of bitmaps for
>           dirty_bitmap_mig_enable_successors
>
>         - the enabled flag is sent with the start bitmap chunk instead of
>           the completion chunk
>         - sectors_per_chunk is calculated directly from CHUNK_SIZE, not
>           using meta bitmap granularity
>
>         - dirty_bitmap_save_iterate: remove dirty_phase, move bulk_phase
>           to postcopy stage
>         - dirty_bitmap_save_pending: remove dirty phase related pending,
>           switch pending to non-postcopyable
>         - dirty_bitmap_load_start: get enabled flag and prepare
>           successors for enabled bitmaps, also add them to
>           enabled_bitmaps list
>         - dirty_bitmap_load_complete: for enabled bitmaps: merge them
>           with successors and enable
>
>         - savevm handlers:
>           * remove separate savevm_dirty_bitmap_live_iterate_handlers state
>             (it was a bad idea anyway), and move its save_live_iterate to
>             savevm_dirty_bitmap_handlers
>           * add is_active_iterate savevm handler, which allows iterations
>             only in postcopy stage (after stopping source vm)
>           * add has_postcopy savevm handler. (of course, just returning true)
>           * use save_live_complete_postcopy instead of
>             save_live_complete_precopy
>
>         Other changes:
>         - some debug output changed
>         - remove MIN_LIVE_SIZE, is_live_iterative and related stuff (it
>           was needed to omit iterations if bitmap data is small; possibly
>           this should be reimplemented)
>
> Vladimir Sementsov-Ogievskiy (17):
>    migration: add has_postcopy savevm handler
>    migration: fix ram_save_pending
>    migration: split common postcopy out of ram postcopy
>    migration: introduce postcopy-only pending
>    block: add bdrv_next_dirty_bitmap()
>    block: add bdrv_dirty_bitmap_enable_successor()
>    qapi: add dirty-bitmaps migration capability
>    migration: include migrate_dirty_bitmaps in migrate_postcopy
>    migration/qemu-file: add qemu_put_counted_string()
>    migration: add is_active_iterate handler
>    migration: add postcopy migration of dirty bitmaps
>    iotests: maintain several vms in test
>    iotests: add add_incoming_migration to VM class
>    qapi: add md5 checksum of last dirty bitmap level to query-block
>    iotests: add default node-name
>    iotests: add dirty bitmap migration test 117
>    iotests: add dirty bitmap migration test 170
>
>   block/dirty-bitmap.c           |  16 +
>   include/block/dirty-bitmap.h   |   4 +
>   include/migration/block.h      |   1 +
>   include/migration/migration.h  |   5 +
>   include/migration/qemu-file.h  |   2 +
>   include/migration/vmstate.h    |   7 +-
>   include/qemu/hbitmap.h         |   8 +
>   include/sysemu/sysemu.h        |   5 +-
>   migration/Makefile.objs        |   2 +-
>   migration/block-dirty-bitmap.c | 646 +++++++++++++++++++++++++++++++++++++++++
>   migration/block.c              |   7 +-
>   migration/migration.c          |  63 ++--
>   migration/postcopy-ram.c       |   4 +-
>   migration/qemu-file.c          |  13 +
>   migration/ram.c                |  19 +-
>   migration/savevm.c             |  56 +++-
>   qapi-schema.json               |   4 +-
>   qapi/block-core.json           |   5 +-
>   tests/qemu-iotests/117         |  82 ++++++
>   tests/qemu-iotests/117.out     |   5 +
>   tests/qemu-iotests/170         | 101 +++++++
>   tests/qemu-iotests/170.out     |   5 +
>   tests/qemu-iotests/group       |   2 +
>   tests/qemu-iotests/iotests.py  |  17 +-
>   trace-events                   |   2 +-
>   util/hbitmap.c                 |   8 +
>   vl.c                           |   1 +
>   27 files changed, 1035 insertions(+), 55 deletions(-)
>   create mode 100644 migration/block-dirty-bitmap.c
>   create mode 100755 tests/qemu-iotests/117
>   create mode 100644 tests/qemu-iotests/117.out
>   create mode 100755 tests/qemu-iotests/170
>   create mode 100644 tests/qemu-iotests/170.out
>


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration
  2016-03-14 11:37 ` [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
@ 2016-03-14 21:15   ` John Snow
  0 siblings, 0 replies; 35+ messages in thread
From: John Snow @ 2016-03-14 21:15 UTC (permalink / raw)
  To: quintela
  Cc: kwolf, peter.maydell, Vladimir Sementsov-Ogievskiy, famz, lirans,
	qemu-devel, dgilbert, stefanha, pbonzini, amit.shah, den

Juan, have you had the chance to look at this yet?

--js

On 03/14/2016 07:37 AM, Vladimir Sementsov-Ogievskiy wrote:
> ping
> 
> On 12.02.2016 21:00, Vladimir Sementsov-Ogievskiy wrote:
>> [...]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration
  2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
                   ` (19 preceding siblings ...)
  2016-03-14 11:37 ` [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
@ 2016-03-25 10:43 ` Vladimir Sementsov-Ogievskiy
  20 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-03-25 10:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, famz, lirans, quintela, dgilbert, stefanha,
	pbonzini, amit.shah, den, jsnow

Hi there... ping. Hey, what do you think about all this?

On 12.02.2016 21:00, Vladimir Sementsov-Ogievskiy wrote:
> [...]


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2017-02-13  9:54 [Qemu-devel] [PATCH v6 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
@ 2017-02-13  9:54 ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-02-13  9:54 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, den, jsnow, vsementsov,
	lirans

Split common postcopy stuff from RAM postcopy stuff.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h |  1 +
 migration/migration.c         | 39 ++++++++++++++++++++++++-----------
 migration/postcopy-ram.c      |  4 +++-
 migration/savevm.c            | 48 +++++++++++++++++++++++++++++++++++--------
 4 files changed, 70 insertions(+), 22 deletions(-)
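
In this patch migrate_postcopy() simply forwards to migrate_postcopy_ram(); a
later patch in the series ('migration: include migrate_dirty_bitmaps in
migrate_postcopy') extends it to cover bitmaps too. The intended shape, as a
toy Python model (hypothetical names, not QEMU code):

```python
# Toy capability model: the generic migrate_postcopy() predicate becomes
# true if *any* postcopy-capable state type is enabled (RAM now, dirty
# bitmaps after the later patch in this series).
capabilities = {"postcopy-ram": False, "dirty-bitmaps": False}

def migrate_postcopy_ram():
    return capabilities["postcopy-ram"]

def migrate_dirty_bitmaps():
    return capabilities["dirty-bitmaps"]

def migrate_postcopy():
    # generic check used by the migration thread and postcopy_start(),
    # instead of the RAM-specific migrate_postcopy_ram()
    return migrate_postcopy_ram() or migrate_dirty_bitmaps()
```

This is why the RAM-specific steps in postcopy_start() (discard bitmap, debug
pings) stay guarded by migrate_postcopy_ram() while the thread itself tests
the generic predicate.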

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 7528cc2..35f7460 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -304,6 +304,7 @@ int migrate_add_blocker(Error *reason, Error **errp);
  */
 void migrate_del_blocker(Error *reason);
 
+bool migrate_postcopy(void);
 bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
 
diff --git a/migration/migration.c b/migration/migration.c
index 2b179c6..ae8dbfb 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1306,6 +1306,11 @@ bool migrate_postcopy_ram(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM];
 }
 
+bool migrate_postcopy(void)
+{
+    return migrate_postcopy_ram();
+}
+
 bool migrate_auto_converge(void)
 {
     MigrationState *s;
@@ -1635,9 +1640,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * need to tell the destination to throw any pages it's already received
      * that are dirty
      */
-    if (ram_postcopy_send_discard_bitmap(ms)) {
-        error_report("postcopy send discard bitmap failed");
-        goto fail;
+    if (migrate_postcopy_ram()) {
+        if (ram_postcopy_send_discard_bitmap(ms)) {
+            error_report("postcopy send discard bitmap failed");
+            goto fail;
+        }
     }
 
     /*
@@ -1646,8 +1653,10 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * wrap their state up here
      */
     qemu_file_set_rate_limit(ms->to_dst_file, INT64_MAX);
-    /* Ping just for debugging, helps line traces up */
-    qemu_savevm_send_ping(ms->to_dst_file, 2);
+    if (migrate_postcopy_ram()) {
+        /* Ping just for debugging, helps line traces up */
+        qemu_savevm_send_ping(ms->to_dst_file, 2);
+    }
 
     /*
      * While loading the device state we may trigger page transfer
@@ -1672,7 +1681,9 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
     qemu_savevm_send_postcopy_listen(fb);
 
     qemu_savevm_state_complete_precopy(fb, false);
-    qemu_savevm_send_ping(fb, 3);
+    if (migrate_postcopy_ram()) {
+        qemu_savevm_send_ping(fb, 3);
+    }
 
     qemu_savevm_send_postcopy_run(fb);
 
@@ -1707,11 +1718,13 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
 
     qemu_mutex_unlock_iothread();
 
-    /*
-     * Although this ping is just for debug, it could potentially be
-     * used for getting a better measurement of downtime at the source.
-     */
-    qemu_savevm_send_ping(ms->to_dst_file, 4);
+    if (migrate_postcopy_ram()) {
+        /*
+         * Although this ping is just for debug, it could potentially be
+         * used for getting a better measurement of downtime at the source.
+         */
+        qemu_savevm_send_ping(ms->to_dst_file, 4);
+    }
 
     ret = qemu_file_get_error(ms->to_dst_file);
     if (ret) {
@@ -1878,7 +1891,9 @@ static void *migration_thread(void *opaque)
 
         /* And do a ping that will make stuff easier to debug */
         qemu_savevm_send_ping(s->to_dst_file, 1);
+    }
 
+    if (migrate_postcopy()) {
         /*
          * Tell the destination that we *might* want to do postcopy later;
          * if the other end can't do postcopy it should fail now, nice and
@@ -1912,7 +1927,7 @@ static void *migration_thread(void *opaque)
             if (pending_size && pending_size >= max_size) {
                 /* Still a significant amount to transfer */
 
-                if (migrate_postcopy_ram() &&
+                if (migrate_postcopy() &&
                     s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
                     pend_nonpost <= max_size &&
                     atomic_read(&s->start_postcopy)) {
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index a40dddb..aef5690 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -339,7 +339,9 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
     }
 
     postcopy_state_set(POSTCOPY_INCOMING_END);
-    migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    if (migrate_postcopy_ram()) {
+        migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    }
 
     if (mis->postcopy_tmp_page) {
         munmap(mis->postcopy_tmp_page, getpagesize());
diff --git a/migration/savevm.c b/migration/savevm.c
index 8ebb27a..8fab701 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -73,7 +73,7 @@ static struct mig_cmd_args {
     [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
     [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
     [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
-    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
+    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
     [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
     [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
     [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
@@ -82,6 +82,23 @@ static struct mig_cmd_args {
     [MIG_CMD_MAX]              = { .len = -1, .name = "MAX" },
 };
 
+/* Note for MIG_CMD_POSTCOPY_ADVISE:
+ * The format of the arguments depends on the postcopy mode:
+ * - postcopy RAM only
+ *   uint64_t host page size
+ *   uint64_t target page size
+ *
+ * - postcopy RAM and postcopy dirty bitmaps
+ *   format is the same as for postcopy RAM only
+ *
+ * - postcopy dirty bitmaps only
+ *   Nothing. The command length field is 0.
+ *
+ * Be careful: adding a new postcopy entity with some other parameters should
+ * not break the format's self-description ability. A good way is to introduce
+ * a generic extensible format with an exception for the two old entities.
+ */
+
 static int announce_self_create(uint8_t *buf,
                                 uint8_t *mac_addr)
 {
@@ -868,12 +885,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
 /* Send prior to any postcopy transfer */
 void qemu_savevm_send_postcopy_advise(QEMUFile *f)
 {
-    uint64_t tmp[2];
-    tmp[0] = cpu_to_be64(getpagesize());
-    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
+    if (migrate_postcopy_ram()) {
+        uint64_t tmp[2];
+        tmp[0] = cpu_to_be64(getpagesize());
+        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
 
-    trace_qemu_savevm_send_postcopy_advise();
-    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
+        trace_qemu_savevm_send_postcopy_advise();
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
+                                 16, (uint8_t *)tmp);
+    } else {
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
+    }
 }
 
 /* Sent prior to starting the destination running in postcopy, discard pages
@@ -1356,6 +1378,10 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
         return -1;
     }
 
+    if (!migrate_postcopy_ram()) {
+        return 0;
+    }
+
     if (!postcopy_ram_supported_by_host()) {
         postcopy_state_set(POSTCOPY_INCOMING_NONE);
         return -1;
@@ -1557,7 +1583,9 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
          * A rare case, we entered listen without having to do any discards,
          * so do the setup that's normally done at the time of the 1st discard.
          */
-        postcopy_ram_prepare_discard(mis);
+        if (migrate_postcopy_ram()) {
+            postcopy_ram_prepare_discard(mis);
+        }
     }
 
     /*
@@ -1565,8 +1593,10 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
      * However, at this point the CPU shouldn't be running, and the IO
      * shouldn't be doing anything yet so don't actually expect requests
      */
-    if (postcopy_ram_enable_notify(mis)) {
-        return -1;
+    if (migrate_postcopy_ram()) {
+        if (postcopy_ram_enable_notify(mis)) {
+            return -1;
+        }
     }
 
     if (mis->have_listen_thread) {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 35+ messages in thread
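[Editor's note: the variable-length POSTCOPY_ADVISE handling in the patch above can be modeled outside QEMU. The sketch below is a simplified, hypothetical re-implementation of the length-based dispatch (the real code lives in qemu_savevm_send_postcopy_advise() and loadvm_postcopy_handle_advise()); the helper names, buffer layout, and return conventions are illustrative assumptions, not QEMU APIs.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical model of the MIG_CMD_POSTCOPY_ADVISE payload:
 * 16 bytes (two big-endian u64 page sizes) when postcopy RAM is
 * enabled, 0 bytes when only dirty-bitmap postcopy is in use. */

static uint64_t to_be64(uint64_t v)
{
    uint8_t b[8];
    for (int i = 0; i < 8; i++) {
        b[i] = (uint8_t)(v >> (56 - 8 * i));   /* most significant byte first */
    }
    uint64_t out;
    memcpy(&out, b, 8);
    return out;
}

static uint64_t from_be64(uint64_t v)
{
    uint8_t b[8];
    memcpy(b, &v, 8);
    uint64_t out = 0;
    for (int i = 0; i < 8; i++) {
        out = (out << 8) | b[i];               /* rebuild from big-endian bytes */
    }
    return out;
}

/* Source side: build the advise payload; returns its length. */
static size_t build_advise(uint8_t *buf, int postcopy_ram,
                           uint64_t host_page, uint64_t target_page)
{
    if (!postcopy_ram) {
        return 0;                              /* bitmap-only: empty payload */
    }
    uint64_t tmp[2] = { to_be64(host_page), to_be64(target_page) };
    memcpy(buf, tmp, 16);
    return 16;
}

/* Destination side: dispatch on the command length field. */
static int handle_advise(const uint8_t *buf, size_t len,
                         uint64_t *host_page, uint64_t *target_page)
{
    if (len == 0) {
        return 0;                              /* nothing to validate for bitmap-only */
    }
    if (len != 16) {
        return -1;                             /* malformed command */
    }
    uint64_t tmp[2];
    memcpy(tmp, buf, 16);
    *host_page = from_be64(tmp[0]);
    *target_page = from_be64(tmp[1]);
    return 0;
}
```

As a usage sketch: a bitmap-only source calls build_advise() with postcopy_ram == 0 and sends a zero-length command, while a RAM-postcopy source sends the two page sizes; the destination distinguishes the cases purely by the length field, which is exactly why the command's fixed `.len = 16` had to become `.len = -1` in the table above.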

* [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2017-02-07 15:05 [Qemu-devel] [PATCH v5 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
@ 2017-02-07 15:05 ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-02-07 15:05 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, den, jsnow, vsementsov,
	lirans

Split common postcopy code from ram postcopy code.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h |  1 +
 migration/migration.c         | 39 ++++++++++++++++++++++++-----------
 migration/postcopy-ram.c      |  4 +++-
 migration/savevm.c            | 48 +++++++++++++++++++++++++++++++++++--------
 4 files changed, 70 insertions(+), 22 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index af9135f..3aa228c 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -305,6 +305,7 @@ int migrate_add_blocker(Error *reason, Error **errp);
  */
 void migrate_del_blocker(Error *reason);
 
+bool migrate_postcopy(void);
 bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
 
diff --git a/migration/migration.c b/migration/migration.c
index 2766d2f..0a5fd38 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1310,6 +1310,11 @@ bool migrate_postcopy_ram(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM];
 }
 
+bool migrate_postcopy(void)
+{
+    return migrate_postcopy_ram();
+}
+
 bool migrate_auto_converge(void)
 {
     MigrationState *s;
@@ -1637,9 +1642,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * need to tell the destination to throw any pages it's already received
      * that are dirty
      */
-    if (ram_postcopy_send_discard_bitmap(ms)) {
-        error_report("postcopy send discard bitmap failed");
-        goto fail;
+    if (migrate_postcopy_ram()) {
+        if (ram_postcopy_send_discard_bitmap(ms)) {
+            error_report("postcopy send discard bitmap failed");
+            goto fail;
+        }
     }
 
     /*
@@ -1648,8 +1655,10 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * wrap their state up here
      */
     qemu_file_set_rate_limit(ms->to_dst_file, INT64_MAX);
-    /* Ping just for debugging, helps line traces up */
-    qemu_savevm_send_ping(ms->to_dst_file, 2);
+    if (migrate_postcopy_ram()) {
+        /* Ping just for debugging, helps line traces up */
+        qemu_savevm_send_ping(ms->to_dst_file, 2);
+    }
 
     /*
      * While loading the device state we may trigger page transfer
@@ -1674,7 +1683,9 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
     qemu_savevm_send_postcopy_listen(fb);
 
     qemu_savevm_state_complete_precopy(fb, false);
-    qemu_savevm_send_ping(fb, 3);
+    if (migrate_postcopy_ram()) {
+        qemu_savevm_send_ping(fb, 3);
+    }
 
     qemu_savevm_send_postcopy_run(fb);
 
@@ -1697,11 +1708,13 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
 
     qemu_mutex_unlock_iothread();
 
-    /*
-     * Although this ping is just for debug, it could potentially be
-     * used for getting a better measurement of downtime at the source.
-     */
-    qemu_savevm_send_ping(ms->to_dst_file, 4);
+    if (migrate_postcopy_ram()) {
+        /*
+         * Although this ping is just for debug, it could potentially be
+         * used for getting a better measurement of downtime at the source.
+         */
+        qemu_savevm_send_ping(ms->to_dst_file, 4);
+    }
 
     ret = qemu_file_get_error(ms->to_dst_file);
     if (ret) {
@@ -1857,7 +1870,9 @@ static void *migration_thread(void *opaque)
 
         /* And do a ping that will make stuff easier to debug */
         qemu_savevm_send_ping(s->to_dst_file, 1);
+    }
 
+    if (migrate_postcopy()) {
         /*
          * Tell the destination that we *might* want to do postcopy later;
          * if the other end can't do postcopy it should fail now, nice and
@@ -1891,7 +1906,7 @@ static void *migration_thread(void *opaque)
             if (pending_size && pending_size >= max_size) {
                 /* Still a significant amount to transfer */
 
-                if (migrate_postcopy_ram() &&
+                if (migrate_postcopy() &&
                     s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
                     pend_nonpost <= max_size &&
                     atomic_read(&s->start_postcopy)) {
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index a40dddb..aef5690 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -339,7 +339,9 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
     }
 
     postcopy_state_set(POSTCOPY_INCOMING_END);
-    migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    if (migrate_postcopy_ram()) {
+        migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    }
 
     if (mis->postcopy_tmp_page) {
         munmap(mis->postcopy_tmp_page, getpagesize());
diff --git a/migration/savevm.c b/migration/savevm.c
index 5235833..965a58c 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -73,7 +73,7 @@ static struct mig_cmd_args {
     [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
     [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
     [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
-    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
+    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
     [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
     [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
     [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
@@ -82,6 +82,23 @@ static struct mig_cmd_args {
     [MIG_CMD_MAX]              = { .len = -1, .name = "MAX" },
 };
 
+/* Note for MIG_CMD_POSTCOPY_ADVISE:
+ * The format of the arguments depends on the postcopy mode:
+ * - postcopy RAM only
+ *   uint64_t host page size
+ *   uint64_t target page size
+ *
+ * - postcopy RAM and postcopy dirty bitmaps
+ *   format is the same as for postcopy RAM only
+ *
+ * - postcopy dirty bitmaps only
+ *   Nothing. The command length field is 0.
+ *
+ * Be careful: adding a new postcopy entity with some other parameters should
+ * not break the format's self-description ability. A good way is to introduce
+ * a generic extensible format with an exception for the two old entities.
+ */
+
 static int announce_self_create(uint8_t *buf,
                                 uint8_t *mac_addr)
 {
@@ -856,12 +873,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
 /* Send prior to any postcopy transfer */
 void qemu_savevm_send_postcopy_advise(QEMUFile *f)
 {
-    uint64_t tmp[2];
-    tmp[0] = cpu_to_be64(getpagesize());
-    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
+    if (migrate_postcopy_ram()) {
+        uint64_t tmp[2];
+        tmp[0] = cpu_to_be64(getpagesize());
+        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
 
-    trace_qemu_savevm_send_postcopy_advise();
-    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
+        trace_qemu_savevm_send_postcopy_advise();
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
+                                 16, (uint8_t *)tmp);
+    } else {
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
+    }
 }
 
 /* Sent prior to starting the destination running in postcopy, discard pages
@@ -1344,6 +1366,10 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
         return -1;
     }
 
+    if (!migrate_postcopy_ram()) {
+        return 0;
+    }
+
     if (!postcopy_ram_supported_by_host()) {
         return -1;
     }
@@ -1544,7 +1570,9 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
          * A rare case, we entered listen without having to do any discards,
          * so do the setup that's normally done at the time of the 1st discard.
          */
-        postcopy_ram_prepare_discard(mis);
+        if (migrate_postcopy_ram()) {
+            postcopy_ram_prepare_discard(mis);
+        }
     }
 
     /*
@@ -1552,8 +1580,10 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
      * However, at this point the CPU shouldn't be running, and the IO
      * shouldn't be doing anything yet so don't actually expect requests
      */
-    if (postcopy_ram_enable_notify(mis)) {
-        return -1;
+    if (migrate_postcopy_ram()) {
+        if (postcopy_ram_enable_notify(mis)) {
+            return -1;
+        }
     }
 
     if (mis->have_listen_thread) {
-- 
1.8.3.1


* Re: [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2017-02-03 11:13           ` Vladimir Sementsov-Ogievskiy
@ 2017-02-06 11:30             ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 35+ messages in thread
From: Dr. David Alan Gilbert @ 2017-02-06 11:30 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: quintela, qemu-block, qemu-devel, pbonzini, armbru, eblake, famz,
	stefanha, amit.shah, mreitz, kwolf, peter.maydell, den, jsnow,
	lirans

* Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
> 01.02.2017 14:06, Vladimir Sementsov-Ogievskiy wrote:
> > 24.01.2017 22:53, Dr. David Alan Gilbert wrote:
> > > * Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
> > > > 24.01.2017 12:24, Juan Quintela wrote:
> > > > > Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:
> > > > > > Split common postcopy staff from ram postcopy staff.
> > > > > > 
> > > > > > Signed-off-by: Vladimir Sementsov-Ogievskiy
> > > > > > <vsementsov@virtuozzo.com>
> > > > > > ---
> > > > > >    include/migration/migration.h |  1 +
> > > > > >    migration/migration.c         | 39
> > > > > > +++++++++++++++++++++++++++------------
> > > > > >    migration/postcopy-ram.c      |  4 +++-
> > > > > >    migration/savevm.c            | 31 ++++++++++++++++++++++---------
> > > > > >    4 files changed, 53 insertions(+), 22 deletions(-)
> > > > > > 
> > > > > Hi
> > > > > 
> > > > > {
> > > > > >        MigrationState *s;
> > > > > > @@ -1587,9 +1592,11 @@ static int
> > > > > > postcopy_start(MigrationState *ms, bool *old_vm_running)
> > > > > >         * need to tell the destination to throw any
> > > > > > pages it's already received
> > > > > >         * that are dirty
> > > > > >         */
> > > > > > -    if (ram_postcopy_send_discard_bitmap(ms)) {
> > > > > > -        error_report("postcopy send discard bitmap failed");
> > > > > > -        goto fail;
> > > > > > +    if (migrate_postcopy_ram()) {
> > > > > > +        if (ram_postcopy_send_discard_bitmap(ms)) {
> > > > > > +            error_report("postcopy send discard bitmap failed");
> > > > > > +            goto fail;
> > > > > > +        }
> > > > > I will have preffered that for the ram commands, to embed the
> > > > > migrate_postocpy_ram() check inside them, but that is taste, and
> > > > > everyone has its own O:-)
> > > > > 
> > > > > > diff --git a/migration/savevm.c b/migration/savevm.c
> > > > > > index e436cb2..c8a71c8 100644
> > > > > > --- a/migration/savevm.c
> > > > > > +++ b/migration/savevm.c
> > > > > > @@ -73,7 +73,7 @@ static struct mig_cmd_args {
> > > > > >        [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
> > > > > >        [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name =
> > > > > > "OPEN_RETURN_PATH" },
> > > > > >        [MIG_CMD_PING]             = { .len =
> > > > > > sizeof(uint32_t), .name = "PING" },
> > > > > > -    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name =
> > > > > > "POSTCOPY_ADVISE" },
> > > > > > +    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name =
> > > > > > "POSTCOPY_ADVISE" },
> > > > > >        [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name =
> > > > > > "POSTCOPY_LISTEN" },
> > > > > >        [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name =
> > > > > > "POSTCOPY_RUN" },
> > > > > >        [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
> > > > > > @@ -827,12 +827,17 @@ int
> > > > > > qemu_savevm_send_packaged(QEMUFile *f, const uint8_t
> > > > > > *buf, size_t len)
> > > > > >    /* Send prior to any postcopy transfer */
> > > > > >    void qemu_savevm_send_postcopy_advise(QEMUFile *f)
> > > > > >    {
> > > > > > -    uint64_t tmp[2];
> > > > > > -    tmp[0] = cpu_to_be64(getpagesize());
> > > > > > -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> > > > > > +    if (migrate_postcopy_ram()) {
> > > > > > +        uint64_t tmp[2];
> > > > > > +        tmp[0] = cpu_to_be64(getpagesize());
> > > > > > +        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> > > > > > -    trace_qemu_savevm_send_postcopy_advise();
> > > > > > -    qemu_savevm_command_send(f,
> > > > > > MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
> > > > > > +        trace_qemu_savevm_send_postcopy_advise();
> > > > > > +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
> > > > > > +                                 16, (uint8_t *)tmp);
> > > > > > +    } else {
> > > > > > +        qemu_savevm_command_send(f,
> > > > > > MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
> > > > > > +    }
> > > > > >    }
> > > > > >    /* Sent prior to starting the destination running in
> > > > > > postcopy, discard pages
> > > > > I haven't yet figured out why you are reusing this command with a
> > > > > different number of parameters.
> > > > > For this to pass, I need that Dave comment on this.
> > > > > 
> > > > > So,
> > > > > 
> > > > > Reviewed-by: Juan Quintela <quintela@redhat.com>
> > > > > 
> > > > > conditioned that Dave agrees with this.
> > > > These parameters are unrelated if ram postcopy is disabled. So,
> > > > I should be
> > > > better to have a possibility of skipping them, then to send
> > > > unneeded numbers
> > > > and write separate code to read them from the stream (rewriting
> > > > loadvm_postcopy_handle_advise to just read these two numbers and
> > > > do nothing
> > > > about ram postcopy for this case).
> > > I think I'd prefer either a new command or keeping these fields
> > > (probably all 0 ?)
> > > my worry is what happens in the case if we add a 3rd postcopy
> > > subfeature;
> > > In your case we have three possibilities:
> > > 
> > >      a) Postcopy RAM only - 16 bytes
> > >      b) Postcopy persistent-dirty-bitmap only - 0 bytes
> > >      c) Both - 16 bytes
> > > 
> > > Lets say we added postcopy-foo in the future and it wanted to add
> > > another 16 bytes, what would it send if it was
> > > foo+persistent-dirty-bitmap and no RAM?
> > > We'd end up with 16 bytes sent but you'd have to be very careful
> > > never to get that confused with case (a) above.
> > > 
> > > (I don't feel too strongly about it though)
> > 
> > We would like to stick current solution if it is not a problem. About
> > postcopy-foo: on the one hand, the format may be determined from
> > migration capabilities, on the other hand we can add for example 4bytes
> > format identifier starting from postcopy-foo and make all parameters
> > size to be at least 20 bytes. Or something like this.
> > 
> 
> Ok?

Yes OK; just please make sure it's all very commented and obvious.

Dave

> > 
> > 
> > > 
> > > Dave
> > > 
> > > > -- 
> > > > Best regards,
> > > > Vladimir
> > > > 
> > > -- 
> > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > 
> > 
> 
> 
> -- 
> Best regards,
> Vladimir
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

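[Editor's note: the "4-byte format identifier" idea Vladimir floats above, which Dave accepts provided it is well commented, can be sketched as a classifier over the command length. Everything below is hypothetical: the enum names, values, and the >= 20 rule are invented for illustration, and a real stream would carry the identifier in network byte order.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical extensible ADVISE payload: keep the two legacy lengths
 * (0 and 16) as-is, and require any future postcopy entity to prefix
 * its payload with a 4-byte format identifier, so a 20+-byte payload
 * can never be confused with the legacy 16-byte RAM-only one. */

enum {
    ADVISE_FMT_LEGACY_BITMAP = 0,   /* len == 0: dirty bitmaps only   */
    ADVISE_FMT_LEGACY_RAM    = 1,   /* len == 16: page sizes, no id   */
    ADVISE_FMT_FOO           = 2,   /* len >= 20: explicit identifier */
};

static int classify_advise(const uint8_t *buf, size_t len, uint32_t *fmt)
{
    if (len == 0) {
        *fmt = ADVISE_FMT_LEGACY_BITMAP;
        return 0;
    }
    if (len == 16) {
        *fmt = ADVISE_FMT_LEGACY_RAM;
        return 0;
    }
    if (len >= 20) {
        uint32_t id;
        memcpy(&id, buf, 4);        /* new entities self-describe */
        *fmt = id;
        return 0;
    }
    return -1;                      /* lengths 1..15 and 17..19 invalid */
}
```

This keeps backward compatibility with the two existing cases while making every new case unambiguous, which addresses Dave's concern about a future foo+bitmap payload colliding with the 16-byte RAM-only case.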

* Re: [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2017-02-01 11:06         ` Vladimir Sementsov-Ogievskiy
@ 2017-02-03 11:13           ` Vladimir Sementsov-Ogievskiy
  2017-02-06 11:30             ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-02-03 11:13 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: quintela, qemu-block, qemu-devel, pbonzini, armbru, eblake, famz,
	stefanha, amit.shah, mreitz, kwolf, peter.maydell, den, jsnow,
	lirans

01.02.2017 14:06, Vladimir Sementsov-Ogievskiy wrote:
> 24.01.2017 22:53, Dr. David Alan Gilbert wrote:
>> * Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
>>> 24.01.2017 12:24, Juan Quintela wrote:
>>>> Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:
>>>>> Split common postcopy staff from ram postcopy staff.
>>>>>
>>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>>>>> <vsementsov@virtuozzo.com>
>>>>> ---
>>>>>    include/migration/migration.h |  1 +
>>>>>    migration/migration.c         | 39 
>>>>> +++++++++++++++++++++++++++------------
>>>>>    migration/postcopy-ram.c      |  4 +++-
>>>>>    migration/savevm.c            | 31 ++++++++++++++++++++++---------
>>>>>    4 files changed, 53 insertions(+), 22 deletions(-)
>>>>>
>>>> Hi
>>>>
>>>> {
>>>>>        MigrationState *s;
>>>>> @@ -1587,9 +1592,11 @@ static int postcopy_start(MigrationState 
>>>>> *ms, bool *old_vm_running)
>>>>>         * need to tell the destination to throw any pages it's 
>>>>> already received
>>>>>         * that are dirty
>>>>>         */
>>>>> -    if (ram_postcopy_send_discard_bitmap(ms)) {
>>>>> -        error_report("postcopy send discard bitmap failed");
>>>>> -        goto fail;
>>>>> +    if (migrate_postcopy_ram()) {
>>>>> +        if (ram_postcopy_send_discard_bitmap(ms)) {
>>>>> +            error_report("postcopy send discard bitmap failed");
>>>>> +            goto fail;
>>>>> +        }
>>>> I will have preffered that for the ram commands, to embed the
>>>> migrate_postocpy_ram() check inside them, but that is taste, and
>>>> everyone has its own O:-)
>>>>
>>>>> diff --git a/migration/savevm.c b/migration/savevm.c
>>>>> index e436cb2..c8a71c8 100644
>>>>> --- a/migration/savevm.c
>>>>> +++ b/migration/savevm.c
>>>>> @@ -73,7 +73,7 @@ static struct mig_cmd_args {
>>>>>        [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
>>>>>        [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = 
>>>>> "OPEN_RETURN_PATH" },
>>>>>        [MIG_CMD_PING]             = { .len = sizeof(uint32_t), 
>>>>> .name = "PING" },
>>>>> -    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = 
>>>>> "POSTCOPY_ADVISE" },
>>>>> +    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = 
>>>>> "POSTCOPY_ADVISE" },
>>>>>        [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = 
>>>>> "POSTCOPY_LISTEN" },
>>>>>        [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = 
>>>>> "POSTCOPY_RUN" },
>>>>>        [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
>>>>> @@ -827,12 +827,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, 
>>>>> const uint8_t *buf, size_t len)
>>>>>    /* Send prior to any postcopy transfer */
>>>>>    void qemu_savevm_send_postcopy_advise(QEMUFile *f)
>>>>>    {
>>>>> -    uint64_t tmp[2];
>>>>> -    tmp[0] = cpu_to_be64(getpagesize());
>>>>> -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
>>>>> +    if (migrate_postcopy_ram()) {
>>>>> +        uint64_t tmp[2];
>>>>> +        tmp[0] = cpu_to_be64(getpagesize());
>>>>> +        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
>>>>> -    trace_qemu_savevm_send_postcopy_advise();
>>>>> -    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, 
>>>>> (uint8_t *)tmp);
>>>>> +        trace_qemu_savevm_send_postcopy_advise();
>>>>> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
>>>>> +                                 16, (uint8_t *)tmp);
>>>>> +    } else {
>>>>> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, 
>>>>> NULL);
>>>>> +    }
>>>>>    }
>>>>>    /* Sent prior to starting the destination running in postcopy, 
>>>>> discard pages
>>>> I haven't yet figured out why you are reusing this command with a
>>>> different number of parameters.
>>>> For this to pass, I need that Dave comment on this.
>>>>
>>>> So,
>>>>
>>>> Reviewed-by: Juan Quintela <quintela@redhat.com>
>>>>
>>>> conditioned that Dave agrees with this.
>>> These parameters are unrelated if ram postcopy is disabled. So, I 
>>> should be
>>> better to have a possibility of skipping them, then to send unneeded 
>>> numbers
>>> and write separate code to read them from the stream (rewriting
>>> loadvm_postcopy_handle_advise to just read these two numbers and do 
>>> nothing
>>> about ram postcopy for this case).
>> I think I'd prefer either a new command or keeping these fields 
>> (probably all 0 ?)
>> my worry is what happens in the case if we add a 3rd postcopy 
>> subfeature;
>> In your case we have three possibilities:
>>
>>      a) Postcopy RAM only - 16 bytes
>>      b) Postcopy persistent-dirty-bitmap only - 0 bytes
>>      c) Both - 16 bytes
>>
>> Lets say we added postcopy-foo in the future and it wanted to add
>> another 16 bytes, what would it send if it was 
>> foo+persistent-dirty-bitmap and no RAM?
>> We'd end up with 16 bytes sent but you'd have to be very careful
>> never to get that confused with case (a) above.
>>
>> (I don't feel too strongly about it though)
>
> We would like to stick current solution if it is not a problem. About 
> postcopy-foo: on the one hand, the format may be determined from 
> migration capabilities, on the other hand we can add for example 
> 4bytes format identifier starting from postcopy-foo and make all 
> parameters size to be at least 20 bytes. Or something like this.
>

Ok?

>
>
>>
>> Dave
>>
>>> -- 
>>> Best regards,
>>> Vladimir
>>>
>> -- 
>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
>


-- 
Best regards,
Vladimir


* Re: [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2017-01-24 19:53       ` Dr. David Alan Gilbert
  2017-01-31 14:04         ` Vladimir Sementsov-Ogievskiy
@ 2017-02-01 11:06         ` Vladimir Sementsov-Ogievskiy
  2017-02-03 11:13           ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-02-01 11:06 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: quintela, qemu-block, qemu-devel, pbonzini, armbru, eblake, famz,
	stefanha, amit.shah, mreitz, kwolf, peter.maydell, den, jsnow,
	lirans

24.01.2017 22:53, Dr. David Alan Gilbert wrote:
> * Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
>> 24.01.2017 12:24, Juan Quintela wrote:
>>> Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:
>>>> Split common postcopy staff from ram postcopy staff.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>    include/migration/migration.h |  1 +
>>>>    migration/migration.c         | 39 +++++++++++++++++++++++++++------------
>>>>    migration/postcopy-ram.c      |  4 +++-
>>>>    migration/savevm.c            | 31 ++++++++++++++++++++++---------
>>>>    4 files changed, 53 insertions(+), 22 deletions(-)
>>>>
>>> Hi
>>>
>>> {
>>>>        MigrationState *s;
>>>> @@ -1587,9 +1592,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
>>>>         * need to tell the destination to throw any pages it's already received
>>>>         * that are dirty
>>>>         */
>>>> -    if (ram_postcopy_send_discard_bitmap(ms)) {
>>>> -        error_report("postcopy send discard bitmap failed");
>>>> -        goto fail;
>>>> +    if (migrate_postcopy_ram()) {
>>>> +        if (ram_postcopy_send_discard_bitmap(ms)) {
>>>> +            error_report("postcopy send discard bitmap failed");
>>>> +            goto fail;
>>>> +        }
>>> I will have preffered that for the ram commands, to embed the
>>> migrate_postocpy_ram() check inside them, but that is taste, and
>>> everyone has its own O:-)
>>>
>>>> diff --git a/migration/savevm.c b/migration/savevm.c
>>>> index e436cb2..c8a71c8 100644
>>>> --- a/migration/savevm.c
>>>> +++ b/migration/savevm.c
>>>> @@ -73,7 +73,7 @@ static struct mig_cmd_args {
>>>>        [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
>>>>        [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
>>>>        [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
>>>> -    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
>>>> +    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
>>>>        [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
>>>>        [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
>>>>        [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
>>>> @@ -827,12 +827,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
>>>>    /* Send prior to any postcopy transfer */
>>>>    void qemu_savevm_send_postcopy_advise(QEMUFile *f)
>>>>    {
>>>> -    uint64_t tmp[2];
>>>> -    tmp[0] = cpu_to_be64(getpagesize());
>>>> -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
>>>> +    if (migrate_postcopy_ram()) {
>>>> +        uint64_t tmp[2];
>>>> +        tmp[0] = cpu_to_be64(getpagesize());
>>>> +        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
>>>> -    trace_qemu_savevm_send_postcopy_advise();
>>>> -    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
>>>> +        trace_qemu_savevm_send_postcopy_advise();
>>>> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
>>>> +                                 16, (uint8_t *)tmp);
>>>> +    } else {
>>>> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
>>>> +    }
>>>>    }
>>>>    /* Sent prior to starting the destination running in postcopy, discard pages
>>> I haven't yet figured out why you are reusing this command with a
>>> different number of parameters.
>>> For this to pass, I need that Dave comment on this.
>>>
>>> So,
>>>
>>> Reviewed-by: Juan Quintela <quintela@redhat.com>
>>>
>>> conditioned that Dave agrees with this.
>> These parameters are irrelevant if ram postcopy is disabled. So it is
>> better to have the possibility of skipping them than to send unneeded numbers
>> and write separate code to read them from the stream (rewriting
>> loadvm_postcopy_handle_advise to just read these two numbers and do nothing
>> about ram postcopy in this case).
> I think I'd prefer either a new command or keeping these fields (probably all 0?);
> my worry is what happens if we add a 3rd postcopy subfeature.
> In your case we have three possibilities:
>
>      a) Postcopy RAM only - 16 bytes
>      b) Postcopy persistent-dirty-bitmap only - 0 bytes
>      c) Both - 16 bytes
>
> Let's say we added postcopy-foo in the future and it wanted to add
> another 16 bytes, what would it send if it was foo+persistent-dirty-bitmap and no RAM?
> We'd end up with 16 bytes sent but you'd have to be very careful
> never to get that confused with case (a) above.
>
> (I don't feel too strongly about it though)

We would like to stick with the current solution if that is not a problem.
As for postcopy-foo: on the one hand, the format may be determined from the
migration capabilities; on the other hand, we could add, for example, a
4-byte format identifier starting from postcopy-foo and make all parameter
payloads at least 20 bytes. Or something like this.
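[Editor's note] The 4-byte format-identifier idea can be sketched in a few lines of C. The feature bits and helper below are hypothetical illustrations, not QEMU code: because the payload would begin with a feature mask, two feature combinations that happen to produce the same length remain unambiguous.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical feature bits for a self-describing POSTCOPY_ADVISE
 * payload; none of these names exist in QEMU. */
#define ADVISE_F_RAM    (1u << 0)   /* carries 16 bytes of page sizes */
#define ADVISE_F_BITMAP (1u << 1)   /* carries no extra parameters    */
#define ADVISE_F_FOO    (1u << 2)   /* imaginary future 16-byte ext.  */

/* Payload length: a 4-byte feature mask followed by the parameters of
 * each advertised feature.  Lengths may collide (RAM-only and
 * foo+bitmap are both 20 bytes), but the leading mask, not the
 * length, tells the receiver which parameters follow. */
static size_t advise_len(uint32_t features)
{
    size_t len = 4;                       /* the mask itself */
    if (features & ADVISE_F_RAM) {
        len += 2 * sizeof(uint64_t);      /* host + target page size */
    }
    if (features & ADVISE_F_FOO) {
        len += 16;                        /* imaginary extra params */
    }
    return len;
}
```

In this sketch RAM-only and foo+persistent-dirty-bitmap both yield 20-byte payloads, yet the receiver never confuses them, which addresses exactly the ambiguity Dave raises about a length-only scheme.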



>
> Dave
>
>> -- 
>> Best regards,
>> Vladimir
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


-- 
Best regards,
Vladimir

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2017-01-31 14:04         ` Vladimir Sementsov-Ogievskiy
@ 2017-01-31 15:15           ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 35+ messages in thread
From: Dr. David Alan Gilbert @ 2017-01-31 15:15 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: quintela, qemu-block, qemu-devel, pbonzini, armbru, eblake, famz,
	stefanha, amit.shah, mreitz, kwolf, peter.maydell, den, jsnow,
	lirans

* Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
> 24.01.2017 22:53, Dr. David Alan Gilbert wrote:
> > * Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
> > > 24.01.2017 12:24, Juan Quintela wrote:
> > > > Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:
> > > > > Split common postcopy stuff from ram postcopy stuff.
> > > > > 
> > > > > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > > > > ---
> > > > >    include/migration/migration.h |  1 +
> > > > >    migration/migration.c         | 39 +++++++++++++++++++++++++++------------
> > > > >    migration/postcopy-ram.c      |  4 +++-
> > > > >    migration/savevm.c            | 31 ++++++++++++++++++++++---------
> > > > >    4 files changed, 53 insertions(+), 22 deletions(-)
> > > > > 
> > > > Hi
> > > > 
> > > > {
> > > > >        MigrationState *s;
> > > > > @@ -1587,9 +1592,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
> > > > >         * need to tell the destination to throw any pages it's already received
> > > > >         * that are dirty
> > > > >         */
> > > > > -    if (ram_postcopy_send_discard_bitmap(ms)) {
> > > > > -        error_report("postcopy send discard bitmap failed");
> > > > > -        goto fail;
> > > > > +    if (migrate_postcopy_ram()) {
> > > > > +        if (ram_postcopy_send_discard_bitmap(ms)) {
> > > > > +            error_report("postcopy send discard bitmap failed");
> > > > > +            goto fail;
> > > > > +        }
>>> I would have preferred, for the ram commands, to embed the
>>> migrate_postcopy_ram() check inside them, but that is a matter of taste,
>>> and everyone has their own O:-)
> > > > 
> > > > > diff --git a/migration/savevm.c b/migration/savevm.c
> > > > > index e436cb2..c8a71c8 100644
> > > > > --- a/migration/savevm.c
> > > > > +++ b/migration/savevm.c
> > > > > @@ -73,7 +73,7 @@ static struct mig_cmd_args {
> > > > >        [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
> > > > >        [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
> > > > >        [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
> > > > > -    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
> > > > > +    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
> > > > >        [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
> > > > >        [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
> > > > >        [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
> > > > > @@ -827,12 +827,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
> > > > >    /* Send prior to any postcopy transfer */
> > > > >    void qemu_savevm_send_postcopy_advise(QEMUFile *f)
> > > > >    {
> > > > > -    uint64_t tmp[2];
> > > > > -    tmp[0] = cpu_to_be64(getpagesize());
> > > > > -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> > > > > +    if (migrate_postcopy_ram()) {
> > > > > +        uint64_t tmp[2];
> > > > > +        tmp[0] = cpu_to_be64(getpagesize());
> > > > > +        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> > > > > -    trace_qemu_savevm_send_postcopy_advise();
> > > > > -    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
> > > > > +        trace_qemu_savevm_send_postcopy_advise();
> > > > > +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
> > > > > +                                 16, (uint8_t *)tmp);
> > > > > +    } else {
> > > > > +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
> > > > > +    }
> > > > >    }
> > > > >    /* Sent prior to starting the destination running in postcopy, discard pages
> > > > I haven't yet figured out why you are reusing this command with a
> > > > different number of parameters.
> > > > For this to pass, I need Dave to comment on this.
> > > > 
> > > > So,
> > > > 
> > > > Reviewed-by: Juan Quintela <quintela@redhat.com>
> > > > 
> > > > conditioned that Dave agrees with this.
> > > These parameters are irrelevant if ram postcopy is disabled. So it is
> > > better to have the possibility of skipping them than to send unneeded numbers
> > > and write separate code to read them from the stream (rewriting
> > > loadvm_postcopy_handle_advise to just read these two numbers and do nothing
> > > about ram postcopy in this case).
> > I think I'd prefer either a new command or keeping these fields (probably all 0?);
> > my worry is what happens if we add a 3rd postcopy subfeature.
> > In your case we have three possibilities:
> > 
> >      a) Postcopy RAM only - 16 bytes
> >      b) Postcopy persistent-dirty-bitmap only - 0 bytes
> >      c) Both - 16 bytes
> > 
> > Let's say we added postcopy-foo in the future and it wanted to add
> > another 16 bytes, what would it send if it was foo+persistent-dirty-bitmap and no RAM?
> > We'd end up with 16 bytes sent but you'd have to be very careful
> > never to get that confused with case (a) above.
> > 
> > (I don't feel too strongly about it though)
> 
> Hmm... Actually I use migrate_postcopy_ram() to distinguish these things in
> loadvm_postcopy_handle_advise, and it seems that is a mistake. Are
> migration capabilities available on the target? On the other hand,
> postcopy-test doesn't fail...

Libvirt does set it on the destination, and it's already useful for checking
that the destination host has the appropriate kernel userfault support,
so I'm fine with requiring it.
However, it's good, where possible, to fail nicely if someone doesn't set it.
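[Editor's note] The "fail nicely" behaviour could be as simple as the following sketch, run on the destination before parsing the ADVISE payload. The helper name and signature are hypothetical, not actual QEMU code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch: if the stream carries a postcopy ADVISE but the destination
 * was not started with the matching capability, return an error string
 * instead of misinterpreting the payload.  Hypothetical helper. */
static const char *check_advise_capability(bool cap_enabled,
                                           bool stream_has_advise)
{
    if (stream_has_advise && !cap_enabled) {
        return "postcopy advise received, but capability not enabled "
               "on destination";
    }
    return NULL;  /* OK to proceed */
}
```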

Dave

> > 
> > Dave
> > 
> > > -- 
> > > Best regards,
> > > Vladimir
> > > 
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> 
> -- 
> Best regards,
> Vladimir
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2017-01-24 19:53       ` Dr. David Alan Gilbert
@ 2017-01-31 14:04         ` Vladimir Sementsov-Ogievskiy
  2017-01-31 15:15           ` Dr. David Alan Gilbert
  2017-02-01 11:06         ` Vladimir Sementsov-Ogievskiy
  1 sibling, 1 reply; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-01-31 14:04 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: quintela, qemu-block, qemu-devel, pbonzini, armbru, eblake, famz,
	stefanha, amit.shah, mreitz, kwolf, peter.maydell, den, jsnow,
	lirans

24.01.2017 22:53, Dr. David Alan Gilbert wrote:
> * Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
>> 24.01.2017 12:24, Juan Quintela wrote:
>>> Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:
>>>> Split common postcopy stuff from ram postcopy stuff.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>>> ---
>>>>    include/migration/migration.h |  1 +
>>>>    migration/migration.c         | 39 +++++++++++++++++++++++++++------------
>>>>    migration/postcopy-ram.c      |  4 +++-
>>>>    migration/savevm.c            | 31 ++++++++++++++++++++++---------
>>>>    4 files changed, 53 insertions(+), 22 deletions(-)
>>>>
>>> Hi
>>>
>>> {
>>>>        MigrationState *s;
>>>> @@ -1587,9 +1592,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
>>>>         * need to tell the destination to throw any pages it's already received
>>>>         * that are dirty
>>>>         */
>>>> -    if (ram_postcopy_send_discard_bitmap(ms)) {
>>>> -        error_report("postcopy send discard bitmap failed");
>>>> -        goto fail;
>>>> +    if (migrate_postcopy_ram()) {
>>>> +        if (ram_postcopy_send_discard_bitmap(ms)) {
>>>> +            error_report("postcopy send discard bitmap failed");
>>>> +            goto fail;
>>>> +        }
>>> I would have preferred, for the ram commands, to embed the
>>> migrate_postcopy_ram() check inside them, but that is a matter of taste,
>>> and everyone has their own O:-)
>>>
>>>> diff --git a/migration/savevm.c b/migration/savevm.c
>>>> index e436cb2..c8a71c8 100644
>>>> --- a/migration/savevm.c
>>>> +++ b/migration/savevm.c
>>>> @@ -73,7 +73,7 @@ static struct mig_cmd_args {
>>>>        [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
>>>>        [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
>>>>        [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
>>>> -    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
>>>> +    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
>>>>        [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
>>>>        [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
>>>>        [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
>>>> @@ -827,12 +827,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
>>>>    /* Send prior to any postcopy transfer */
>>>>    void qemu_savevm_send_postcopy_advise(QEMUFile *f)
>>>>    {
>>>> -    uint64_t tmp[2];
>>>> -    tmp[0] = cpu_to_be64(getpagesize());
>>>> -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
>>>> +    if (migrate_postcopy_ram()) {
>>>> +        uint64_t tmp[2];
>>>> +        tmp[0] = cpu_to_be64(getpagesize());
>>>> +        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
>>>> -    trace_qemu_savevm_send_postcopy_advise();
>>>> -    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
>>>> +        trace_qemu_savevm_send_postcopy_advise();
>>>> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
>>>> +                                 16, (uint8_t *)tmp);
>>>> +    } else {
>>>> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
>>>> +    }
>>>>    }
>>>>    /* Sent prior to starting the destination running in postcopy, discard pages
>>> I haven't yet figured out why you are reusing this command with a
>>> different number of parameters.
>>> For this to pass, I need Dave to comment on this.
>>>
>>> So,
>>>
>>> Reviewed-by: Juan Quintela <quintela@redhat.com>
>>>
>>> conditioned that Dave agrees with this.
>> These parameters are irrelevant if ram postcopy is disabled. So it is
>> better to have the possibility of skipping them than to send unneeded numbers
>> and write separate code to read them from the stream (rewriting
>> loadvm_postcopy_handle_advise to just read these two numbers and do nothing
>> about ram postcopy in this case).
> I think I'd prefer either a new command or keeping these fields (probably all 0?);
> my worry is what happens if we add a 3rd postcopy subfeature.
> In your case we have three possibilities:
>
>      a) Postcopy RAM only - 16 bytes
>      b) Postcopy persistent-dirty-bitmap only - 0 bytes
>      c) Both - 16 bytes
>
> Let's say we added postcopy-foo in the future and it wanted to add
> another 16 bytes, what would it send if it was foo+persistent-dirty-bitmap and no RAM?
> We'd end up with 16 bytes sent but you'd have to be very careful
> never to get that confused with case (a) above.
>
> (I don't feel too strongly about it though)

Hmm... Actually I use migrate_postcopy_ram() to distinguish these things in
loadvm_postcopy_handle_advise, and it seems that is a mistake. Are
migration capabilities available on the target? On the other hand,
postcopy-test doesn't fail...

>
> Dave
>
>> -- 
>> Best regards,
>> Vladimir
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


-- 
Best regards,
Vladimir


* Re: [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2017-01-24  9:35     ` Vladimir Sementsov-Ogievskiy
@ 2017-01-24 19:53       ` Dr. David Alan Gilbert
  2017-01-31 14:04         ` Vladimir Sementsov-Ogievskiy
  2017-02-01 11:06         ` Vladimir Sementsov-Ogievskiy
  0 siblings, 2 replies; 35+ messages in thread
From: Dr. David Alan Gilbert @ 2017-01-24 19:53 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: quintela, qemu-block, qemu-devel, pbonzini, armbru, eblake, famz,
	stefanha, amit.shah, mreitz, kwolf, peter.maydell, den, jsnow,
	lirans

* Vladimir Sementsov-Ogievskiy (vsementsov@virtuozzo.com) wrote:
> 24.01.2017 12:24, Juan Quintela wrote:
> > Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:
> > > Split common postcopy stuff from ram postcopy stuff.
> > > 
> > > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > > ---
> > >   include/migration/migration.h |  1 +
> > >   migration/migration.c         | 39 +++++++++++++++++++++++++++------------
> > >   migration/postcopy-ram.c      |  4 +++-
> > >   migration/savevm.c            | 31 ++++++++++++++++++++++---------
> > >   4 files changed, 53 insertions(+), 22 deletions(-)
> > > 
> > Hi
> > 
> > {
> > >       MigrationState *s;
> > > @@ -1587,9 +1592,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
> > >        * need to tell the destination to throw any pages it's already received
> > >        * that are dirty
> > >        */
> > > -    if (ram_postcopy_send_discard_bitmap(ms)) {
> > > -        error_report("postcopy send discard bitmap failed");
> > > -        goto fail;
> > > +    if (migrate_postcopy_ram()) {
> > > +        if (ram_postcopy_send_discard_bitmap(ms)) {
> > > +            error_report("postcopy send discard bitmap failed");
> > > +            goto fail;
> > > +        }
> > I would have preferred, for the ram commands, to embed the
> > migrate_postcopy_ram() check inside them, but that is a matter of taste,
> > and everyone has their own O:-)
> > 
> > > diff --git a/migration/savevm.c b/migration/savevm.c
> > > index e436cb2..c8a71c8 100644
> > > --- a/migration/savevm.c
> > > +++ b/migration/savevm.c
> > > @@ -73,7 +73,7 @@ static struct mig_cmd_args {
> > >       [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
> > >       [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
> > >       [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
> > > -    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
> > > +    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
> > >       [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
> > >       [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
> > >       [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
> > > @@ -827,12 +827,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
> > >   /* Send prior to any postcopy transfer */
> > >   void qemu_savevm_send_postcopy_advise(QEMUFile *f)
> > >   {
> > > -    uint64_t tmp[2];
> > > -    tmp[0] = cpu_to_be64(getpagesize());
> > > -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> > > +    if (migrate_postcopy_ram()) {
> > > +        uint64_t tmp[2];
> > > +        tmp[0] = cpu_to_be64(getpagesize());
> > > +        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> > > -    trace_qemu_savevm_send_postcopy_advise();
> > > -    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
> > > +        trace_qemu_savevm_send_postcopy_advise();
> > > +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
> > > +                                 16, (uint8_t *)tmp);
> > > +    } else {
> > > +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
> > > +    }
> > >   }
> > >   /* Sent prior to starting the destination running in postcopy, discard pages
> > I haven't yet figured out why you are reusing this command with a
> > different number of parameters.
> > For this to pass, I need Dave to comment on this.
> > 
> > So,
> > 
> > Reviewed-by: Juan Quintela <quintela@redhat.com>
> > 
> > conditioned that Dave agrees with this.
> 
> These parameters are irrelevant if ram postcopy is disabled. So it is
> better to have the possibility of skipping them than to send unneeded numbers
> and write separate code to read them from the stream (rewriting
> loadvm_postcopy_handle_advise to just read these two numbers and do nothing
> about ram postcopy in this case).

I think I'd prefer either a new command or keeping these fields (probably all 0?);
my worry is what happens if we add a 3rd postcopy subfeature.
In your case we have three possibilities:

    a) Postcopy RAM only - 16 bytes
    b) Postcopy persistent-dirty-bitmap only - 0 bytes
    c) Both - 16 bytes

Let's say we added postcopy-foo in the future and it wanted to add
another 16 bytes, what would it send if it was foo+persistent-dirty-bitmap and no RAM?
We'd end up with 16 bytes sent but you'd have to be very careful
never to get that confused with case (a) above.

(I don't feel too strongly about it though)

Dave

> -- 
> Best regards,
> Vladimir
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2017-01-24  9:24   ` Juan Quintela
@ 2017-01-24  9:35     ` Vladimir Sementsov-Ogievskiy
  2017-01-24 19:53       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2017-01-24  9:35 UTC (permalink / raw)
  To: quintela
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	amit.shah, mreitz, kwolf, peter.maydell, dgilbert, den, jsnow,
	lirans

24.01.2017 12:24, Juan Quintela wrote:
> Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:
>> Split common postcopy stuff from ram postcopy stuff.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   include/migration/migration.h |  1 +
>>   migration/migration.c         | 39 +++++++++++++++++++++++++++------------
>>   migration/postcopy-ram.c      |  4 +++-
>>   migration/savevm.c            | 31 ++++++++++++++++++++++---------
>>   4 files changed, 53 insertions(+), 22 deletions(-)
>>
> Hi
>
> {
>>       MigrationState *s;
>> @@ -1587,9 +1592,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
>>        * need to tell the destination to throw any pages it's already received
>>        * that are dirty
>>        */
>> -    if (ram_postcopy_send_discard_bitmap(ms)) {
>> -        error_report("postcopy send discard bitmap failed");
>> -        goto fail;
>> +    if (migrate_postcopy_ram()) {
>> +        if (ram_postcopy_send_discard_bitmap(ms)) {
>> +            error_report("postcopy send discard bitmap failed");
>> +            goto fail;
>> +        }
> I would have preferred, for the ram commands, to embed the
> migrate_postcopy_ram() check inside them, but that is a matter of taste,
> and everyone has their own O:-)
>
>> diff --git a/migration/savevm.c b/migration/savevm.c
>> index e436cb2..c8a71c8 100644
>> --- a/migration/savevm.c
>> +++ b/migration/savevm.c
>> @@ -73,7 +73,7 @@ static struct mig_cmd_args {
>>       [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
>>       [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
>>       [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
>> -    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
>> +    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
>>       [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
>>       [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
>>       [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
>> @@ -827,12 +827,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
>>   /* Send prior to any postcopy transfer */
>>   void qemu_savevm_send_postcopy_advise(QEMUFile *f)
>>   {
>> -    uint64_t tmp[2];
>> -    tmp[0] = cpu_to_be64(getpagesize());
>> -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
>> +    if (migrate_postcopy_ram()) {
>> +        uint64_t tmp[2];
>> +        tmp[0] = cpu_to_be64(getpagesize());
>> +        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
>>   
>> -    trace_qemu_savevm_send_postcopy_advise();
>> -    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
>> +        trace_qemu_savevm_send_postcopy_advise();
>> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
>> +                                 16, (uint8_t *)tmp);
>> +    } else {
>> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
>> +    }
>>   }
>>   
>>   /* Sent prior to starting the destination running in postcopy, discard pages
> I haven't yet figured out why you are reusing this command with a
> different number of parameters.
> For this to pass, I need Dave to comment on this.
>
> So,
>
> Reviewed-by: Juan Quintela <quintela@redhat.com>
>
> conditioned that Dave agrees with this.

These parameters are irrelevant if ram postcopy is disabled. So it is
better to have the possibility of skipping them than to send unneeded
numbers and write separate code to read them from the stream (rewriting
loadvm_postcopy_handle_advise to just read these two numbers and do
nothing about ram postcopy in this case).
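[Editor's note] On the receive side, the skipping described above amounts to checking the payload length against the enabled capability. A rough sketch with a hypothetical helper, not the real loadvm_postcopy_handle_advise:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: with a variable-length ADVISE command, the destination only
 * expects the two 8-byte page-size words when RAM postcopy is enabled;
 * otherwise the payload should be empty and there is nothing to read.
 * Returns 0 on a consistent length, -1 otherwise.  Hypothetical code. */
static int advise_check_len(bool ram_postcopy_enabled, uint32_t len)
{
    if (ram_postcopy_enabled) {
        return len == 2 * sizeof(uint64_t) ? 0 : -1;
    }
    return len == 0 ? 0 : -1;
}
```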

-- 
Best regards,
Vladimir


* Re: [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2016-11-22 17:54 ` [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy Vladimir Sementsov-Ogievskiy
@ 2017-01-24  9:24   ` Juan Quintela
  2017-01-24  9:35     ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 35+ messages in thread
From: Juan Quintela @ 2017-01-24  9:24 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-block, qemu-devel, pbonzini, armbru, eblake, famz, stefanha,
	amit.shah, mreitz, kwolf, peter.maydell, dgilbert, den, jsnow,
	lirans

Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> wrote:
> Split common postcopy stuff from ram postcopy stuff.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  include/migration/migration.h |  1 +
>  migration/migration.c         | 39 +++++++++++++++++++++++++++------------
>  migration/postcopy-ram.c      |  4 +++-
>  migration/savevm.c            | 31 ++++++++++++++++++++++---------
>  4 files changed, 53 insertions(+), 22 deletions(-)
>

Hi

{
>      MigrationState *s;
> @@ -1587,9 +1592,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
>       * need to tell the destination to throw any pages it's already received
>       * that are dirty
>       */
> -    if (ram_postcopy_send_discard_bitmap(ms)) {
> -        error_report("postcopy send discard bitmap failed");
> -        goto fail;
> +    if (migrate_postcopy_ram()) {
> +        if (ram_postcopy_send_discard_bitmap(ms)) {
> +            error_report("postcopy send discard bitmap failed");
> +            goto fail;
> +        }

I would have preferred, for the ram commands, to embed the
migrate_postcopy_ram() check inside them, but that is a matter of taste,
and everyone has their own O:-)

> diff --git a/migration/savevm.c b/migration/savevm.c
> index e436cb2..c8a71c8 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -73,7 +73,7 @@ static struct mig_cmd_args {
>      [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
>      [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
>      [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
> -    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
> +    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
>      [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
>      [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
>      [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
> @@ -827,12 +827,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
>  /* Send prior to any postcopy transfer */
>  void qemu_savevm_send_postcopy_advise(QEMUFile *f)
>  {
> -    uint64_t tmp[2];
> -    tmp[0] = cpu_to_be64(getpagesize());
> -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> +    if (migrate_postcopy_ram()) {
> +        uint64_t tmp[2];
> +        tmp[0] = cpu_to_be64(getpagesize());
> +        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
>  
> -    trace_qemu_savevm_send_postcopy_advise();
> -    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
> +        trace_qemu_savevm_send_postcopy_advise();
> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
> +                                 16, (uint8_t *)tmp);
> +    } else {
> +        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
> +    }
>  }
>  
>  /* Sent prior to starting the destination running in postcopy, discard pages

I haven't yet figured out why you are reusing this command with a
different number of parameters.
For this to pass, I need Dave to comment on this.

So,

Reviewed-by: Juan Quintela <quintela@redhat.com>

conditioned that Dave agrees with this.


* [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2016-11-22 17:54 [Qemu-devel] [PATCH v4 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
@ 2016-11-22 17:54 ` Vladimir Sementsov-Ogievskiy
  2017-01-24  9:24   ` Juan Quintela
  0 siblings, 1 reply; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-11-22 17:54 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, den, jsnow, vsementsov,
	lirans

Split common postcopy stuff from ram postcopy stuff.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/migration/migration.h |  1 +
 migration/migration.c         | 39 +++++++++++++++++++++++++++------------
 migration/postcopy-ram.c      |  4 +++-
 migration/savevm.c            | 31 ++++++++++++++++++++++---------
 4 files changed, 53 insertions(+), 22 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index c309d23..d06f6c4 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -294,6 +294,7 @@ void migrate_add_blocker(Error *reason);
  */
 void migrate_del_blocker(Error *reason);
 
+bool migrate_postcopy(void);
 bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
 
diff --git a/migration/migration.c b/migration/migration.c
index f498ab8..5966dff 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1260,6 +1260,11 @@ bool migrate_postcopy_ram(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM];
 }
 
+bool migrate_postcopy(void)
+{
+    return migrate_postcopy_ram();
+}
+
 bool migrate_auto_converge(void)
 {
     MigrationState *s;
@@ -1587,9 +1592,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * need to tell the destination to throw any pages it's already received
      * that are dirty
      */
-    if (ram_postcopy_send_discard_bitmap(ms)) {
-        error_report("postcopy send discard bitmap failed");
-        goto fail;
+    if (migrate_postcopy_ram()) {
+        if (ram_postcopy_send_discard_bitmap(ms)) {
+            error_report("postcopy send discard bitmap failed");
+            goto fail;
+        }
     }
 
     /*
@@ -1598,8 +1605,10 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * wrap their state up here
      */
     qemu_file_set_rate_limit(ms->to_dst_file, INT64_MAX);
-    /* Ping just for debugging, helps line traces up */
-    qemu_savevm_send_ping(ms->to_dst_file, 2);
+    if (migrate_postcopy_ram()) {
+        /* Ping just for debugging, helps line traces up */
+        qemu_savevm_send_ping(ms->to_dst_file, 2);
+    }
 
     /*
      * While loading the device state we may trigger page transfer
@@ -1624,7 +1633,9 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
     qemu_savevm_send_postcopy_listen(fb);
 
     qemu_savevm_state_complete_precopy(fb, false);
-    qemu_savevm_send_ping(fb, 3);
+    if (migrate_postcopy_ram()) {
+        qemu_savevm_send_ping(fb, 3);
+    }
 
     qemu_savevm_send_postcopy_run(fb);
 
@@ -1647,11 +1658,13 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
 
     qemu_mutex_unlock_iothread();
 
-    /*
-     * Although this ping is just for debug, it could potentially be
-     * used for getting a better measurement of downtime at the source.
-     */
-    qemu_savevm_send_ping(ms->to_dst_file, 4);
+    if (migrate_postcopy_ram()) {
+        /*
+         * Although this ping is just for debug, it could potentially be
+         * used for getting a better measurement of downtime at the source.
+         */
+        qemu_savevm_send_ping(ms->to_dst_file, 4);
+    }
 
     ret = qemu_file_get_error(ms->to_dst_file);
     if (ret) {
@@ -1802,7 +1815,9 @@ static void *migration_thread(void *opaque)
 
         /* And do a ping that will make stuff easier to debug */
         qemu_savevm_send_ping(s->to_dst_file, 1);
+    }
 
+    if (migrate_postcopy()) {
         /*
          * Tell the destination that we *might* want to do postcopy later;
          * if the other end can't do postcopy it should fail now, nice and
@@ -1836,7 +1851,7 @@ static void *migration_thread(void *opaque)
             if (pending_size && pending_size >= max_size) {
                 /* Still a significant amount to transfer */
 
-                if (migrate_postcopy_ram() &&
+                if (migrate_postcopy() &&
                     s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
                     pend_nonpost <= max_size &&
                     atomic_read(&s->start_postcopy)) {
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index a40dddb..aef5690 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -339,7 +339,9 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
     }
 
     postcopy_state_set(POSTCOPY_INCOMING_END);
-    migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    if (migrate_postcopy_ram()) {
+        migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    }
 
     if (mis->postcopy_tmp_page) {
         munmap(mis->postcopy_tmp_page, getpagesize());
diff --git a/migration/savevm.c b/migration/savevm.c
index e436cb2..c8a71c8 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -73,7 +73,7 @@ static struct mig_cmd_args {
     [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
     [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
     [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
-    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
+    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
     [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
     [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
     [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
@@ -827,12 +827,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
 /* Send prior to any postcopy transfer */
 void qemu_savevm_send_postcopy_advise(QEMUFile *f)
 {
-    uint64_t tmp[2];
-    tmp[0] = cpu_to_be64(getpagesize());
-    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
+    if (migrate_postcopy_ram()) {
+        uint64_t tmp[2];
+        tmp[0] = cpu_to_be64(getpagesize());
+        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
 
-    trace_qemu_savevm_send_postcopy_advise();
-    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
+        trace_qemu_savevm_send_postcopy_advise();
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
+                                 16, (uint8_t *)tmp);
+    } else {
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
+    }
 }
 
 /* Sent prior to starting the destination running in postcopy, discard pages
@@ -1315,6 +1320,10 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
         return -1;
     }
 
+    if (!migrate_postcopy_ram()) {
+        return 0;
+    }
+
     if (!postcopy_ram_supported_by_host()) {
         return -1;
     }
@@ -1515,7 +1524,9 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
          * A rare case, we entered listen without having to do any discards,
          * so do the setup that's normally done at the time of the 1st discard.
          */
-        postcopy_ram_prepare_discard(mis);
+        if (migrate_postcopy_ram()) {
+            postcopy_ram_prepare_discard(mis);
+        }
     }
 
     /*
@@ -1523,8 +1534,10 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
      * However, at this point the CPU shouldn't be running, and the IO
      * shouldn't be doing anything yet so don't actually expect requests
      */
-    if (postcopy_ram_enable_notify(mis)) {
-        return -1;
+    if (migrate_postcopy_ram()) {
+        if (postcopy_ram_enable_notify(mis)) {
+            return -1;
+        }
     }
 
     if (mis->have_listen_thread) {
-- 
1.8.3.1


* [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy
  2016-11-21 15:29 [Qemu-devel] [PATCH v3 " Vladimir Sementsov-Ogievskiy
@ 2016-11-21 15:29 ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 35+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2016-11-21 15:29 UTC (permalink / raw)
  To: qemu-block, qemu-devel
  Cc: pbonzini, armbru, eblake, famz, stefanha, amit.shah, quintela,
	mreitz, kwolf, peter.maydell, dgilbert, den, jsnow, vsementsov,
	lirans

Split common postcopy stuff from ram postcopy stuff.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 include/migration/migration.h |  1 +
 migration/migration.c         | 39 +++++++++++++++++++++++++++------------
 migration/postcopy-ram.c      |  4 +++-
 migration/savevm.c            | 31 ++++++++++++++++++++++---------
 4 files changed, 53 insertions(+), 22 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index c309d23..d06f6c4 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -294,6 +294,7 @@ void migrate_add_blocker(Error *reason);
  */
 void migrate_del_blocker(Error *reason);
 
+bool migrate_postcopy(void);
 bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
 
diff --git a/migration/migration.c b/migration/migration.c
index e331f28..4b5e282 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1259,6 +1259,11 @@ bool migrate_postcopy_ram(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM];
 }
 
+bool migrate_postcopy(void)
+{
+    return migrate_postcopy_ram();
+}
+
 bool migrate_auto_converge(void)
 {
     MigrationState *s;
@@ -1586,9 +1591,11 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * need to tell the destination to throw any pages it's already received
      * that are dirty
      */
-    if (ram_postcopy_send_discard_bitmap(ms)) {
-        error_report("postcopy send discard bitmap failed");
-        goto fail;
+    if (migrate_postcopy_ram()) {
+        if (ram_postcopy_send_discard_bitmap(ms)) {
+            error_report("postcopy send discard bitmap failed");
+            goto fail;
+        }
     }
 
     /*
@@ -1597,8 +1604,10 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
      * wrap their state up here
      */
     qemu_file_set_rate_limit(ms->to_dst_file, INT64_MAX);
-    /* Ping just for debugging, helps line traces up */
-    qemu_savevm_send_ping(ms->to_dst_file, 2);
+    if (migrate_postcopy_ram()) {
+        /* Ping just for debugging, helps line traces up */
+        qemu_savevm_send_ping(ms->to_dst_file, 2);
+    }
 
     /*
      * While loading the device state we may trigger page transfer
@@ -1623,7 +1632,9 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
     qemu_savevm_send_postcopy_listen(fb);
 
     qemu_savevm_state_complete_precopy(fb, false);
-    qemu_savevm_send_ping(fb, 3);
+    if (migrate_postcopy_ram()) {
+        qemu_savevm_send_ping(fb, 3);
+    }
 
     qemu_savevm_send_postcopy_run(fb);
 
@@ -1646,11 +1657,13 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
 
     qemu_mutex_unlock_iothread();
 
-    /*
-     * Although this ping is just for debug, it could potentially be
-     * used for getting a better measurement of downtime at the source.
-     */
-    qemu_savevm_send_ping(ms->to_dst_file, 4);
+    if (migrate_postcopy_ram()) {
+        /*
+         * Although this ping is just for debug, it could potentially be
+         * used for getting a better measurement of downtime at the source.
+         */
+        qemu_savevm_send_ping(ms->to_dst_file, 4);
+    }
 
     ret = qemu_file_get_error(ms->to_dst_file);
     if (ret) {
@@ -1801,7 +1814,9 @@ static void *migration_thread(void *opaque)
 
         /* And do a ping that will make stuff easier to debug */
         qemu_savevm_send_ping(s->to_dst_file, 1);
+    }
 
+    if (migrate_postcopy()) {
         /*
          * Tell the destination that we *might* want to do postcopy later;
          * if the other end can't do postcopy it should fail now, nice and
@@ -1835,7 +1850,7 @@ static void *migration_thread(void *opaque)
             if (pending_size && pending_size >= max_size) {
                 /* Still a significant amount to transfer */
 
-                if (migrate_postcopy_ram() &&
+                if (migrate_postcopy() &&
                     s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
                     pend_nonpost <= max_size &&
                     atomic_read(&s->start_postcopy)) {
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index a40dddb..aef5690 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -339,7 +339,9 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
     }
 
     postcopy_state_set(POSTCOPY_INCOMING_END);
-    migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    if (migrate_postcopy_ram()) {
+        migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
+    }
 
     if (mis->postcopy_tmp_page) {
         munmap(mis->postcopy_tmp_page, getpagesize());
diff --git a/migration/savevm.c b/migration/savevm.c
index e436cb2..c8a71c8 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -73,7 +73,7 @@ static struct mig_cmd_args {
     [MIG_CMD_INVALID]          = { .len = -1, .name = "INVALID" },
     [MIG_CMD_OPEN_RETURN_PATH] = { .len =  0, .name = "OPEN_RETURN_PATH" },
     [MIG_CMD_PING]             = { .len = sizeof(uint32_t), .name = "PING" },
-    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = 16, .name = "POSTCOPY_ADVISE" },
+    [MIG_CMD_POSTCOPY_ADVISE]  = { .len = -1, .name = "POSTCOPY_ADVISE" },
     [MIG_CMD_POSTCOPY_LISTEN]  = { .len =  0, .name = "POSTCOPY_LISTEN" },
     [MIG_CMD_POSTCOPY_RUN]     = { .len =  0, .name = "POSTCOPY_RUN" },
     [MIG_CMD_POSTCOPY_RAM_DISCARD] = {
@@ -827,12 +827,17 @@ int qemu_savevm_send_packaged(QEMUFile *f, const uint8_t *buf, size_t len)
 /* Send prior to any postcopy transfer */
 void qemu_savevm_send_postcopy_advise(QEMUFile *f)
 {
-    uint64_t tmp[2];
-    tmp[0] = cpu_to_be64(getpagesize());
-    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
+    if (migrate_postcopy_ram()) {
+        uint64_t tmp[2];
+        tmp[0] = cpu_to_be64(getpagesize());
+        tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
 
-    trace_qemu_savevm_send_postcopy_advise();
-    qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
+        trace_qemu_savevm_send_postcopy_advise();
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE,
+                                 16, (uint8_t *)tmp);
+    } else {
+        qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 0, NULL);
+    }
 }
 
 /* Sent prior to starting the destination running in postcopy, discard pages
@@ -1315,6 +1320,10 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
         return -1;
     }
 
+    if (!migrate_postcopy_ram()) {
+        return 0;
+    }
+
     if (!postcopy_ram_supported_by_host()) {
         return -1;
     }
@@ -1515,7 +1524,9 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
          * A rare case, we entered listen without having to do any discards,
          * so do the setup that's normally done at the time of the 1st discard.
          */
-        postcopy_ram_prepare_discard(mis);
+        if (migrate_postcopy_ram()) {
+            postcopy_ram_prepare_discard(mis);
+        }
     }
 
     /*
@@ -1523,8 +1534,10 @@ static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
      * However, at this point the CPU shouldn't be running, and the IO
      * shouldn't be doing anything yet so don't actually expect requests
      */
-    if (postcopy_ram_enable_notify(mis)) {
-        return -1;
+    if (migrate_postcopy_ram()) {
+        if (postcopy_ram_enable_notify(mis)) {
+            return -1;
+        }
     }
 
     if (mis->have_listen_thread) {
-- 
1.8.3.1


end of thread, other threads:[~2017-02-13  9:55 UTC | newest]

Thread overview: 35+ messages
-- links below jump to the message on this page --
2016-02-12 18:00 [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 01/17] migration: add has_postcopy savevm handler Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 02/17] migration: fix ram_save_pending Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 04/17] migration: introduce postcopy-only pending Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 05/17] block: add bdrv_next_dirty_bitmap() Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 06/17] block: add bdrv_dirty_bitmap_enable_successor() Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 07/17] qapi: add dirty-bitmaps migration capability Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 08/17] migration: include migrate_dirty_bitmaps in migrate_postcopy Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 09/17] migration/qemu-file: add qemu_put_counted_string() Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 10/17] migration: add is_active_iterate handler Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 11/17] migration: add postcopy migration of dirty bitmaps Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 12/17] iotests: maintain several vms in test Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 13/17] iotests: add add_incoming_migration to VM class Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 14/17] qapi: add md5 checksum of last dirty bitmap level to query-block Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 15/17] iotests: add default node-name Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 16/17] iotests: add dirty bitmap migration test 117 Vladimir Sementsov-Ogievskiy
2016-02-12 18:00 ` [Qemu-devel] [PATCH 17/17] iotests: add dirty bitmap migration test 170 Vladimir Sementsov-Ogievskiy
2016-02-12 18:10 ` [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
2016-02-29 15:32 ` [Qemu-devel] [PATCH RFC] docs: add dirty bitmap postcopy migration docs Vladimir Sementsov-Ogievskiy
2016-03-14 11:37 ` [Qemu-devel] [PATCH RFC 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
2016-03-14 21:15   ` John Snow
2016-03-25 10:43 ` Vladimir Sementsov-Ogievskiy
2016-11-21 15:29 [Qemu-devel] [PATCH v3 " Vladimir Sementsov-Ogievskiy
2016-11-21 15:29 ` [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy Vladimir Sementsov-Ogievskiy
2016-11-22 17:54 [Qemu-devel] [PATCH v4 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
2016-11-22 17:54 ` [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy Vladimir Sementsov-Ogievskiy
2017-01-24  9:24   ` Juan Quintela
2017-01-24  9:35     ` Vladimir Sementsov-Ogievskiy
2017-01-24 19:53       ` Dr. David Alan Gilbert
2017-01-31 14:04         ` Vladimir Sementsov-Ogievskiy
2017-01-31 15:15           ` Dr. David Alan Gilbert
2017-02-01 11:06         ` Vladimir Sementsov-Ogievskiy
2017-02-03 11:13           ` Vladimir Sementsov-Ogievskiy
2017-02-06 11:30             ` Dr. David Alan Gilbert
2017-02-07 15:05 [Qemu-devel] [PATCH v5 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
2017-02-07 15:05 ` [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy Vladimir Sementsov-Ogievskiy
2017-02-13  9:54 [Qemu-devel] [PATCH v6 00/17] Dirty bitmaps postcopy migration Vladimir Sementsov-Ogievskiy
2017-02-13  9:54 ` [Qemu-devel] [PATCH 03/17] migration: split common postcopy out of ram postcopy Vladimir Sementsov-Ogievskiy
