* [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP
@ 2021-02-04 12:48 Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 01/12] block: push error reporting into bdrv_all_*_snapshot functions Daniel P. Berrangé
                   ` (12 more replies)
  0 siblings, 13 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Markus Armbruster,
	Dr. David Alan Gilbert, Pavel Dovgalyuk, Paolo Bonzini,
	Max Reitz

 v1: https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg00866.html
 v2: https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg07523.html
 v3: https://lists.gnu.org/archive/html/qemu-devel/2020-08/msg07076.html
 v4: https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg05221.html
 v5: https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg00587.html
 v6: https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg02158.html
 v7: https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg06205.html
 v8: https://lists.gnu.org/archive/html/qemu-devel/2020-11/msg06464.html
 v9: https://lists.gnu.org/archive/html/qemu-devel/2021-01/msg05016.html
 vA: https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg00620.html

This series aims to provide a better-designed replacement for the
savevm/loadvm/delvm HMP commands, which despite their flaws continue
to be actively used in the QMP world via the HMP command passthrough
facility.

The main problems addressed are:

 - The logic to pick which disk to store the vmstate in is not
   satisfactory.

   The first block driver state cannot be assumed to be the root disk
   image; it might be the OVMF varstore, and we don't want to store
   vmstate in there.

 - The logic to decide which disks must be snapshotted is hardwired
   to include all writable disks.

   Again with OVMF there might be a writable varstore, but this can be
   raw rather than qcow2 format, and thus unable to be snapshotted.
   While users might wish to snapshot their varstore, in some/many/most
   cases it is entirely unnecessary. Yet users are currently blocked
   from snapshotting their VM at all because of this varstore.

 - The commands execute synchronously, blocking the monitor, and
   return errors immediately.

   This is partially addressed by integrating with the job framework,
   which requires the client to use the asynchronous job commands to
   determine the completion status or error message of the operations
   (see the sketch just below this list).
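
To make the intended interaction concrete, here is a rough sketch of
the flow a QMP client would follow with the snapshot-save command
introduced later in this series. The argument names, job type and the
"disk0" node name are illustrative assumptions rather than a normative
transcript, and events are shown without their timestamps:

  -> { "execute": "snapshot-save",
       "arguments": { "job-id": "snapsave0",
                      "tag": "my-snap",
                      "vmstate": "disk0",
                      "devices": [ "disk0" ] } }
  <- { "return": {} }
  <- { "event": "JOB_STATUS_CHANGE",
       "data": { "id": "snapsave0", "status": "created" } }
     ... further JOB_STATUS_CHANGE events ...
  <- { "event": "JOB_STATUS_CHANGE",
       "data": { "id": "snapsave0", "status": "concluded" } }
  -> { "execute": "query-jobs" }
  <- { "return": [ { "id": "snapsave0", "type": "snapshot-save",
                     "status": "concluded" } ] }
  -> { "execute": "job-dismiss", "arguments": { "id": "snapsave0" } }

A failed operation would instead carry an "error" string in the
query-jobs reply, which is how the client retrieves the detailed
error message.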

In the block code I've only dealt with node names for block devices, as
IIUC, this is all that libvirt should need in the -blockdev world it now
lives in. IOW, I've made no attempt to cope with people wanting to use
these QMP commands in combination with -drive args, as libvirt will
never use -drive with a QEMU new enough to have these new commands.

The main limitations of the current implementation are:

 - The snapshot process runs serialized in the main thread, i.e. QEMU
   guest execution is blocked for the duration. The job framework
   lets us fix this in future without changing the QMP semantics
   exposed to the apps.

 - Most vmstate loading errors just go to stderr, as they are not
   using Error **errp reporting. Thus the job framework just
   reports a fairly generic message:

     "Error -22 while loading VM state"

   Again this can be fixed later without changing the QMP semantics
   exposed to apps.

I've done some minimal work in libvirt to start to make use of the new
commands to validate their functionality, but this isn't finished yet.

My ultimate goal is to make the GNOME Boxes maintainer happy again by
having internal snapshots work with OVMF:

  https://gitlab.gnome.org/GNOME/gnome-boxes/-/commit/c486da262f6566326fbcb5ef45c5f64048f16a6e

Changed in v11:

 - Add missing docs for events for snapshot-delete
 - Fix mistaken operation name in snapshot-delete docs

Changed in v10:

 - Fix some mis-placed patch chunks
 - Update qapi version number annotations
 - Move iotests to new naming scheme
 - Fix shell based iotests in tests/qemu-iotests/tests subdir
 - Expand QAPI examples
 - Remove bogus submodule commit update
 - Optimize shell pattern matching code
 - Misc other typo/whitespace fixes

Changed in v9:

 - Rebase to git master to resolve conflicts
 - Fixed accidental regression in error handling in previous v8
 - Fixed formatting of iotest expected output now that we switched
   to preserving whitespace in QMP input

Changed in v8:

 - Rebase to git master to resolve conflicts
 - Updated QAPI since versions to 6.0

Changed in v7:

 - Incorporate changes from:

     https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg03165.html

 - Tweaked error message

Changed in v6:

 - Resolve many conflicts with recent replay changes
 - Misc typos in QAPI

Changed in v5:

 - Fix prevention of tag overwriting
 - Refactor and expand test suite coverage to validate
   more negative scenarios

Changed in v4:

 - Make the device lists mandatory, dropping all support for
   QEMU's built-in heuristics to select devices.

 - Improve some error reporting and I/O test coverage

Changed in v3:

 - Schedule a bottom half to escape from coroutine context in
   the jobs. This is needed because the locking in the snapshot
   code goes horribly wrong when run from a background coroutine
   instead of the main event thread.

 - Re-factor the way we iterate over devices, so that we correctly
   report non-existent devices passed by the user over QMP.

 - Add QAPI docs notes about limitations wrt vmstate error
   reporting (it all goes to stderr, not an Error **errp)
   so QMP only gets a fairly generic error message currently.

 - Add I/O test to validate many usage scenarios / errors

 - Add I/O test helpers to handle QMP events with a deterministic
   ordering

 - Ensure 'snapshot-delete' reports an error if requesting
   delete from devices that don't support snapshots, instead of
   silently succeeding with no error.

Changed in v2:

 - Use new command names "snapshot-{load,save,delete}" to make it
   clear that these are different from the "savevm|loadvm|delvm"
   as they use the Job framework

 - Use an include list for block devs, not an exclude list

Daniel P. Berrangé (11):
  block: push error reporting into bdrv_all_*_snapshot functions
  migration: stop returning errno from load_snapshot()
  block: add ability to specify list of blockdevs during snapshot
  block: allow specifying name of block device for vmstate storage
  block: rename and alter bdrv_all_find_snapshot semantics
  migration: control whether snapshots are overwritten
  migration: wire up support for snapshot device selection
  migration: introduce a delete_snapshot wrapper
  iotests: add support for capturing and matching QMP events
  iotests: fix loading of common.config from tests/ subdir
  migration: introduce snapshot-{save,load,delete} QMP commands

Philippe Mathieu-Daudé (1):
  migration: Make save_snapshot() return bool, not 0/-1

 block/monitor/block-hmp-cmds.c                |   7 +-
 block/snapshot.c                              | 256 ++++++---
 include/block/snapshot.h                      |  23 +-
 include/migration/snapshot.h                  |  47 +-
 migration/savevm.c                            | 296 ++++++++--
 monitor/hmp-cmds.c                            |  12 +-
 qapi/job.json                                 |   9 +-
 qapi/migration.json                           | 173 ++++++
 replay/replay-debugging.c                     |  12 +-
 replay/replay-snapshot.c                      |   5 +-
 softmmu/vl.c                                  |   2 +-
 tests/qemu-iotests/267.out                    |  12 +-
 tests/qemu-iotests/common.qemu                | 106 +++-
 tests/qemu-iotests/common.rc                  |  10 +-
 .../tests/internal-snapshots-qapi             | 386 +++++++++++++
 .../tests/internal-snapshots-qapi.out         | 520 ++++++++++++++++++
 16 files changed, 1721 insertions(+), 155 deletions(-)
 create mode 100755 tests/qemu-iotests/tests/internal-snapshots-qapi
 create mode 100644 tests/qemu-iotests/tests/internal-snapshots-qapi.out

-- 
2.29.2





* [PATCH v11 01/12] block: push error reporting into bdrv_all_*_snapshot functions
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 02/12] migration: Make save_snapshot() return bool, not 0/-1 Daniel P. Berrangé
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Philippe Mathieu-Daudé,
	Markus Armbruster, Dr. David Alan Gilbert, Pavel Dovgalyuk,
	Paolo Bonzini, Max Reitz

The bdrv_all_*_snapshot functions return a BlockDriverState pointer
for the invalid backend, which the callers then use to report an
error message. In some cases multiple callers report the same error,
but with slightly different text. In the future there will be more
error scenarios for some of these methods, which will benefit from
fine-grained error message reporting. So it is helpful to push error
reporting down a level.

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
[PMD: Initialize variables]
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 block/monitor/block-hmp-cmds.c |  7 ++--
 block/snapshot.c               | 77 +++++++++++++++++-----------------
 include/block/snapshot.h       | 14 +++----
 migration/savevm.c             | 39 +++++------------
 monitor/hmp-cmds.c             |  7 +---
 replay/replay-debugging.c      |  4 +-
 tests/qemu-iotests/267.out     | 10 ++---
 7 files changed, 68 insertions(+), 90 deletions(-)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index afd75ab628..9532d085ea 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -900,10 +900,11 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
 
     ImageEntry *image_entry, *next_ie;
     SnapshotEntry *snapshot_entry;
+    Error *err = NULL;
 
-    bs = bdrv_all_find_vmstate_bs();
+    bs = bdrv_all_find_vmstate_bs(&err);
     if (!bs) {
-        monitor_printf(mon, "No available block device supports snapshots\n");
+        error_report_err(err);
         return;
     }
     aio_context = bdrv_get_aio_context(bs);
@@ -953,7 +954,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     total = 0;
     for (i = 0; i < nb_sns; i++) {
         SnapshotEntry *next_sn;
-        if (bdrv_all_find_snapshot(sn_tab[i].name, &bs1) == 0) {
+        if (bdrv_all_find_snapshot(sn_tab[i].name, NULL) == 0) {
             global_snapshots[total] = i;
             total++;
             QTAILQ_FOREACH(image_entry, &image_list, next) {
diff --git a/block/snapshot.c b/block/snapshot.c
index a2bf3a54eb..482e3fc7b7 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -462,14 +462,14 @@ static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
  * These functions will properly handle dataplane (take aio_context_acquire
  * when appropriate for appropriate block drivers) */
 
-bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs)
+bool bdrv_all_can_snapshot(Error **errp)
 {
-    bool ok = true;
     BlockDriverState *bs;
     BdrvNextIterator it;
 
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        bool ok = true;
 
         aio_context_acquire(ctx);
         if (bdrv_all_snapshots_includes_bs(bs)) {
@@ -477,26 +477,25 @@ bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs)
         }
         aio_context_release(ctx);
         if (!ok) {
+            error_setg(errp, "Device '%s' is writable but does not support "
+                       "snapshots", bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return false;
         }
     }
 
-fail:
-    *first_bad_bs = bs;
-    return ok;
+    return true;
 }
 
-int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bad_bs,
-                             Error **errp)
+int bdrv_all_delete_snapshot(const char *name, Error **errp)
 {
-    int ret = 0;
     BlockDriverState *bs;
     BdrvNextIterator it;
     QEMUSnapshotInfo sn1, *snapshot = &sn1;
 
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        int ret = 0;
 
         aio_context_acquire(ctx);
         if (bdrv_all_snapshots_includes_bs(bs) &&
@@ -507,26 +506,25 @@ int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bad_bs,
         }
         aio_context_release(ctx);
         if (ret < 0) {
+            error_prepend(errp, "Could not delete snapshot '%s' on '%s': ",
+                          name, bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return -1;
         }
     }
 
-fail:
-    *first_bad_bs = bs;
-    return ret;
+    return 0;
 }
 
 
-int bdrv_all_goto_snapshot(const char *name, BlockDriverState **first_bad_bs,
-                           Error **errp)
+int bdrv_all_goto_snapshot(const char *name, Error **errp)
 {
-    int ret = 0;
     BlockDriverState *bs;
     BdrvNextIterator it;
 
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        int ret = 0;
 
         aio_context_acquire(ctx);
         if (bdrv_all_snapshots_includes_bs(bs)) {
@@ -534,75 +532,75 @@ int bdrv_all_goto_snapshot(const char *name, BlockDriverState **first_bad_bs,
         }
         aio_context_release(ctx);
         if (ret < 0) {
+            error_prepend(errp, "Could not load snapshot '%s' on '%s': ",
+                          name, bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return -1;
         }
     }
 
-fail:
-    *first_bad_bs = bs;
-    return ret;
+    return 0;
 }
 
-int bdrv_all_find_snapshot(const char *name, BlockDriverState **first_bad_bs)
+int bdrv_all_find_snapshot(const char *name, Error **errp)
 {
     QEMUSnapshotInfo sn;
-    int err = 0;
     BlockDriverState *bs;
     BdrvNextIterator it;
 
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        int ret = 0;
 
         aio_context_acquire(ctx);
         if (bdrv_all_snapshots_includes_bs(bs)) {
-            err = bdrv_snapshot_find(bs, &sn, name);
+            ret = bdrv_snapshot_find(bs, &sn, name);
         }
         aio_context_release(ctx);
-        if (err < 0) {
+        if (ret < 0) {
+            error_setg(errp, "Could not find snapshot '%s' on '%s'",
+                       name, bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return -1;
         }
     }
 
-fail:
-    *first_bad_bs = bs;
-    return err;
+    return 0;
 }
 
 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              BlockDriverState *vm_state_bs,
                              uint64_t vm_state_size,
-                             BlockDriverState **first_bad_bs)
+                             Error **errp)
 {
-    int err = 0;
     BlockDriverState *bs;
     BdrvNextIterator it;
 
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        int ret = 0;
 
         aio_context_acquire(ctx);
         if (bs == vm_state_bs) {
             sn->vm_state_size = vm_state_size;
-            err = bdrv_snapshot_create(bs, sn);
+            ret = bdrv_snapshot_create(bs, sn);
         } else if (bdrv_all_snapshots_includes_bs(bs)) {
             sn->vm_state_size = 0;
-            err = bdrv_snapshot_create(bs, sn);
+            ret = bdrv_snapshot_create(bs, sn);
         }
         aio_context_release(ctx);
-        if (err < 0) {
+        if (ret < 0) {
+            error_setg(errp, "Could not create snapshot '%s' on '%s'",
+                       sn->name, bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return -1;
         }
     }
 
-fail:
-    *first_bad_bs = bs;
-    return err;
+    return 0;
 }
 
-BlockDriverState *bdrv_all_find_vmstate_bs(void)
+BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp)
 {
     BlockDriverState *bs;
     BdrvNextIterator it;
@@ -620,5 +618,8 @@ BlockDriverState *bdrv_all_find_vmstate_bs(void)
             break;
         }
     }
+    if (!bs) {
+        error_setg(errp, "No block device supports snapshots");
+    }
     return bs;
 }
diff --git a/include/block/snapshot.h b/include/block/snapshot.h
index b0fe42993d..5cb2b696ad 100644
--- a/include/block/snapshot.h
+++ b/include/block/snapshot.h
@@ -77,17 +77,15 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
  * These functions will properly handle dataplane (take aio_context_acquire
  * when appropriate for appropriate block drivers */
 
-bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs);
-int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bsd_bs,
-                             Error **errp);
-int bdrv_all_goto_snapshot(const char *name, BlockDriverState **first_bad_bs,
-                           Error **errp);
-int bdrv_all_find_snapshot(const char *name, BlockDriverState **first_bad_bs);
+bool bdrv_all_can_snapshot(Error **errp);
+int bdrv_all_delete_snapshot(const char *name, Error **errp);
+int bdrv_all_goto_snapshot(const char *name, Error **errp);
+int bdrv_all_find_snapshot(const char *name, Error **errp);
 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              BlockDriverState *vm_state_bs,
                              uint64_t vm_state_size,
-                             BlockDriverState **first_bad_bs);
+                             Error **errp);
 
-BlockDriverState *bdrv_all_find_vmstate_bs(void);
+BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp);
 
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 4f3b69ecfc..4a7237337e 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2731,7 +2731,7 @@ int qemu_load_device_state(QEMUFile *f)
 
 int save_snapshot(const char *name, Error **errp)
 {
-    BlockDriverState *bs, *bs1;
+    BlockDriverState *bs;
     QEMUSnapshotInfo sn1, *sn = &sn1;
     int ret = -1, ret2;
     QEMUFile *f;
@@ -2751,25 +2751,19 @@ int save_snapshot(const char *name, Error **errp)
         return ret;
     }
 
-    if (!bdrv_all_can_snapshot(&bs)) {
-        error_setg(errp, "Device '%s' is writable but does not support "
-                   "snapshots", bdrv_get_device_or_node_name(bs));
+    if (!bdrv_all_can_snapshot(errp)) {
         return ret;
     }
 
     /* Delete old snapshots of the same name */
     if (name) {
-        ret = bdrv_all_delete_snapshot(name, &bs1, errp);
-        if (ret < 0) {
-            error_prepend(errp, "Error while deleting snapshot on device "
-                          "'%s': ", bdrv_get_device_or_node_name(bs1));
+        if (bdrv_all_delete_snapshot(name, errp) < 0) {
             return ret;
         }
     }
 
-    bs = bdrv_all_find_vmstate_bs();
+    bs = bdrv_all_find_vmstate_bs(errp);
     if (bs == NULL) {
-        error_setg(errp, "No block device can accept snapshots");
         return ret;
     }
     aio_context = bdrv_get_aio_context(bs);
@@ -2833,11 +2827,9 @@ int save_snapshot(const char *name, Error **errp)
     aio_context_release(aio_context);
     aio_context = NULL;
 
-    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, &bs);
+    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, errp);
     if (ret < 0) {
-        error_setg(errp, "Error while creating snapshot on '%s'",
-                   bdrv_get_device_or_node_name(bs));
-        bdrv_all_delete_snapshot(sn->name, &bs, NULL);
+        bdrv_all_delete_snapshot(sn->name, NULL);
         goto the_end;
     }
 
@@ -2940,30 +2932,23 @@ void qmp_xen_load_devices_state(const char *filename, Error **errp)
 
 int load_snapshot(const char *name, Error **errp)
 {
-    BlockDriverState *bs, *bs_vm_state;
+    BlockDriverState *bs_vm_state;
     QEMUSnapshotInfo sn;
     QEMUFile *f;
     int ret;
     AioContext *aio_context;
     MigrationIncomingState *mis = migration_incoming_get_current();
 
-    if (!bdrv_all_can_snapshot(&bs)) {
-        error_setg(errp,
-                   "Device '%s' is writable but does not support snapshots",
-                   bdrv_get_device_or_node_name(bs));
+    if (!bdrv_all_can_snapshot(errp)) {
         return -ENOTSUP;
     }
-    ret = bdrv_all_find_snapshot(name, &bs);
+    ret = bdrv_all_find_snapshot(name, errp);
     if (ret < 0) {
-        error_setg(errp,
-                   "Device '%s' does not have the requested snapshot '%s'",
-                   bdrv_get_device_or_node_name(bs), name);
         return ret;
     }
 
-    bs_vm_state = bdrv_all_find_vmstate_bs();
+    bs_vm_state = bdrv_all_find_vmstate_bs(errp);
     if (!bs_vm_state) {
-        error_setg(errp, "No block device supports snapshots");
         return -ENOTSUP;
     }
     aio_context = bdrv_get_aio_context(bs_vm_state);
@@ -2989,10 +2974,8 @@ int load_snapshot(const char *name, Error **errp)
     /* Flush all IO requests so they don't interfere with the new state.  */
     bdrv_drain_all_begin();
 
-    ret = bdrv_all_goto_snapshot(name, &bs, errp);
+    ret = bdrv_all_goto_snapshot(name, errp);
     if (ret < 0) {
-        error_prepend(errp, "Could not load snapshot '%s' on '%s': ",
-                      name, bdrv_get_device_or_node_name(bs));
         goto err_drain;
     }
 
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index a48bc1e904..95fd6eec98 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1146,15 +1146,10 @@ void hmp_savevm(Monitor *mon, const QDict *qdict)
 
 void hmp_delvm(Monitor *mon, const QDict *qdict)
 {
-    BlockDriverState *bs;
     Error *err = NULL;
     const char *name = qdict_get_str(qdict, "name");
 
-    if (bdrv_all_delete_snapshot(name, &bs, &err) < 0) {
-        error_prepend(&err,
-                      "deleting snapshot on device '%s': ",
-                      bdrv_get_device_name(bs));
-    }
+    bdrv_all_delete_snapshot(name, &err);
     hmp_handle_error(mon, err);
 }
 
diff --git a/replay/replay-debugging.c b/replay/replay-debugging.c
index 5ec574724a..3a9b609e62 100644
--- a/replay/replay-debugging.c
+++ b/replay/replay-debugging.c
@@ -148,7 +148,7 @@ static char *replay_find_nearest_snapshot(int64_t icount,
 
     *snapshot_icount = -1;
 
-    bs = bdrv_all_find_vmstate_bs();
+    bs = bdrv_all_find_vmstate_bs(NULL);
     if (!bs) {
         goto fail;
     }
@@ -159,7 +159,7 @@ static char *replay_find_nearest_snapshot(int64_t icount,
     aio_context_release(aio_context);
 
     for (i = 0; i < nb_sns; i++) {
-        if (bdrv_all_find_snapshot(sn_tab[i].name, &bs) == 0) {
+        if (bdrv_all_find_snapshot(sn_tab[i].name, NULL) == 0) {
             if (sn_tab[i].icount != -1ULL
                 && sn_tab[i].icount <= icount
                 && (!nearest || nearest->icount < sn_tab[i].icount)) {
diff --git a/tests/qemu-iotests/267.out b/tests/qemu-iotests/267.out
index 27471ffae8..6149029b25 100644
--- a/tests/qemu-iotests/267.out
+++ b/tests/qemu-iotests/267.out
@@ -6,9 +6,9 @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
 Testing:
 QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
-Error: No block device can accept snapshots
+Error: No block device supports snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: No block device supports snapshots
 (qemu) quit
@@ -22,7 +22,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'none0' is writable but does not support snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: Device 'none0' is writable but does not support snapshots
 (qemu) quit
@@ -58,7 +58,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'virtio0' is writable but does not support snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: Device 'virtio0' is writable but does not support snapshots
 (qemu) quit
@@ -83,7 +83,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'file' is writable but does not support snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: Device 'file' is writable but does not support snapshots
 (qemu) quit
-- 
2.29.2




* [PATCH v11 02/12] migration: Make save_snapshot() return bool, not 0/-1
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 01/12] block: push error reporting into bdrv_all_*_snapshot functions Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 03/12] migration: stop returning errno from load_snapshot() Daniel P. Berrangé
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Pavel Dovgalyuk,
	qemu-block, Juan Quintela, John Snow, Philippe Mathieu-Daudé,
	Markus Armbruster, Dr. David Alan Gilbert, Pavel Dovgalyuk,
	Paolo Bonzini, Max Reitz

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Just for consistency, following the example documented since
commit e3fe3988d7 ("error: Document Error API usage rules"),
return a boolean value indicating whether an error is set or not.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Pavel Dovgalyuk <pavel.dovgalyuk@ispras.ru>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/migration/snapshot.h |  9 ++++++++-
 migration/savevm.c           | 16 ++++++++--------
 replay/replay-debugging.c    |  2 +-
 replay/replay-snapshot.c     |  2 +-
 4 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h
index c85b6ec75b..0eaf1ba0b1 100644
--- a/include/migration/snapshot.h
+++ b/include/migration/snapshot.h
@@ -15,7 +15,14 @@
 #ifndef QEMU_MIGRATION_SNAPSHOT_H
 #define QEMU_MIGRATION_SNAPSHOT_H
 
-int save_snapshot(const char *name, Error **errp);
+/**
+ * save_snapshot: Save an internal snapshot.
+ * @name: name of internal snapshot
+ * @errp: pointer to error object
+ * On success, return %true.
+ * On failure, store an error through @errp and return %false.
+ */
+bool save_snapshot(const char *name, Error **errp);
 int load_snapshot(const char *name, Error **errp);
 
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 4a7237337e..ef7963f6c9 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2729,7 +2729,7 @@ int qemu_load_device_state(QEMUFile *f)
     return 0;
 }
 
-int save_snapshot(const char *name, Error **errp)
+bool save_snapshot(const char *name, Error **errp)
 {
     BlockDriverState *bs;
     QEMUSnapshotInfo sn1, *sn = &sn1;
@@ -2742,29 +2742,29 @@ int save_snapshot(const char *name, Error **errp)
     AioContext *aio_context;
 
     if (migration_is_blocked(errp)) {
-        return ret;
+        return false;
     }
 
     if (!replay_can_snapshot()) {
         error_setg(errp, "Record/replay does not allow making snapshot "
                    "right now. Try once more later.");
-        return ret;
+        return false;
     }
 
     if (!bdrv_all_can_snapshot(errp)) {
-        return ret;
+        return false;
     }
 
     /* Delete old snapshots of the same name */
     if (name) {
         if (bdrv_all_delete_snapshot(name, errp) < 0) {
-            return ret;
+            return false;
         }
     }
 
     bs = bdrv_all_find_vmstate_bs(errp);
     if (bs == NULL) {
-        return ret;
+        return false;
     }
     aio_context = bdrv_get_aio_context(bs);
 
@@ -2773,7 +2773,7 @@ int save_snapshot(const char *name, Error **errp)
     ret = global_state_store();
     if (ret) {
         error_setg(errp, "Error saving global state");
-        return ret;
+        return false;
     }
     vm_stop(RUN_STATE_SAVE_VM);
 
@@ -2845,7 +2845,7 @@ int save_snapshot(const char *name, Error **errp)
     if (saved_vm_running) {
         vm_start();
     }
-    return ret;
+    return ret == 0;
 }
 
 void qmp_xen_save_devices_state(const char *filename, bool has_live, bool live,
diff --git a/replay/replay-debugging.c b/replay/replay-debugging.c
index 3a9b609e62..8e0050915d 100644
--- a/replay/replay-debugging.c
+++ b/replay/replay-debugging.c
@@ -323,7 +323,7 @@ void replay_gdb_attached(void)
      */
     if (replay_mode == REPLAY_MODE_PLAY
         && !replay_snapshot) {
-        if (save_snapshot("start_debugging", NULL) != 0) {
+        if (!save_snapshot("start_debugging", NULL)) {
             /* Can't create the snapshot. Continue conventional debugging. */
         }
     }
diff --git a/replay/replay-snapshot.c b/replay/replay-snapshot.c
index e26fa4c892..4f2560d156 100644
--- a/replay/replay-snapshot.c
+++ b/replay/replay-snapshot.c
@@ -77,7 +77,7 @@ void replay_vmstate_init(void)
 
     if (replay_snapshot) {
         if (replay_mode == REPLAY_MODE_RECORD) {
-            if (save_snapshot(replay_snapshot, &err) != 0) {
+            if (!save_snapshot(replay_snapshot, &err)) {
                 error_report_err(err);
                 error_report("Could not create snapshot for icount record");
                 exit(1);
-- 
2.29.2




* [PATCH v11 03/12] migration: stop returning errno from load_snapshot()
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 01/12] block: push error reporting into bdrv_all_*_snapshot functions Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 02/12] migration: Make save_snapshot() return bool, not 0/-1 Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 04/12] block: add ability to specify list of blockdevs during snapshot Daniel P. Berrangé
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Philippe Mathieu-Daudé,
	Markus Armbruster, Dr. David Alan Gilbert, Pavel Dovgalyuk,
	Pavel Dovgalyuk, Paolo Bonzini, Max Reitz

None of the callers care about the errno value since there is a full
Error object populated. This gives consistency with save_snapshot(),
which already just returns a boolean value.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
[PMD: Return false/true instead of -1/0, document function]
Acked-by: Pavel Dovgalyuk <pavel.dovgalyuk@ispras.ru>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/migration/snapshot.h | 10 +++++++++-
 migration/savevm.c           | 19 +++++++++----------
 monitor/hmp-cmds.c           |  2 +-
 replay/replay-snapshot.c     |  2 +-
 softmmu/vl.c                 |  2 +-
 5 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h
index 0eaf1ba0b1..d7d210820c 100644
--- a/include/migration/snapshot.h
+++ b/include/migration/snapshot.h
@@ -23,6 +23,14 @@
  * On failure, store an error through @errp and return %false.
  */
 bool save_snapshot(const char *name, Error **errp);
-int load_snapshot(const char *name, Error **errp);
+
+/**
+ * load_snapshot: Load an internal snapshot.
+ * @name: name of internal snapshot
+ * @errp: pointer to error object
+ * On success, return %true.
+ * On failure, store an error through @errp and return %false.
+ */
+bool load_snapshot(const char *name, Error **errp);
 
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index ef7963f6c9..e6972b56b3 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2930,7 +2930,7 @@ void qmp_xen_load_devices_state(const char *filename, Error **errp)
     migration_incoming_state_destroy();
 }
 
-int load_snapshot(const char *name, Error **errp)
+bool load_snapshot(const char *name, Error **errp)
 {
     BlockDriverState *bs_vm_state;
     QEMUSnapshotInfo sn;
@@ -2940,16 +2940,16 @@ int load_snapshot(const char *name, Error **errp)
     MigrationIncomingState *mis = migration_incoming_get_current();
 
     if (!bdrv_all_can_snapshot(errp)) {
-        return -ENOTSUP;
+        return false;
     }
     ret = bdrv_all_find_snapshot(name, errp);
     if (ret < 0) {
-        return ret;
+        return false;
     }
 
     bs_vm_state = bdrv_all_find_vmstate_bs(errp);
     if (!bs_vm_state) {
-        return -ENOTSUP;
+        return false;
     }
     aio_context = bdrv_get_aio_context(bs_vm_state);
 
@@ -2958,11 +2958,11 @@ int load_snapshot(const char *name, Error **errp)
     ret = bdrv_snapshot_find(bs_vm_state, &sn, name);
     aio_context_release(aio_context);
     if (ret < 0) {
-        return ret;
+        return false;
     } else if (sn.vm_state_size == 0) {
         error_setg(errp, "This is a disk-only snapshot. Revert to it "
                    " offline using qemu-img");
-        return -EINVAL;
+        return false;
     }
 
     /*
@@ -2983,7 +2983,6 @@ int load_snapshot(const char *name, Error **errp)
     f = qemu_fopen_bdrv(bs_vm_state, 0);
     if (!f) {
         error_setg(errp, "Could not open VM state file");
-        ret = -EINVAL;
         goto err_drain;
     }
 
@@ -3003,14 +3002,14 @@ int load_snapshot(const char *name, Error **errp)
 
     if (ret < 0) {
         error_setg(errp, "Error %d while loading VM state", ret);
-        return ret;
+        return false;
     }
 
-    return 0;
+    return true;
 
 err_drain:
     bdrv_drain_all_end();
-    return ret;
+    return false;
 }
 
 void vmstate_register_ram(MemoryRegion *mr, DeviceState *dev)
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 95fd6eec98..8022e52b28 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1130,7 +1130,7 @@ void hmp_loadvm(Monitor *mon, const QDict *qdict)
 
     vm_stop(RUN_STATE_RESTORE_VM);
 
-    if (load_snapshot(name, &err) == 0 && saved_vm_running) {
+    if (!load_snapshot(name, &err) && saved_vm_running) {
         vm_start();
     }
     hmp_handle_error(mon, err);
diff --git a/replay/replay-snapshot.c b/replay/replay-snapshot.c
index 4f2560d156..b289365937 100644
--- a/replay/replay-snapshot.c
+++ b/replay/replay-snapshot.c
@@ -83,7 +83,7 @@ void replay_vmstate_init(void)
                 exit(1);
             }
         } else if (replay_mode == REPLAY_MODE_PLAY) {
-            if (load_snapshot(replay_snapshot, &err) != 0) {
+            if (!load_snapshot(replay_snapshot, &err)) {
                 error_report_err(err);
                 error_report("Could not load snapshot for icount replay");
                 exit(1);
diff --git a/softmmu/vl.c b/softmmu/vl.c
index bd55468669..8f655086b7 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -2529,7 +2529,7 @@ void qmp_x_exit_preconfig(Error **errp)
 
     if (loadvm) {
         Error *local_err = NULL;
-        if (load_snapshot(loadvm, &local_err) < 0) {
+        if (!load_snapshot(loadvm, &local_err)) {
             error_report_err(local_err);
             autostart = 0;
             exit(1);
-- 
2.29.2




* [PATCH v11 04/12] block: add ability to specify list of blockdevs during snapshot
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (2 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 03/12] migration: stop returning errno from load_snapshot() Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 05/12] block: allow specifying name of block device for vmstate storage Daniel P. Berrangé
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Markus Armbruster,
	Dr. David Alan Gilbert, Pavel Dovgalyuk, Paolo Bonzini,
	Max Reitz

When running snapshot operations, there are various rules for which
blockdevs are included/excluded. While this provides reasonable default
behaviour, there are scenarios that are not well handled by the default
logic. Some of the conditions do not have a single correct answer.

Thus there needs to be a way for the mgmt app to provide an explicit
list of blockdevs to perform snapshots across. This can be achieved
by passing a list of node names that should be used.
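
For illustration, once the QMP commands later in this series are wired
up to this block-layer support, an explicit device list could be given
along these lines (a sketch only; "rootdisk" is a hypothetical node
name and the exact wire format follows the later patches):

  -> { "execute": "snapshot-delete",
       "arguments": { "job-id": "snapdelete0",
                      "tag": "my-snap",
                      "devices": [ "rootdisk" ] } }
  <- { "return": {} }

Any node not listed in "devices" is simply excluded from the
operation, which is how e.g. a raw-format UEFI varstore can be left
out when saving or loading a snapshot.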

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 block/monitor/block-hmp-cmds.c |   4 +-
 block/snapshot.c               | 172 ++++++++++++++++++++++++---------
 include/block/snapshot.h       |  22 +++--
 migration/savevm.c             |  18 ++--
 monitor/hmp-cmds.c             |   2 +-
 replay/replay-debugging.c      |   4 +-
 6 files changed, 159 insertions(+), 63 deletions(-)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index 9532d085ea..e15121be1f 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -902,7 +902,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     SnapshotEntry *snapshot_entry;
     Error *err = NULL;
 
-    bs = bdrv_all_find_vmstate_bs(&err);
+    bs = bdrv_all_find_vmstate_bs(false, NULL, &err);
     if (!bs) {
         error_report_err(err);
         return;
@@ -954,7 +954,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     total = 0;
     for (i = 0; i < nb_sns; i++) {
         SnapshotEntry *next_sn;
-        if (bdrv_all_find_snapshot(sn_tab[i].name, NULL) == 0) {
+        if (bdrv_all_find_snapshot(sn_tab[i].name, false, NULL, NULL) == 0) {
             global_snapshots[total] = i;
             total++;
             QTAILQ_FOREACH(image_entry, &image_list, next) {
diff --git a/block/snapshot.c b/block/snapshot.c
index 482e3fc7b7..220173deae 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -447,6 +447,41 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
     return ret;
 }
 
+
+static int bdrv_all_get_snapshot_devices(bool has_devices, strList *devices,
+                                         GList **all_bdrvs,
+                                         Error **errp)
+{
+    g_autoptr(GList) bdrvs = NULL;
+
+    if (has_devices) {
+        if (!devices) {
+            error_setg(errp, "At least one device is required for snapshot");
+            return -1;
+        }
+
+        while (devices) {
+            BlockDriverState *bs = bdrv_find_node(devices->value);
+            if (!bs) {
+                error_setg(errp, "No block device node '%s'", devices->value);
+                return -1;
+            }
+            bdrvs = g_list_append(bdrvs, bs);
+            devices = devices->next;
+        }
+    } else {
+        BlockDriverState *bs;
+        BdrvNextIterator it;
+        for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+            bdrvs = g_list_append(bdrvs, bs);
+        }
+    }
+
+    *all_bdrvs = g_steal_pointer(&bdrvs);
+    return 0;
+}
+
+
 static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
 {
     if (!bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
@@ -462,43 +497,59 @@ static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
  * These functions will properly handle dataplane (take aio_context_acquire
  * when appropriate for appropriate block drivers) */
 
-bool bdrv_all_can_snapshot(Error **errp)
+bool bdrv_all_can_snapshot(bool has_devices, strList *devices,
+                           Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;
 
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return false;
+    }
+
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
         bool ok = true;
 
         aio_context_acquire(ctx);
-        if (bdrv_all_snapshots_includes_bs(bs)) {
+        if (devices || bdrv_all_snapshots_includes_bs(bs)) {
             ok = bdrv_can_snapshot(bs);
         }
         aio_context_release(ctx);
         if (!ok) {
             error_setg(errp, "Device '%s' is writable but does not support "
                        "snapshots", bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return false;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }
 
     return true;
 }
 
-int bdrv_all_delete_snapshot(const char *name, Error **errp)
+int bdrv_all_delete_snapshot(const char *name,
+                             bool has_devices, strList *devices,
+                             Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
-    QEMUSnapshotInfo sn1, *snapshot = &sn1;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;
+
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return -1;
+    }
 
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
+        QEMUSnapshotInfo sn1, *snapshot = &sn1;
         int ret = 0;
 
         aio_context_acquire(ctx);
-        if (bdrv_all_snapshots_includes_bs(bs) &&
+        if ((devices || bdrv_all_snapshots_includes_bs(bs)) &&
             bdrv_snapshot_find(bs, snapshot, name) >= 0)
         {
             ret = bdrv_snapshot_delete(bs, snapshot->id_str,
@@ -508,61 +559,80 @@ int bdrv_all_delete_snapshot(const char *name, Error **errp)
         if (ret < 0) {
             error_prepend(errp, "Could not delete snapshot '%s' on '%s': ",
                           name, bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return -1;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }
 
     return 0;
 }
 
 
-int bdrv_all_goto_snapshot(const char *name, Error **errp)
+int bdrv_all_goto_snapshot(const char *name,
+                           bool has_devices, strList *devices,
+                           Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;
 
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return -1;
+    }
+
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
         int ret = 0;
 
         aio_context_acquire(ctx);
-        if (bdrv_all_snapshots_includes_bs(bs)) {
+        if (devices || bdrv_all_snapshots_includes_bs(bs)) {
             ret = bdrv_snapshot_goto(bs, name, errp);
         }
         aio_context_release(ctx);
         if (ret < 0) {
             error_prepend(errp, "Could not load snapshot '%s' on '%s': ",
                           name, bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return -1;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }
 
     return 0;
 }
 
-int bdrv_all_find_snapshot(const char *name, Error **errp)
+int bdrv_all_find_snapshot(const char *name,
+                           bool has_devices, strList *devices,
+                           Error **errp)
 {
-    QEMUSnapshotInfo sn;
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;
+
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return -1;
+    }
 
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
+        QEMUSnapshotInfo sn;
         int ret = 0;
 
         aio_context_acquire(ctx);
-        if (bdrv_all_snapshots_includes_bs(bs)) {
+        if (devices || bdrv_all_snapshots_includes_bs(bs)) {
             ret = bdrv_snapshot_find(bs, &sn, name);
         }
         aio_context_release(ctx);
         if (ret < 0) {
             error_setg(errp, "Could not find snapshot '%s' on '%s'",
                        name, bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return -1;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }
 
     return 0;
@@ -571,12 +641,19 @@ int bdrv_all_find_snapshot(const char *name, Error **errp)
 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              BlockDriverState *vm_state_bs,
                              uint64_t vm_state_size,
+                             bool has_devices, strList *devices,
                              Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;
+
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return -1;
+    }
 
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
         int ret = 0;
 
@@ -584,7 +661,7 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
         if (bs == vm_state_bs) {
             sn->vm_state_size = vm_state_size;
             ret = bdrv_snapshot_create(bs, sn);
-        } else if (bdrv_all_snapshots_includes_bs(bs)) {
+        } else if (devices || bdrv_all_snapshots_includes_bs(bs)) {
             sn->vm_state_size = 0;
             ret = bdrv_snapshot_create(bs, sn);
         }
@@ -592,34 +669,43 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
         if (ret < 0) {
             error_setg(errp, "Could not create snapshot '%s' on '%s'",
                        sn->name, bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return -1;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }
 
     return 0;
 }
 
-BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp)
+BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
+                                           Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;
 
-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return NULL;
+    }
+
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
-        bool found;
+        bool found = false;
 
         aio_context_acquire(ctx);
-        found = bdrv_all_snapshots_includes_bs(bs) && bdrv_can_snapshot(bs);
+        found = (devices || bdrv_all_snapshots_includes_bs(bs)) &&
+            bdrv_can_snapshot(bs);
         aio_context_release(ctx);
 
         if (found) {
-            bdrv_next_cleanup(&it);
-            break;
+            return bs;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }
-    if (!bs) {
-        error_setg(errp, "No block device supports snapshots");
-    }
-    return bs;
+
+    error_setg(errp, "No block device supports snapshots");
+    return NULL;
 }
diff --git a/include/block/snapshot.h b/include/block/snapshot.h
index 5cb2b696ad..2569a903f2 100644
--- a/include/block/snapshot.h
+++ b/include/block/snapshot.h
@@ -25,7 +25,7 @@
 #ifndef SNAPSHOT_H
 #define SNAPSHOT_H
 
-
+#include "qapi/qapi-builtin-types.h"
 
 #define SNAPSHOT_OPT_BASE       "snapshot."
 #define SNAPSHOT_OPT_ID         "snapshot.id"
@@ -77,15 +77,25 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
  * These functions will properly handle dataplane (take aio_context_acquire
  * when appropriate for appropriate block drivers */
 
-bool bdrv_all_can_snapshot(Error **errp);
-int bdrv_all_delete_snapshot(const char *name, Error **errp);
-int bdrv_all_goto_snapshot(const char *name, Error **errp);
-int bdrv_all_find_snapshot(const char *name, Error **errp);
+bool bdrv_all_can_snapshot(bool has_devices, strList *devices,
+                           Error **errp);
+int bdrv_all_delete_snapshot(const char *name,
+                             bool has_devices, strList *devices,
+                             Error **errp);
+int bdrv_all_goto_snapshot(const char *name,
+                           bool has_devices, strList *devices,
+                           Error **errp);
+int bdrv_all_find_snapshot(const char *name,
+                           bool has_devices, strList *devices,
+                           Error **errp);
 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              BlockDriverState *vm_state_bs,
                              uint64_t vm_state_size,
+                             bool has_devices,
+                             strList *devices,
                              Error **errp);
 
-BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp);
+BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
+                                           Error **errp);
 
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index e6972b56b3..90dded91f4 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2751,18 +2751,18 @@ bool save_snapshot(const char *name, Error **errp)
         return false;
     }
 
-    if (!bdrv_all_can_snapshot(errp)) {
+    if (!bdrv_all_can_snapshot(false, NULL, errp)) {
         return false;
     }
 
     /* Delete old snapshots of the same name */
     if (name) {
-        if (bdrv_all_delete_snapshot(name, errp) < 0) {
+        if (bdrv_all_delete_snapshot(name, false, NULL, errp) < 0) {
             return false;
         }
     }
 
-    bs = bdrv_all_find_vmstate_bs(errp);
+    bs = bdrv_all_find_vmstate_bs(false, NULL, errp);
     if (bs == NULL) {
         return false;
     }
@@ -2827,9 +2827,9 @@ bool save_snapshot(const char *name, Error **errp)
     aio_context_release(aio_context);
     aio_context = NULL;
 
-    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, errp);
+    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, false, NULL, errp);
     if (ret < 0) {
-        bdrv_all_delete_snapshot(sn->name, NULL);
+        bdrv_all_delete_snapshot(sn->name, false, NULL, NULL);
         goto the_end;
     }
 
@@ -2939,15 +2939,15 @@ bool load_snapshot(const char *name, Error **errp)
     AioContext *aio_context;
     MigrationIncomingState *mis = migration_incoming_get_current();
 
-    if (!bdrv_all_can_snapshot(errp)) {
+    if (!bdrv_all_can_snapshot(false, NULL, errp)) {
         return false;
     }
-    ret = bdrv_all_find_snapshot(name, errp);
+    ret = bdrv_all_find_snapshot(name, false, NULL, errp);
     if (ret < 0) {
         return false;
     }
 
-    bs_vm_state = bdrv_all_find_vmstate_bs(errp);
+    bs_vm_state = bdrv_all_find_vmstate_bs(false, NULL, errp);
     if (!bs_vm_state) {
         return false;
     }
@@ -2974,7 +2974,7 @@ bool load_snapshot(const char *name, Error **errp)
     /* Flush all IO requests so they don't interfere with the new state.  */
     bdrv_drain_all_begin();
 
-    ret = bdrv_all_goto_snapshot(name, errp);
+    ret = bdrv_all_goto_snapshot(name, false, NULL, errp);
     if (ret < 0) {
         goto err_drain;
     }
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 8022e52b28..d382918b23 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1149,7 +1149,7 @@ void hmp_delvm(Monitor *mon, const QDict *qdict)
     Error *err = NULL;
     const char *name = qdict_get_str(qdict, "name");
 
-    bdrv_all_delete_snapshot(name, &err);
+    bdrv_all_delete_snapshot(name, false, NULL, &err);
     hmp_handle_error(mon, err);
 }
 
diff --git a/replay/replay-debugging.c b/replay/replay-debugging.c
index 8e0050915d..67d8237077 100644
--- a/replay/replay-debugging.c
+++ b/replay/replay-debugging.c
@@ -148,7 +148,7 @@ static char *replay_find_nearest_snapshot(int64_t icount,
 
     *snapshot_icount = -1;
 
-    bs = bdrv_all_find_vmstate_bs(NULL);
+    bs = bdrv_all_find_vmstate_bs(false, NULL, NULL);
     if (!bs) {
         goto fail;
     }
@@ -159,7 +159,7 @@ static char *replay_find_nearest_snapshot(int64_t icount,
     aio_context_release(aio_context);
 
     for (i = 0; i < nb_sns; i++) {
-        if (bdrv_all_find_snapshot(sn_tab[i].name, NULL) == 0) {
+        if (bdrv_all_find_snapshot(sn_tab[i].name, false, NULL, NULL) == 0) {
             if (sn_tab[i].icount != -1ULL
                 && sn_tab[i].icount <= icount
                 && (!nearest || nearest->icount < sn_tab[i].icount)) {
-- 
2.29.2




* [PATCH v11 05/12] block: allow specifying name of block device for vmstate storage
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (3 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 04/12] block: add ability to specify list of blockdevs during snapshot Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 06/12] block: rename and alter bdrv_all_find_snapshot semantics Daniel P. Berrangé
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Markus Armbruster,
	Dr. David Alan Gilbert, Pavel Dovgalyuk, Paolo Bonzini,
	Max Reitz

Currently the vmstate will be stored in the first block device that
supports snapshots. Historically this would have usually been the
root device, but with UEFI it might be the variable store. There
needs to be a way to override the choice of block device to store
the state in.
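
For illustration, with the QMP commands added later in this series the
vmstate location can then be pinned explicitly (a sketch; "rootdisk"
and "varstore" are hypothetical node names):

  -> { "execute": "snapshot-save",
       "arguments": { "job-id": "snapsave0",
                      "tag": "my-snap",
                      "vmstate": "rootdisk",
                      "devices": [ "rootdisk", "varstore" ] } }
  <- { "return": {} }

Here "vmstate" selects the node that stores the VM state, so a qcow2
varstore can still be included in "devices" without becoming the
vmstate target.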

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 block/monitor/block-hmp-cmds.c |  2 +-
 block/snapshot.c               | 26 +++++++++++++++++++++++---
 include/block/snapshot.h       |  3 ++-
 migration/savevm.c             |  4 ++--
 replay/replay-debugging.c      |  2 +-
 tests/qemu-iotests/267.out     | 12 ++++++------
 6 files changed, 35 insertions(+), 14 deletions(-)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index e15121be1f..9cc5d4b51e 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -902,7 +902,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     SnapshotEntry *snapshot_entry;
     Error *err = NULL;
 
-    bs = bdrv_all_find_vmstate_bs(false, NULL, &err);
+    bs = bdrv_all_find_vmstate_bs(NULL, false, NULL, &err);
     if (!bs) {
         error_report_err(err);
         return;
diff --git a/block/snapshot.c b/block/snapshot.c
index 220173deae..0b129bee8f 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -678,7 +678,9 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
     return 0;
 }
 
-BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
+
+BlockDriverState *bdrv_all_find_vmstate_bs(const char *vmstate_bs,
+                                           bool has_devices, strList *devices,
                                            Error **errp)
 {
     g_autoptr(GList) bdrvs = NULL;
@@ -699,13 +701,31 @@ BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
             bdrv_can_snapshot(bs);
         aio_context_release(ctx);
 
-        if (found) {
+        if (vmstate_bs) {
+            if (g_str_equal(vmstate_bs,
+                            bdrv_get_node_name(bs))) {
+                if (found) {
+                    return bs;
+                } else {
+                    error_setg(errp,
+                               "vmstate block device '%s' does not support snapshots",
+                               vmstate_bs);
+                    return NULL;
+                }
+            }
+        } else if (found) {
             return bs;
         }
 
         iterbdrvs = iterbdrvs->next;
     }
 
-    error_setg(errp, "No block device supports snapshots");
+    if (vmstate_bs) {
+        error_setg(errp,
+                   "vmstate block device '%s' does not exist", vmstate_bs);
+    } else {
+        error_setg(errp,
+                   "no block device can store vmstate for snapshot");
+    }
     return NULL;
 }
diff --git a/include/block/snapshot.h b/include/block/snapshot.h
index 2569a903f2..8a6a37240d 100644
--- a/include/block/snapshot.h
+++ b/include/block/snapshot.h
@@ -95,7 +95,8 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              strList *devices,
                              Error **errp);
 
-BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
+BlockDriverState *bdrv_all_find_vmstate_bs(const char *vmstate_bs,
+                                           bool has_devices, strList *devices,
                                            Error **errp);
 
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 90dded91f4..1fc4bffe8b 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2762,7 +2762,7 @@ bool save_snapshot(const char *name, Error **errp)
         }
     }
 
-    bs = bdrv_all_find_vmstate_bs(false, NULL, errp);
+    bs = bdrv_all_find_vmstate_bs(NULL, false, NULL, errp);
     if (bs == NULL) {
         return false;
     }
@@ -2947,7 +2947,7 @@ bool load_snapshot(const char *name, Error **errp)
         return false;
     }
 
-    bs_vm_state = bdrv_all_find_vmstate_bs(false, NULL, errp);
+    bs_vm_state = bdrv_all_find_vmstate_bs(NULL, false, NULL, errp);
     if (!bs_vm_state) {
         return false;
     }
diff --git a/replay/replay-debugging.c b/replay/replay-debugging.c
index 67d8237077..ca37cf4025 100644
--- a/replay/replay-debugging.c
+++ b/replay/replay-debugging.c
@@ -148,7 +148,7 @@ static char *replay_find_nearest_snapshot(int64_t icount,
 
     *snapshot_icount = -1;
 
-    bs = bdrv_all_find_vmstate_bs(false, NULL, NULL);
+    bs = bdrv_all_find_vmstate_bs(NULL, false, NULL, NULL);
     if (!bs) {
         goto fail;
     }
diff --git a/tests/qemu-iotests/267.out b/tests/qemu-iotests/267.out
index 6149029b25..7176e376e1 100644
--- a/tests/qemu-iotests/267.out
+++ b/tests/qemu-iotests/267.out
@@ -6,11 +6,11 @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
 Testing:
 QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
-Error: No block device supports snapshots
+Error: no block device can store vmstate for snapshot
 (qemu) info snapshots
-No block device supports snapshots
+no block device can store vmstate for snapshot
 (qemu) loadvm snap0
-Error: No block device supports snapshots
+Error: no block device can store vmstate for snapshot
 (qemu) quit
 
 
@@ -22,7 +22,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'none0' is writable but does not support snapshots
 (qemu) info snapshots
-No block device supports snapshots
+no block device can store vmstate for snapshot
 (qemu) loadvm snap0
 Error: Device 'none0' is writable but does not support snapshots
 (qemu) quit
@@ -58,7 +58,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'virtio0' is writable but does not support snapshots
 (qemu) info snapshots
-No block device supports snapshots
+no block device can store vmstate for snapshot
 (qemu) loadvm snap0
 Error: Device 'virtio0' is writable but does not support snapshots
 (qemu) quit
@@ -83,7 +83,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'file' is writable but does not support snapshots
 (qemu) info snapshots
-No block device supports snapshots
+no block device can store vmstate for snapshot
 (qemu) loadvm snap0
 Error: Device 'file' is writable but does not support snapshots
 (qemu) quit
-- 
2.29.2




* [PATCH v11 06/12] block: rename and alter bdrv_all_find_snapshot semantics
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (4 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 05/12] block: allow specifying name of block device for vmstate storage Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 07/12] migration: control whether snapshots are overwritten Daniel P. Berrangé
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Markus Armbruster,
	Dr. David Alan Gilbert, Pavel Dovgalyuk, Paolo Bonzini,
	Max Reitz

Currently bdrv_all_find_snapshot() returns 0 if it finds a
snapshot, and -1 both when an error occurs and when it fails to
find a snapshot. New callers to be added want to distinguish
between the error scenario and simply not finding a snapshot.

Rename it to bdrv_all_has_snapshot() and make it return -1 on
error, 0 if no snapshot is found and 1 if the snapshot is found.
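
A minimal sketch of how a caller is expected to handle the new
tri-state return value ("snap0" is just an example tag name):

    Error *err = NULL;
    int ret = bdrv_all_has_snapshot("snap0", false, NULL, &err);

    if (ret < 0) {
        /* the check itself failed; err describes why */
        error_report_err(err);
    } else if (ret == 0) {
        /* snapshot is missing on at least one device */
    } else {
        /* ret == 1: snapshot exists on every device checked */
    }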

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 block/monitor/block-hmp-cmds.c |  2 +-
 block/snapshot.c               | 19 ++++++++++++-------
 include/block/snapshot.h       |  6 +++---
 migration/savevm.c             |  7 ++++++-
 replay/replay-debugging.c      |  6 +++++-
 5 files changed, 27 insertions(+), 13 deletions(-)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index 9cc5d4b51e..75d7fa9510 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -954,7 +954,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     total = 0;
     for (i = 0; i < nb_sns; i++) {
         SnapshotEntry *next_sn;
-        if (bdrv_all_find_snapshot(sn_tab[i].name, false, NULL, NULL) == 0) {
+        if (bdrv_all_has_snapshot(sn_tab[i].name, false, NULL, NULL) == 1) {
             global_snapshots[total] = i;
             total++;
             QTAILQ_FOREACH(image_entry, &image_list, next) {
diff --git a/block/snapshot.c b/block/snapshot.c
index 0b129bee8f..e8ae9a28c1 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -603,9 +603,9 @@ int bdrv_all_goto_snapshot(const char *name,
     return 0;
 }
 
-int bdrv_all_find_snapshot(const char *name,
-                           bool has_devices, strList *devices,
-                           Error **errp)
+int bdrv_all_has_snapshot(const char *name,
+                          bool has_devices, strList *devices,
+                          Error **errp)
 {
     g_autoptr(GList) bdrvs = NULL;
     GList *iterbdrvs;
@@ -627,15 +627,20 @@ int bdrv_all_find_snapshot(const char *name,
         }
         aio_context_release(ctx);
         if (ret < 0) {
-            error_setg(errp, "Could not find snapshot '%s' on '%s'",
-                       name, bdrv_get_device_or_node_name(bs));
-            return -1;
+            if (ret == -ENOENT) {
+                return 0;
+            } else {
+                error_setg_errno(errp, errno,
+                                 "Could not check snapshot '%s' on '%s'",
+                                 name, bdrv_get_device_or_node_name(bs));
+                return -1;
+            }
         }
 
         iterbdrvs = iterbdrvs->next;
     }
 
-    return 0;
+    return 1;
 }
 
 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
diff --git a/include/block/snapshot.h b/include/block/snapshot.h
index 8a6a37240d..940345692f 100644
--- a/include/block/snapshot.h
+++ b/include/block/snapshot.h
@@ -85,9 +85,9 @@ int bdrv_all_delete_snapshot(const char *name,
 int bdrv_all_goto_snapshot(const char *name,
                            bool has_devices, strList *devices,
                            Error **errp);
-int bdrv_all_find_snapshot(const char *name,
-                           bool has_devices, strList *devices,
-                           Error **errp);
+int bdrv_all_has_snapshot(const char *name,
+                          bool has_devices, strList *devices,
+                          Error **errp);
 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              BlockDriverState *vm_state_bs,
                              uint64_t vm_state_size,
diff --git a/migration/savevm.c b/migration/savevm.c
index 1fc4bffe8b..5cd3408dfe 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2942,10 +2942,15 @@ bool load_snapshot(const char *name, Error **errp)
     if (!bdrv_all_can_snapshot(false, NULL, errp)) {
         return false;
     }
-    ret = bdrv_all_find_snapshot(name, false, NULL, errp);
+    ret = bdrv_all_has_snapshot(name, false, NULL, errp);
     if (ret < 0) {
         return false;
     }
+    if (ret == 0) {
+        error_setg(errp, "Snapshot '%s' does not exist in one or more devices",
+                   name);
+        return false;
+    }
 
     bs_vm_state = bdrv_all_find_vmstate_bs(NULL, false, NULL, errp);
     if (!bs_vm_state) {
diff --git a/replay/replay-debugging.c b/replay/replay-debugging.c
index ca37cf4025..098ef8e0f5 100644
--- a/replay/replay-debugging.c
+++ b/replay/replay-debugging.c
@@ -143,6 +143,7 @@ static char *replay_find_nearest_snapshot(int64_t icount,
     QEMUSnapshotInfo *sn_tab;
     QEMUSnapshotInfo *nearest = NULL;
     char *ret = NULL;
+    int rv;
     int nb_sns, i;
     AioContext *aio_context;
 
@@ -159,7 +160,10 @@ static char *replay_find_nearest_snapshot(int64_t icount,
     aio_context_release(aio_context);
 
     for (i = 0; i < nb_sns; i++) {
-        if (bdrv_all_find_snapshot(sn_tab[i].name, false, NULL, NULL) == 0) {
+        rv = bdrv_all_has_snapshot(sn_tab[i].name, false, NULL, NULL);
+        if (rv < 0)
+            goto fail;
+        if (rv == 1) {
             if (sn_tab[i].icount != -1ULL
                 && sn_tab[i].icount <= icount
                 && (!nearest || nearest->icount < sn_tab[i].icount)) {
-- 
2.29.2




* [PATCH v11 07/12] migration: control whether snapshots are overwritten
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (5 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 06/12] block: rename and alter bdrv_all_find_snapshot semantics Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 08/12] migration: wire up support for snapshot device selection Daniel P. Berrangé
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Markus Armbruster,
	Dr. David Alan Gilbert, Pavel Dovgalyuk, Paolo Bonzini,
	Max Reitz

The traditional HMP "savevm" command will overwrite an existing
snapshot that has the requested name. The new 'overwrite' flag makes
this behaviour controllable, allowing for safer semantics with a
future QMP command.
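
A sketch of the intended usage ("snap0" is just an example tag): a
caller that must not clobber an existing snapshot passes false, while
HMP "savevm" keeps its historical behaviour by passing true:

    Error *err = NULL;

    /* fails with an error if a snapshot named "snap0" already exists */
    if (!save_snapshot("snap0", false, &err)) {
        error_report_err(err);
    }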

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 include/migration/snapshot.h |  3 ++-
 migration/savevm.c           | 19 ++++++++++++++++---
 monitor/hmp-cmds.c           |  2 +-
 replay/replay-debugging.c    |  2 +-
 replay/replay-snapshot.c     |  2 +-
 5 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h
index d7d210820c..d8c22d343c 100644
--- a/include/migration/snapshot.h
+++ b/include/migration/snapshot.h
@@ -18,11 +18,12 @@
 /**
  * save_snapshot: Save an internal snapshot.
  * @name: name of internal snapshot
+ * @overwrite: replace existing snapshot with @name
  * @errp: pointer to error object
  * On success, return %true.
  * On failure, store an error through @errp and return %false.
  */
-bool save_snapshot(const char *name, Error **errp);
+bool save_snapshot(const char *name, bool overwrite, Error **errp);
 
 /**
  * load_snapshot: Load an internal snapshot.
diff --git a/migration/savevm.c b/migration/savevm.c
index 5cd3408dfe..a98f65c165 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2729,7 +2729,7 @@ int qemu_load_device_state(QEMUFile *f)
     return 0;
 }
 
-bool save_snapshot(const char *name, Error **errp)
+bool save_snapshot(const char *name, bool overwrite, Error **errp)
 {
     BlockDriverState *bs;
     QEMUSnapshotInfo sn1, *sn = &sn1;
@@ -2757,8 +2757,21 @@ bool save_snapshot(const char *name, Error **errp)
 
     /* Delete old snapshots of the same name */
     if (name) {
-        if (bdrv_all_delete_snapshot(name, false, NULL, errp) < 0) {
-            return false;
+        if (overwrite) {
+            if (bdrv_all_delete_snapshot(name, false, NULL, errp) < 0) {
+                return false;
+            }
+        } else {
+            ret2 = bdrv_all_has_snapshot(name, false, NULL, errp);
+            if (ret2 < 0) {
+                return false;
+            }
+            if (ret2 == 1) {
+                error_setg(errp,
+                           "Snapshot '%s' already exists in one or more devices",
+                           name);
+                return false;
+            }
         }
     }
 
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index d382918b23..8a3387b72e 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1140,7 +1140,7 @@ void hmp_savevm(Monitor *mon, const QDict *qdict)
 {
     Error *err = NULL;
 
-    save_snapshot(qdict_get_try_str(qdict, "name"), &err);
+    save_snapshot(qdict_get_try_str(qdict, "name"), true, &err);
     hmp_handle_error(mon, err);
 }
 
diff --git a/replay/replay-debugging.c b/replay/replay-debugging.c
index 098ef8e0f5..0ae6785b3b 100644
--- a/replay/replay-debugging.c
+++ b/replay/replay-debugging.c
@@ -327,7 +327,7 @@ void replay_gdb_attached(void)
      */
     if (replay_mode == REPLAY_MODE_PLAY
         && !replay_snapshot) {
-        if (!save_snapshot("start_debugging", NULL)) {
+        if (!save_snapshot("start_debugging", true, NULL)) {
             /* Can't create the snapshot. Continue conventional debugging. */
         }
     }
diff --git a/replay/replay-snapshot.c b/replay/replay-snapshot.c
index b289365937..31c5a8702b 100644
--- a/replay/replay-snapshot.c
+++ b/replay/replay-snapshot.c
@@ -77,7 +77,7 @@ void replay_vmstate_init(void)
 
     if (replay_snapshot) {
         if (replay_mode == REPLAY_MODE_RECORD) {
-            if (!save_snapshot(replay_snapshot, &err)) {
+            if (!save_snapshot(replay_snapshot, true, &err)) {
                 error_report_err(err);
                 error_report("Could not create snapshot for icount record");
                 exit(1);
-- 
2.29.2




* [PATCH v11 08/12] migration: wire up support for snapshot device selection
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (6 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 07/12] migration: control whether snapshots are overwritten Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 09/12] migration: introduce a delete_snapshot wrapper Daniel P. Berrangé
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Markus Armbruster,
	Dr. David Alan Gilbert, Pavel Dovgalyuk, Paolo Bonzini,
	Max Reitz

Modify load_snapshot/save_snapshot to accept the device list and vmstate
node name parameters previously added to the block layer.
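
An illustrative sketch of the new calling convention; the tag "snap0"
and node name "diskfmt0" are just example values:

    Error *err = NULL;
    strList dev = { .next = NULL, .value = (char *)"diskfmt0" };

    /* snapshot only the "diskfmt0" node and keep the vmstate there too */
    if (!save_snapshot("snap0", false, "diskfmt0", true, &dev, &err)) {
        error_report_err(err);
    }

Passing vmstate=NULL, has_devices=false and devices=NULL preserves the
previous behaviour, which is what the HMP, replay and -loadvm callers
continue to do.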

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 include/migration/snapshot.h | 18 ++++++++++++++++--
 migration/savevm.c           | 30 ++++++++++++++++++------------
 monitor/hmp-cmds.c           |  5 +++--
 replay/replay-debugging.c    |  4 ++--
 replay/replay-snapshot.c     |  5 +++--
 softmmu/vl.c                 |  2 +-
 6 files changed, 43 insertions(+), 21 deletions(-)

diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h
index d8c22d343c..3bdbef435b 100644
--- a/include/migration/snapshot.h
+++ b/include/migration/snapshot.h
@@ -15,23 +15,37 @@
 #ifndef QEMU_MIGRATION_SNAPSHOT_H
 #define QEMU_MIGRATION_SNAPSHOT_H
 
+#include "qapi/qapi-builtin-types.h"
+
 /**
  * save_snapshot: Save an internal snapshot.
  * @name: name of internal snapshot
  * @overwrite: replace existing snapshot with @name
+ * @vmstate: blockdev node name to store VM state in
+ * @has_devices: whether to use explicit device list
+ * @devices: explicit device list to snapshot
  * @errp: pointer to error object
  * On success, return %true.
  * On failure, store an error through @errp and return %false.
  */
-bool save_snapshot(const char *name, bool overwrite, Error **errp);
+bool save_snapshot(const char *name, bool overwrite,
+                   const char *vmstate,
+                   bool has_devices, strList *devices,
+                   Error **errp);
 
 /**
  * load_snapshot: Load an internal snapshot.
  * @name: name of internal snapshot
+ * @vmstate: blockdev node name to load VM state from
+ * @has_devices: whether to use explicit device list
+ * @devices: explicit device list to snapshot
  * @errp: pointer to error object
  * On success, return %true.
  * On failure, store an error through @errp and return %false.
  */
-bool load_snapshot(const char *name, Error **errp);
+bool load_snapshot(const char *name,
+                   const char *vmstate,
+                   bool has_devices, strList *devices,
+                   Error **errp);
 
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index a98f65c165..fde680efc6 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -43,6 +43,8 @@
 #include "qapi/error.h"
 #include "qapi/qapi-commands-migration.h"
 #include "qapi/qmp/json-writer.h"
+#include "qapi/clone-visitor.h"
+#include "qapi/qapi-builtin-visit.h"
 #include "qapi/qmp/qerror.h"
 #include "qemu/error-report.h"
 #include "sysemu/cpus.h"
@@ -2729,7 +2731,8 @@ int qemu_load_device_state(QEMUFile *f)
     return 0;
 }
 
-bool save_snapshot(const char *name, bool overwrite, Error **errp)
+bool save_snapshot(const char *name, bool overwrite, const char *vmstate,
+                  bool has_devices, strList *devices, Error **errp)
 {
     BlockDriverState *bs;
     QEMUSnapshotInfo sn1, *sn = &sn1;
@@ -2751,18 +2754,19 @@ bool save_snapshot(const char *name, bool overwrite, Error **errp)
         return false;
     }
 
-    if (!bdrv_all_can_snapshot(false, NULL, errp)) {
+    if (!bdrv_all_can_snapshot(has_devices, devices, errp)) {
         return false;
     }
 
     /* Delete old snapshots of the same name */
     if (name) {
         if (overwrite) {
-            if (bdrv_all_delete_snapshot(name, false, NULL, errp) < 0) {
+            if (bdrv_all_delete_snapshot(name, has_devices,
+                                         devices, errp) < 0) {
                 return false;
             }
         } else {
-            ret2 = bdrv_all_has_snapshot(name, false, NULL, errp);
+            ret2 = bdrv_all_has_snapshot(name, has_devices, devices, errp);
             if (ret2 < 0) {
                 return false;
             }
@@ -2775,7 +2779,7 @@ bool save_snapshot(const char *name, bool overwrite, Error **errp)
         }
     }
 
-    bs = bdrv_all_find_vmstate_bs(NULL, false, NULL, errp);
+    bs = bdrv_all_find_vmstate_bs(vmstate, has_devices, devices, errp);
     if (bs == NULL) {
         return false;
     }
@@ -2840,9 +2844,10 @@ bool save_snapshot(const char *name, bool overwrite, Error **errp)
     aio_context_release(aio_context);
     aio_context = NULL;
 
-    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, false, NULL, errp);
+    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size,
+                                   has_devices, devices, errp);
     if (ret < 0) {
-        bdrv_all_delete_snapshot(sn->name, false, NULL, NULL);
+        bdrv_all_delete_snapshot(sn->name, has_devices, devices, NULL);
         goto the_end;
     }
 
@@ -2943,7 +2948,8 @@ void qmp_xen_load_devices_state(const char *filename, Error **errp)
     migration_incoming_state_destroy();
 }
 
-bool load_snapshot(const char *name, Error **errp)
+bool load_snapshot(const char *name, const char *vmstate,
+                   bool has_devices, strList *devices, Error **errp)
 {
     BlockDriverState *bs_vm_state;
     QEMUSnapshotInfo sn;
@@ -2952,10 +2958,10 @@ bool load_snapshot(const char *name, Error **errp)
     AioContext *aio_context;
     MigrationIncomingState *mis = migration_incoming_get_current();
 
-    if (!bdrv_all_can_snapshot(false, NULL, errp)) {
+    if (!bdrv_all_can_snapshot(has_devices, devices, errp)) {
         return false;
     }
-    ret = bdrv_all_has_snapshot(name, false, NULL, errp);
+    ret = bdrv_all_has_snapshot(name, has_devices, devices, errp);
     if (ret < 0) {
         return false;
     }
@@ -2965,7 +2971,7 @@ bool load_snapshot(const char *name, Error **errp)
         return false;
     }
 
-    bs_vm_state = bdrv_all_find_vmstate_bs(NULL, false, NULL, errp);
+    bs_vm_state = bdrv_all_find_vmstate_bs(vmstate, has_devices, devices, errp);
     if (!bs_vm_state) {
         return false;
     }
@@ -2992,7 +2998,7 @@ bool load_snapshot(const char *name, Error **errp)
     /* Flush all IO requests so they don't interfere with the new state.  */
     bdrv_drain_all_begin();
 
-    ret = bdrv_all_goto_snapshot(name, false, NULL, errp);
+    ret = bdrv_all_goto_snapshot(name, has_devices, devices, errp);
     if (ret < 0) {
         goto err_drain;
     }
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 8a3387b72e..ad8bf23577 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1130,7 +1130,7 @@ void hmp_loadvm(Monitor *mon, const QDict *qdict)
 
     vm_stop(RUN_STATE_RESTORE_VM);
 
-    if (!load_snapshot(name, &err) && saved_vm_running) {
+    if (!load_snapshot(name, NULL, false, NULL, &err) && saved_vm_running) {
         vm_start();
     }
     hmp_handle_error(mon, err);
@@ -1140,7 +1140,8 @@ void hmp_savevm(Monitor *mon, const QDict *qdict)
 {
     Error *err = NULL;
 
-    save_snapshot(qdict_get_try_str(qdict, "name"), true, &err);
+    save_snapshot(qdict_get_try_str(qdict, "name"),
+                  true, NULL, false, NULL, &err);
     hmp_handle_error(mon, err);
 }
 
diff --git a/replay/replay-debugging.c b/replay/replay-debugging.c
index 0ae6785b3b..1cde50e9f3 100644
--- a/replay/replay-debugging.c
+++ b/replay/replay-debugging.c
@@ -196,7 +196,7 @@ static void replay_seek(int64_t icount, QEMUTimerCB callback, Error **errp)
         if (icount < replay_get_current_icount()
             || replay_get_current_icount() < snapshot_icount) {
             vm_stop(RUN_STATE_RESTORE_VM);
-            load_snapshot(snapshot, errp);
+            load_snapshot(snapshot, NULL, false, NULL, errp);
         }
         g_free(snapshot);
     }
@@ -327,7 +327,7 @@ void replay_gdb_attached(void)
      */
     if (replay_mode == REPLAY_MODE_PLAY
         && !replay_snapshot) {
-        if (!save_snapshot("start_debugging", true, NULL)) {
+        if (!save_snapshot("start_debugging", true, NULL, false, NULL, NULL)) {
             /* Can't create the snapshot. Continue conventional debugging. */
         }
     }
diff --git a/replay/replay-snapshot.c b/replay/replay-snapshot.c
index 31c5a8702b..e8767a1937 100644
--- a/replay/replay-snapshot.c
+++ b/replay/replay-snapshot.c
@@ -77,13 +77,14 @@ void replay_vmstate_init(void)
 
     if (replay_snapshot) {
         if (replay_mode == REPLAY_MODE_RECORD) {
-            if (!save_snapshot(replay_snapshot, true, &err)) {
+            if (!save_snapshot(replay_snapshot,
+                               true, NULL, false, NULL, &err)) {
                 error_report_err(err);
                 error_report("Could not create snapshot for icount record");
                 exit(1);
             }
         } else if (replay_mode == REPLAY_MODE_PLAY) {
-            if (!load_snapshot(replay_snapshot, &err)) {
+            if (!load_snapshot(replay_snapshot, NULL, false, NULL, &err)) {
                 error_report_err(err);
                 error_report("Could not load snapshot for icount replay");
                 exit(1);
diff --git a/softmmu/vl.c b/softmmu/vl.c
index 8f655086b7..32b353752a 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -2529,7 +2529,7 @@ void qmp_x_exit_preconfig(Error **errp)
 
     if (loadvm) {
         Error *local_err = NULL;
-        if (!load_snapshot(loadvm, &local_err)) {
+        if (!load_snapshot(loadvm, NULL, false, NULL, &local_err)) {
             error_report_err(local_err);
             autostart = 0;
             exit(1);
-- 
2.29.2




* [PATCH v11 09/12] migration: introduce a delete_snapshot wrapper
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (7 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 08/12] migration: wire up support for snapshot device selection Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 10/12] iotests: add support for capturing and matching QMP events Daniel P. Berrangé
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Markus Armbruster,
	Dr. David Alan Gilbert, Pavel Dovgalyuk, Paolo Bonzini,
	Max Reitz

Make snapshot deletion consistent with the snapshot save
and load commands by using a wrapper around the blockdev
layer. The main difference is that we get upfront validation
of the passed in device list (if any).
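
A sketch of the wrapper in use, mirroring what hmp_delvm now does
("snap0" is just an example tag):

    Error *err = NULL;

    /* delete "snap0" across all snapshottable devices */
    if (!delete_snapshot("snap0", false, NULL, &err)) {
        error_report_err(err);
    }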

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 include/migration/snapshot.h | 13 +++++++++++++
 migration/savevm.c           | 14 ++++++++++++++
 monitor/hmp-cmds.c           |  2 +-
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h
index 3bdbef435b..e72083b117 100644
--- a/include/migration/snapshot.h
+++ b/include/migration/snapshot.h
@@ -48,4 +48,17 @@ bool load_snapshot(const char *name,
                    bool has_devices, strList *devices,
                    Error **errp);
 
+/**
+ * delete_snapshot: Delete a snapshot.
+ * @name: path to snapshot
+ * @has_devices: whether to use explicit device list
+ * @devices: explicit device list to snapshot
+ * @errp: pointer to error object
+ * On success, return %true.
+ * On failure, store an error through @errp and return %false.
+ */
+bool delete_snapshot(const char *name,
+                    bool has_devices, strList *devices,
+                    Error **errp);
+
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index fde680efc6..48186918a3 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -3036,6 +3036,20 @@ err_drain:
     return false;
 }
 
+bool delete_snapshot(const char *name, bool has_devices,
+                     strList *devices, Error **errp)
+{
+    if (!bdrv_all_can_snapshot(has_devices, devices, errp)) {
+        return false;
+    }
+
+    if (bdrv_all_delete_snapshot(name, has_devices, devices, errp) < 0) {
+        return false;
+    }
+
+    return true;
+}
+
 void vmstate_register_ram(MemoryRegion *mr, DeviceState *dev)
 {
     qemu_ram_set_idstr(mr->ram_block,
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index ad8bf23577..f8dc3861a6 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1150,7 +1150,7 @@ void hmp_delvm(Monitor *mon, const QDict *qdict)
     Error *err = NULL;
     const char *name = qdict_get_str(qdict, "name");
 
-    bdrv_all_delete_snapshot(name, false, NULL, &err);
+    delete_snapshot(name, false, NULL, &err);
     hmp_handle_error(mon, err);
 }
 
-- 
2.29.2




* [PATCH v11 10/12] iotests: add support for capturing and matching QMP events
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (8 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 09/12] migration: introduce a delete_snapshot wrapper Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 11/12] iotests: fix loading of common.config from tests/ subdir Daniel P. Berrangé
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Markus Armbruster,
	Dr. David Alan Gilbert, Pavel Dovgalyuk, Paolo Bonzini,
	Max Reitz

When using the _launch_qemu and _send_qemu_cmd functions from
common.qemu, any QMP events get mixed in with the output from
the commands and responses.

This makes it difficult to write a test case as the ordering
of events in the output is not stable.

This introduces a variable 'capture_events' which can be set to a
list of event names. Any events listed in this variable will not be
printed, but are instead collected in the $QEMU_EVENTS environment
variable.

A new '_wait_event' function can be invoked to consume events at a
fixed point in time. The function will first pull events cached in
the $QEMU_EVENTS variable, and if none are found, will then read
more from QMP.
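
A condensed sketch of the intended usage, based on the new test added
later in this series; the job id, tag and node name are just example
values, and the QEMU process is assumed to have been started with
_launch_qemu:

    # divert these events into $QEMU_EVENTS instead of echoing them
    export capture_events="JOB_STATUS_CHANGE STOP RESUME"

    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-save\",
                                  \"arguments\": {
                                     \"job-id\": \"save0\",
                                     \"tag\": \"snap0\",
                                     \"vmstate\": \"diskfmt0\",
                                     \"devices\": [\"diskfmt0\"]}}" "return"

    # later, consume the captured events one at a time at a point
    # where the ordering is deterministic
    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"    # created
    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"    # running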

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 tests/qemu-iotests/common.qemu | 106 ++++++++++++++++++++++++++++++++-
 1 file changed, 105 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/common.qemu b/tests/qemu-iotests/common.qemu
index ef105dfc39..0fc52d20d7 100644
--- a/tests/qemu-iotests/common.qemu
+++ b/tests/qemu-iotests/common.qemu
@@ -53,6 +53,15 @@ _in_fd=4
 # If $mismatch_only is set, only non-matching responses will
 # be echoed.
 #
+# If $capture_events is non-empty, then any QMP event names it lists
+# will not be echoed out, but instead collected in the $QEMU_EVENTS
+# variable. The _wait_event function can later be used to receive
+# the cached events.
+#
+# If $only_capture_events is set to anything but an empty string,
+# then an error will be raised if a QMP message is seen which is
+# not an event listed in $capture_events.
+#
 # If $success_or_failure is set, the meaning of the arguments is
 # changed as follows:
 # $2: A string to search for in the response; if found, this indicates
@@ -78,6 +87,31 @@ _timed_wait_for()
     QEMU_STATUS[$h]=0
     while IFS= read -t ${QEMU_COMM_TIMEOUT} resp <&${QEMU_OUT[$h]}
     do
+        if [ -n "$capture_events" ]; then
+            capture=0
+            local evname
+            for evname in $capture_events
+            do
+                case ${resp} in
+                    *\"event\":\ \"${evname}\"* ) capture=1 ;;
+                esac
+            done
+            if [ $capture = 1 ];
+            then
+                ev=$(echo "${resp}" | tr -d '\r' | tr % .)
+                QEMU_EVENTS="${QEMU_EVENTS:+${QEMU_EVENTS}%}${ev}"
+                if [ -n "$only_capture_events" ]; then
+                    return
+                else
+                    continue
+                fi
+            fi
+        fi
+        if [ -n "$only_capture_events" ]; then
+            echo "Only expected $capture_events but got ${resp}"
+            exit 1
+        fi
+
         if [ -z "${silent}" ] && [ -z "${mismatch_only}" ]; then
             echo "${resp}" | _filter_testdir | _filter_qemu \
                            | _filter_qemu_io | _filter_qmp | _filter_hmp
@@ -172,12 +206,82 @@ _send_qemu_cmd()
         let count--;
     done
     if [ ${QEMU_STATUS[$h]} -ne 0 ] && [ -z "${qemu_error_no_exit}" ]; then
-        echo "Timeout waiting for ${1} on handle ${h}"
+        echo "Timeout waiting for command ${1} response on handle ${h}"
         exit 1 #Timeout means the test failed
     fi
 }
 
 
+# Check event cache for a named QMP event
+#
+# Input parameters:
+# $1:       Name of the QMP event to check for
+#
+# Checks if the named QMP event that was previously captured
+# into $QEMU_EVENTS. When matched, the QMP event will be echoed
+# and the $matched variable set to 1.
+#
+# _wait_event is more suitable for test usage in most cases
+_check_cached_events()
+{
+    local evname=${1}
+
+    local match="\"event\": \"$evname\""
+
+    matched=0
+    if [ -n "$QEMU_EVENTS" ]; then
+        CURRENT_QEMU_EVENTS=$QEMU_EVENTS
+        QEMU_EVENTS=
+        old_IFS=$IFS
+        IFS="%"
+        for ev in $CURRENT_QEMU_EVENTS
+        do
+            grep -q "$match" < <(echo "${ev}")
+            if [ $? -eq 0 ] && [ $matched = 0 ]; then
+                echo "${ev}" | _filter_testdir | _filter_qemu \
+                           | _filter_qemu_io | _filter_qmp | _filter_hmp
+                matched=1
+            else
+                QEMU_EVENTS="${QEMU_EVENTS:+${QEMU_EVENTS}%}${ev}"
+            fi
+        done
+        IFS=$old_IFS
+    fi
+}
+
+# Wait for a named QMP event
+#
+# Input parameters:
+# $1:       QEMU handle to use
+# $2:       Name of the QMP event to wait for
+#
+# Checks if the named QMP even was previously captured
+# into $QEMU_EVENTS. If none are present, then waits for the
+# event to arrive on the QMP channel. When matched, the QMP
+# event will be echoed
+_wait_event()
+{
+    local h=${1}
+    local evname=${2}
+
+    while true
+    do
+        _check_cached_events $evname
+
+        if [ $matched = 1 ];
+        then
+            return
+        fi
+
+        only_capture_events=1 qemu_error_no_exit=1 _timed_wait_for ${h}
+
+        if [ ${QEMU_STATUS[$h]} -ne 0 ] ; then
+            echo "Timeout waiting for event ${evname} on handle ${h}"
+            exit 1 #Timeout means the test failed
+        fi
+    done
+}
+
 # Launch a QEMU process.
 #
 # Input parameters:
-- 
2.29.2




* [PATCH v11 11/12] iotests: fix loading of common.config from tests/ subdir
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (9 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 10/12] iotests: add support for capturing and matching QMP events Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 12:48 ` [PATCH v11 12/12] migration: introduce snapshot-{save, load, delete} QMP commands Daniel P. Berrangé
  2021-02-04 15:17 ` [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Dr. David Alan Gilbert
  12 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Philippe Mathieu-Daudé,
	Markus Armbruster, Dr. David Alan Gilbert, Pavel Dovgalyuk,
	Paolo Bonzini, Max Reitz

common.rc assumes it is being sourced from the same directory and
so also tries to source common.config from the current working
directory. Now that named tests can live in the tests/ subdir, we
need to check two locations for common.config.

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 tests/qemu-iotests/common.rc | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/common.rc b/tests/qemu-iotests/common.rc
index 297acf9b6a..77c37e8312 100644
--- a/tests/qemu-iotests/common.rc
+++ b/tests/qemu-iotests/common.rc
@@ -109,8 +109,14 @@ peek_file_raw()
     dd if="$1" bs=1 skip="$2" count="$3" status=none
 }
 
-
-if ! . ./common.config
+config=common.config
+test -f $config || config=../common.config
+if ! test -f $config
+then
+    echo "$0: failed to find common.config"
+    exit 1
+fi
+if ! . $config
     then
     echo "$0: failed to source common.config"
     exit 1
-- 
2.29.2




* [PATCH v11 12/12] migration: introduce snapshot-{save, load, delete} QMP commands
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (10 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 11/12] iotests: fix loading of common.config from tests/ subdir Daniel P. Berrangé
@ 2021-02-04 12:48 ` Daniel P. Berrangé
  2021-02-04 15:34   ` Dr. David Alan Gilbert
                     ` (2 more replies)
  2021-02-04 15:17 ` [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Dr. David Alan Gilbert
  12 siblings, 3 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 12:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, Daniel P. Berrangé,
	qemu-block, Juan Quintela, John Snow, Markus Armbruster,
	Dr. David Alan Gilbert, Pavel Dovgalyuk, Paolo Bonzini,
	Max Reitz

savevm, loadvm and delvm are some of the few HMP commands that have never
been converted to use QMP. The reasons for the lack of conversion are
that they blocked execution of the event thread, and the semantics
around choice of disks were ill-defined.

Despite this downside, however, libvirt and applications using libvirt
have used these commands for as long as QMP has existed, via the
"human-monitor-command" passthrough command. IOW, while it is clearly
desirable to be able to fix the problems, they are not a blocker to
all real world usage.

Meanwhile there is a need for other features which involve adding new
parameters to the commands. This is possible with HMP passthrough, but
it provides no reliable way for apps to introspect features, so using
QAPI modelling is highly desirable.

This patch thus introduces new snapshot-{load,save,delete} commands to
QMP that are intended to replace the old HMP counterparts. The new
commands are given different names, because they will be using the new
QEMU job framework and will therefore have diverging behaviour from
the HMP originals. It would be misleading to keep the same names.

While this design uses the generic job framework, the current
implementation is still blocking. The intention is that the blocking
problem will be fixed later. Nonetheless, applications using these new
commands should assume that they are asynchronous and thus wait for
the job status change event to indicate completion.

In addition to using the job framework, the new commands require the
caller to be explicit about all the block device nodes used in the
snapshot operations, with no built-in default heuristics in use.

Note that the existing "query-named-block-nodes" can be used to query
what snapshots currently exist for block nodes.

Acked-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 migration/savevm.c                            | 184 +++++++
 qapi/job.json                                 |   9 +-
 qapi/migration.json                           | 173 ++++++
 .../tests/internal-snapshots-qapi             | 386 +++++++++++++
 .../tests/internal-snapshots-qapi.out         | 520 ++++++++++++++++++
 5 files changed, 1271 insertions(+), 1 deletion(-)
 create mode 100755 tests/qemu-iotests/tests/internal-snapshots-qapi
 create mode 100644 tests/qemu-iotests/tests/internal-snapshots-qapi.out

diff --git a/migration/savevm.c b/migration/savevm.c
index 48186918a3..6b320423c7 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -3077,3 +3077,187 @@ bool vmstate_check_only_migratable(const VMStateDescription *vmsd)
 
     return !(vmsd && vmsd->unmigratable);
 }
+
+typedef struct SnapshotJob {
+    Job common;
+    char *tag;
+    char *vmstate;
+    strList *devices;
+    Coroutine *co;
+    Error **errp;
+    bool ret;
+} SnapshotJob;
+
+static void qmp_snapshot_job_free(SnapshotJob *s)
+{
+    g_free(s->tag);
+    g_free(s->vmstate);
+    qapi_free_strList(s->devices);
+}
+
+
+static void snapshot_load_job_bh(void *opaque)
+{
+    Job *job = opaque;
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+    int orig_vm_running;
+
+    job_progress_set_remaining(&s->common, 1);
+
+    orig_vm_running = runstate_is_running();
+    vm_stop(RUN_STATE_RESTORE_VM);
+
+    s->ret = load_snapshot(s->tag, s->vmstate, true, s->devices, s->errp);
+    if (s->ret && orig_vm_running) {
+        vm_start();
+    }
+
+    job_progress_update(&s->common, 1);
+
+    qmp_snapshot_job_free(s);
+    aio_co_wake(s->co);
+}
+
+static void snapshot_save_job_bh(void *opaque)
+{
+    Job *job = opaque;
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+
+    job_progress_set_remaining(&s->common, 1);
+    s->ret = save_snapshot(s->tag, false, s->vmstate,
+                           true, s->devices, s->errp);
+    job_progress_update(&s->common, 1);
+
+    qmp_snapshot_job_free(s);
+    aio_co_wake(s->co);
+}
+
+static void snapshot_delete_job_bh(void *opaque)
+{
+    Job *job = opaque;
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+
+    job_progress_set_remaining(&s->common, 1);
+    s->ret = delete_snapshot(s->tag, true, s->devices, s->errp);
+    job_progress_update(&s->common, 1);
+
+    qmp_snapshot_job_free(s);
+    aio_co_wake(s->co);
+}
+
+static int coroutine_fn snapshot_save_job_run(Job *job, Error **errp)
+{
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+    s->errp = errp;
+    s->co = qemu_coroutine_self();
+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
+                            snapshot_save_job_bh, job);
+    qemu_coroutine_yield();
+    return s->ret ? 0 : -1;
+}
+
+static int coroutine_fn snapshot_load_job_run(Job *job, Error **errp)
+{
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+    s->errp = errp;
+    s->co = qemu_coroutine_self();
+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
+                            snapshot_load_job_bh, job);
+    qemu_coroutine_yield();
+    return s->ret ? 0 : -1;
+}
+
+static int coroutine_fn snapshot_delete_job_run(Job *job, Error **errp)
+{
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+    s->errp = errp;
+    s->co = qemu_coroutine_self();
+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
+                            snapshot_delete_job_bh, job);
+    qemu_coroutine_yield();
+    return s->ret ? 0 : -1;
+}
+
+
+static const JobDriver snapshot_load_job_driver = {
+    .instance_size = sizeof(SnapshotJob),
+    .job_type      = JOB_TYPE_SNAPSHOT_LOAD,
+    .run           = snapshot_load_job_run,
+};
+
+static const JobDriver snapshot_save_job_driver = {
+    .instance_size = sizeof(SnapshotJob),
+    .job_type      = JOB_TYPE_SNAPSHOT_SAVE,
+    .run           = snapshot_save_job_run,
+};
+
+static const JobDriver snapshot_delete_job_driver = {
+    .instance_size = sizeof(SnapshotJob),
+    .job_type      = JOB_TYPE_SNAPSHOT_DELETE,
+    .run           = snapshot_delete_job_run,
+};
+
+
+void qmp_snapshot_save(const char *job_id,
+                       const char *tag,
+                       const char *vmstate,
+                       strList *devices,
+                       Error **errp)
+{
+    SnapshotJob *s;
+
+    s = job_create(job_id, &snapshot_save_job_driver, NULL,
+                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
+                   NULL, NULL, errp);
+    if (!s) {
+        return;
+    }
+
+    s->tag = g_strdup(tag);
+    s->vmstate = g_strdup(vmstate);
+    s->devices = QAPI_CLONE(strList, devices);
+
+    job_start(&s->common);
+}
+
+void qmp_snapshot_load(const char *job_id,
+                       const char *tag,
+                       const char *vmstate,
+                       strList *devices,
+                       Error **errp)
+{
+    SnapshotJob *s;
+
+    s = job_create(job_id, &snapshot_load_job_driver, NULL,
+                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
+                   NULL, NULL, errp);
+    if (!s) {
+        return;
+    }
+
+    s->tag = g_strdup(tag);
+    s->vmstate = g_strdup(vmstate);
+    s->devices = QAPI_CLONE(strList, devices);
+
+    job_start(&s->common);
+}
+
+void qmp_snapshot_delete(const char *job_id,
+                         const char *tag,
+                         strList *devices,
+                         Error **errp)
+{
+    SnapshotJob *s;
+
+    s = job_create(job_id, &snapshot_delete_job_driver, NULL,
+                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
+                   NULL, NULL, errp);
+    if (!s) {
+        return;
+    }
+
+    s->tag = g_strdup(tag);
+    s->devices = QAPI_CLONE(strList, devices);
+
+    job_start(&s->common);
+}
diff --git a/qapi/job.json b/qapi/job.json
index 280c2f76f1..1a6ef03451 100644
--- a/qapi/job.json
+++ b/qapi/job.json
@@ -22,10 +22,17 @@
 #
 # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1)
 #
+# @snapshot-load: snapshot load job type, see "snapshot-load" (since 6.0)
+#
+# @snapshot-save: snapshot save job type, see "snapshot-save" (since 6.0)
+#
+# @snapshot-delete: snapshot delete job type, see "snapshot-delete" (since 6.0)
+#
 # Since: 1.7
 ##
 { 'enum': 'JobType',
-  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend'] }
+  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend',
+           'snapshot-load', 'snapshot-save', 'snapshot-delete'] }
 
 ##
 # @JobStatus:
diff --git a/qapi/migration.json b/qapi/migration.json
index d1d9632c2a..5ca0ff9bed 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1843,3 +1843,176 @@
 # Since: 5.2
 ##
 { 'command': 'query-dirty-rate', 'returns': 'DirtyRateInfo' }
+
+##
+# @snapshot-save:
+#
+# Save a VM snapshot
+#
+# @job-id: identifier for the newly created job
+# @tag: name of the snapshot to create
+# @vmstate: block device node name to save vmstate to
+# @devices: list of block device node names to save a snapshot to
+#
+# Applications should not assume that the snapshot save is complete
+# when this command returns. The job commands / events must be used
+# to determine completion and to fetch details of any errors that arise.
+#
+# Note that execution of the guest CPUs may be stopped during the
+# time it takes to save the snapshot. A future version of QEMU
+# may ensure CPUs are executing continuously.
+#
+# It is strongly recommended that @devices contain all writable
+# block device nodes if a consistent snapshot is required.
+#
+# If @tag already exists, an error will be reported
+#
+# Returns: nothing
+#
+# Example:
+#
+# -> { "execute": "snapshot-save",
+#      "data": {
+#         "job-id": "snapsave0",
+#         "tag": "my-snap",
+#         "vmstate": "disk0",
+#         "devices": ["disk0", "disk1"]
+#      }
+#    }
+# <- { "return": { } }
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "created", "id": "snapsave0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "running", "id": "snapsave0"}}
+# <- {"event": "STOP"}
+# <- {"event": "RESUME"}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "waiting", "id": "snapsave0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "pending", "id": "snapsave0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "concluded", "id": "snapsave0"}}
+# -> {"execute": "query-jobs"}
+# <- {"return": [{"current-progress": 1,
+#                 "status": "concluded",
+#                 "total-progress": 1,
+#                 "type": "snapshot-save",
+#                 "id": "snapsave0"}]}
+#
+# Since: 6.0
+##
+{ 'command': 'snapshot-save',
+  'data': { 'job-id': 'str',
+            'tag': 'str',
+            'vmstate': 'str',
+            'devices': ['str'] } }
+
+##
+# @snapshot-load:
+#
+# Load a VM snapshot
+#
+# @job-id: identifier for the newly created job
+# @tag: name of the snapshot to load.
+# @vmstate: block device node name to load vmstate from
+# @devices: list of block device node names to load a snapshot from
+#
+# Applications should not assume that the snapshot load is complete
+# when this command returns. The job commands / events must be used
+# to determine completion and to fetch details of any errors that arise.
+#
+# Note that execution of the guest CPUs will be stopped during the
+# time it takes to load the snapshot.
+#
+# It is strongly recommended that @devices contain all writable
+# block device nodes that can have changed since the original
+# @snapshot-save command execution.
+#
+# Returns: nothing
+#
+# Example:
+#
+# -> { "execute": "snapshot-load",
+#      "data": {
+#         "job-id": "snapload0",
+#         "tag": "my-snap",
+#         "vmstate": "disk0",
+#         "devices": ["disk0", "disk1"]
+#      }
+#    }
+# <- { "return": { } }
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "created", "id": "snapload0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "running", "id": "snapload0"}}
+# <- {"event": "STOP"}
+# <- {"event": "RESUME"}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "waiting", "id": "snapload0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "pending", "id": "snapload0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "concluded", "id": "snapload0"}}
+# -> {"execute": "query-jobs"}
+# <- {"return": [{"current-progress": 1,
+#                 "status": "concluded",
+#                 "total-progress": 1,
+#                 "type": "snapshot-load",
+#                 "id": "snapload0"}]}
+#
+# Since: 6.0
+##
+{ 'command': 'snapshot-load',
+  'data': { 'job-id': 'str',
+            'tag': 'str',
+            'vmstate': 'str',
+            'devices': ['str'] } }
+
+##
+# @snapshot-delete:
+#
+# Delete a VM snapshot
+#
+# @job-id: identifier for the newly created job
+# @tag: name of the snapshot to delete.
+# @devices: list of block device node names to delete a snapshot from
+#
+# Applications should not assume that the snapshot delete is complete
+# when this command returns. The job commands / events must be used
+# to determine completion and to fetch details of any errors that arise.
+#
+# Returns: nothing
+#
+# Example:
+#
+# -> { "execute": "snapshot-delete",
+#      "data": {
+#         "job-id": "snapdelete0",
+#         "tag": "my-snap",
+#         "devices": ["disk0", "disk1"]
+#      }
+#    }
+# <- { "return": { } }
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "created", "id": "snapdelete0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "running", "id": "snapdelete0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "waiting", "id": "snapdelete0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "pending", "id": "snapdelete0"}}
+# <- {"event": "JOB_STATUS_CHANGE",
+#     "data": {"status": "concluded", "id": "snapdelete0"}}
+# -> {"execute": "query-jobs"}
+# <- {"return": [{"current-progress": 1,
+#                 "status": "concluded",
+#                 "total-progress": 1,
+#                 "type": "snapshot-delete",
+#                 "id": "snapdelete0"}]}
+#
+# Since: 6.0
+##
+{ 'command': 'snapshot-delete',
+  'data': { 'job-id': 'str',
+            'tag': 'str',
+            'devices': ['str'] } }
diff --git a/tests/qemu-iotests/tests/internal-snapshots-qapi b/tests/qemu-iotests/tests/internal-snapshots-qapi
new file mode 100755
index 0000000000..6467eaaac0
--- /dev/null
+++ b/tests/qemu-iotests/tests/internal-snapshots-qapi
@@ -0,0 +1,386 @@
+#!/usr/bin/env bash
+# group: rw auto quick snapshot
+#
+# Test which nodes are involved in internal snapshots
+#
+# Copyright (C) 2020-2021 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+# creator
+owner=berrange@redhat.com
+
+seq=`basename $0`
+echo "QA output created by $seq"
+
+status=1        # failure is the default!
+
+_cleanup()
+{
+    _cleanup_qemu
+    _cleanup_test_img
+    TEST_IMG="$TEST_IMG.alt1" _cleanup_test_img
+    TEST_IMG="$TEST_IMG.alt2" _cleanup_test_img
+    rm -f "$SOCK_DIR/nbd"
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+. ../common.rc
+. ../common.filter
+. ../common.qemu
+
+_supported_fmt qcow2
+_supported_proto file
+_supported_os Linux
+_require_drivers copy-on-read
+
+# Internal snapshots are (currently) impossible with refcount_bits=1,
+# and generally impossible with external data files
+_unsupported_imgopts 'refcount_bits=1[^0-9]' data_file
+
+_require_devices virtio-blk
+
+
+size=128M
+
+if [ -n "$BACKING_FILE" ]; then
+    _make_test_img -b "$BACKING_FILE" -F $IMGFMT $size
+else
+    _make_test_img $size
+fi
+TEST_IMG="$TEST_IMG.alt1" _make_test_img $size
+IMGOPTS= IMGFMT=raw TEST_IMG="$TEST_IMG.alt2" _make_test_img $size
+
+export capture_events="JOB_STATUS_CHANGE STOP RESUME"
+
+wait_job()
+{
+    local job=$1
+    shift
+
+    # All jobs start with two events...
+    #
+    # created
+    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
+    # running
+    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
+
+    # Next events vary depending on job type and
+    # whether it succeeds or not.
+    for evname in $@
+    do
+        _wait_event $QEMU_HANDLE $evname
+    done
+
+    # All jobs finish off with two more events...
+    # concluded
+    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"query-jobs\"}" "return"
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"job-dismiss\", \"arguments\": {\"id\": \"$job\"}}" "return"
+    # null
+    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
+}
+
+run_save()
+{
+    local job=$1
+    local vmstate=$2
+    local devices=$3
+    local fail=$4
+
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-save\",
+                                  \"arguments\": {
+                                     \"job-id\": \"$job\",
+                                     \"tag\": \"snap0\",
+                                     \"vmstate\": \"$vmstate\",
+                                     \"devices\": $devices}}" "return"
+
+    if [ $fail = 0 ]; then
+        # job status: waiting, pending
+        wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
+    else
+        # job status: aborting
+        wait_job $job "JOB_STATUS_CHANGE"
+    fi
+}
+
+run_load()
+{
+    local job=$1
+    local vmstate=$2
+    local devices=$3
+    local fail=$4
+
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-load\",
+                                  \"arguments\": {
+                                     \"job-id\": \"$job\",
+                                     \"tag\": \"snap0\",
+                                     \"vmstate\": \"$vmstate\",
+                                     \"devices\": $devices}}" "return"
+    if [ $fail = 0 ]; then
+        # job status: waiting, pending
+        wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
+    else
+        # job status: aborting
+        wait_job $job "STOP" "JOB_STATUS_CHANGE"
+    fi
+}
+
+run_delete()
+{
+    local job=$1
+    local devices=$2
+    local fail=$3
+
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-delete\",
+                                  \"arguments\": {
+                                     \"job-id\": \"$job\",
+                                     \"tag\": \"snap0\",
+                                     \"devices\": $devices}}" "return"
+    if [ $fail = 0 ]; then
+        # job status: waiting, pending
+        wait_job $job "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
+    else
+        # job status: aborting
+        wait_job $job "JOB_STATUS_CHANGE"
+    fi
+}
+
+start_qemu()
+{
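+    # Keep QEMU's stderr in the test output so vmstate load failures are visible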
+    keep_stderr=y
+    _launch_qemu -nodefaults -nographic "$@"
+
+    _send_qemu_cmd $QEMU_HANDLE '{"execute": "qmp_capabilities"}' 'return'
+}
+
+stop_qemu()
+{
+    _send_qemu_cmd $QEMU_HANDLE '{"execute": "quit"}' 'return'
+
+    wait=1 _cleanup_qemu
+}
+
+
+echo
+echo "=====  Snapshot single qcow2 image ====="
+echo
+
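+# Basic lifecycle: save, load and then delete a snapshot on a single
+# qcow2 disk
+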
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_save "save-simple" "diskfmt0" "[\"diskfmt0\"]" 0
+run_load "load-simple" "diskfmt0" "[\"diskfmt0\"]" 0
+run_delete "delete-simple" "[\"diskfmt0\"]" 0
+stop_qemu
+
+
+echo
+echo "=====  Snapshot no image ====="
+echo
+
+# When snapshotting we need to pass at least one writable disk,
+# otherwise there's no work to do
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_save "save-no-image" "diskfmt0" "[]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot missing image ====="
+echo
+
+# The block node names we pass need to actually exist
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_save "save-missing-image" "diskfmt1729" "[\"diskfmt1729\"]" 1
+stop_qemu
+
+echo
+echo "=====  Snapshot vmstate not in devices list ====="
+echo
+
+# The node name referred to for vmstate must be one of the nodes
+# being included in the snapshot; otherwise the vmstate that is
+# captured is liable to be overwritten, making a subsequent load
+# impossible
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
+    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}"
+run_save "save-excluded-vmstate" "diskfmt0" "[\"diskfmt1\"]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot protocol instead of format ====="
+echo
+
+# The snapshot has to be done against the qcow2 format layer
+# not the underlying file protocol layer
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_save "save-proto-not-fmt" "disk0" "[\"disk0\"]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot dual qcow2 image ====="
+echo
+
+# We can snapshot multiple qcow2 disks at the same time
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
+    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}"
+run_save "save-dual-image" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
+run_load "load-dual-image" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
+run_delete "delete-dual-image" "[\"diskfmt0\", \"diskfmt1\"]" 0
+stop_qemu
+
+
+echo
+echo "=====  Snapshot error with raw image ====="
+echo
+
+# If we're snapshotting multiple disks, all must be capable
+# of supporting snapshots. A raw disk in the list must cause
+# an error.
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
+    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \
+    -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}"
+run_save "save-raw-fmt" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\", \"diskfmt2\"]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot with raw image excluded ====="
+echo
+
+# If we're snapshotting multiple disks, all must be capable
+# of supporting snapshots. A writable raw disk can be excluded
+# from the snapshot, though it means its data won't be restored
+# by a later snapshot load operation.
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
+    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \
+    -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}"
+run_save "save-skip-raw" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
+run_load "load-skip-raw" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
+run_delete "delete-skip-raw" "[\"diskfmt0\", \"diskfmt1\"]" 0
+stop_qemu
+
+echo
+echo "=====  Snapshot bad error reporting to stderr ====="
+echo
+
+# This demonstrates that we're not capturing vmstate loading failures
+# into QMP errors; they end up on stderr instead. vmstate needs
+# to report errors via the Error object, but that is a major piece of work
+# for the future. This test case's expected output log will need
+# adjusting when that is done.
+
+start_qemu \
+    -device virtio-rng \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+
+run_save "save-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 0
+stop_qemu
+
+# leave off virtio-rng to provoke vmstate failure
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+
+run_load "load-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 1
+run_delete "delete-err-stderr" "[\"diskfmt0\"]" 0
+
+stop_qemu
+
+
+echo
+echo "=====  Snapshot reuse same tag ====="
+echo
+
+# Validates that we get an error when reusing a snapshot tag that
+# already exists
+
+start_qemu \
+    -device virtio-rng \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+
+run_save "save-err-stderr-initial" "diskfmt0" "[\"diskfmt0\"]" 0
+run_save "save-err-stderr-repeat1" "diskfmt0" "[\"diskfmt0\"]" 1
+run_delete "delete-err-stderr" "[\"diskfmt0\"]" 0
+run_save "save-err-stderr-repeat2" "diskfmt0" "[\"diskfmt0\"]" 0
+run_delete "delete-err-stderr-repeat2" "[\"diskfmt0\"]" 0
+
+stop_qemu
+
+echo
+echo "=====  Snapshot load does not exist ====="
+echo
+
+# Validates that we get an error when loading a snapshot that does
+# not exist
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_load "load-missing-snapshot" "diskfmt0" "[\"diskfmt0\"]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot delete does not exist ====="
+echo
+
+# Validates that we don't get an error when deleting a snapshot that
+# does not exist
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_delete "delete-missing-snapshot" "[\"diskfmt0\"]" 0
+stop_qemu
+
+
+# success, all done
+echo "*** done"
+rm -f $seq.full
+status=0
diff --git a/tests/qemu-iotests/tests/internal-snapshots-qapi.out b/tests/qemu-iotests/tests/internal-snapshots-qapi.out
new file mode 100644
index 0000000000..26ff4a838c
--- /dev/null
+++ b/tests/qemu-iotests/tests/internal-snapshots-qapi.out
@@ -0,0 +1,520 @@
+QA output created by internal-snapshots-qapi
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
+Formatting 'TEST_DIR/t.IMGFMT.alt1', fmt=IMGFMT size=134217728
+Formatting 'TEST_DIR/t.qcow2.alt2', fmt=IMGFMT size=134217728
+
+=====  Snapshot single qcow2 image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-simple",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-simple"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-simple"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-simple"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-simple"}}
+{"execute": "snapshot-load",
+                                  "arguments": {
+                                     "job-id": "load-simple",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-simple"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-simple"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-simple"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-simple"}}
+{"execute": "snapshot-delete",
+                                  "arguments": {
+                                     "job-id": "delete-simple",
+                                     "tag": "snap0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-simple"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-simple"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-simple"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-simple"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot no image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-no-image",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": []}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-no-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-no-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-no-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-no-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-no-image", "error": "At least one device is required for snapshot"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-no-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-no-image"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot missing image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-missing-image",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt1729",
+                                     "devices": ["diskfmt1729"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-missing-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-missing-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-missing-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-missing-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-missing-image", "error": "No block device node 'diskfmt1729'"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-missing-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-missing-image"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot vmstate not in devices list =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-excluded-vmstate",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-excluded-vmstate"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-excluded-vmstate"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-excluded-vmstate"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-excluded-vmstate"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-excluded-vmstate", "error": "vmstate block device 'diskfmt0' does not exist"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-excluded-vmstate"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-excluded-vmstate"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot protocol instead of format =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-proto-not-fmt",
+                                     "tag": "snap0",
+                                     "vmstate": "disk0",
+                                     "devices": ["disk0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-proto-not-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-proto-not-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-proto-not-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-proto-not-fmt"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-proto-not-fmt", "error": "Device 'disk0' is writable but does not support snapshots"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-proto-not-fmt"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-proto-not-fmt"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot dual qcow2 image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-dual-image",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-dual-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-dual-image"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-dual-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-dual-image"}}
+{"execute": "snapshot-load",
+                                  "arguments": {
+                                     "job-id": "load-dual-image",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-dual-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-dual-image"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-dual-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-dual-image"}}
+{"execute": "snapshot-delete",
+                                  "arguments": {
+                                     "job-id": "delete-dual-image",
+                                     "tag": "snap0",
+                                     "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-dual-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-dual-image"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-dual-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-dual-image"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot error with raw image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-raw-fmt",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0", "diskfmt1", "diskfmt2"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-raw-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-raw-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-raw-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-raw-fmt"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-raw-fmt", "error": "Device 'diskfmt2' is writable but does not support snapshots"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-raw-fmt"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-raw-fmt"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot with raw image excluded =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-skip-raw",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-skip-raw"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-skip-raw"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-skip-raw"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-skip-raw"}}
+{"execute": "snapshot-load",
+                                  "arguments": {
+                                     "job-id": "load-skip-raw",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-skip-raw"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-skip-raw"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-skip-raw"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-skip-raw"}}
+{"execute": "snapshot-delete",
+                                  "arguments": {
+                                     "job-id": "delete-skip-raw",
+                                     "tag": "snap0",
+                                     "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-skip-raw"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-skip-raw"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-skip-raw"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-skip-raw"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot bad error reporting to stderr =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-err-stderr",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-load",
+                                  "arguments": {
+                                     "job-id": "load-err-stderr",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-err-stderr"}}
+qemu-system-x86_64: Unknown savevm section or instance '0000:00:02.0/virtio-rng' 0. Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-err-stderr"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-err-stderr", "error": "Error -22 while loading VM state"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-err-stderr"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-err-stderr"}}
+{"execute": "snapshot-delete",
+                                  "arguments": {
+                                     "job-id": "delete-err-stderr",
+                                     "tag": "snap0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot reuse same tag =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-err-stderr-initial",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-initial"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-initial"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr-initial"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr-initial"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-initial"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-initial"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-initial"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-initial"}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-err-stderr-repeat1",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-repeat1"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-repeat1"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-err-stderr-repeat1"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-repeat1"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-repeat1", "error": "Snapshot 'snap0' already exists in one or more devices"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-repeat1"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-repeat1"}}
+{"execute": "snapshot-delete",
+                                  "arguments": {
+                                     "job-id": "delete-err-stderr",
+                                     "tag": "snap0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr"}}
+{"execute": "snapshot-save",
+                                  "arguments": {
+                                     "job-id": "save-err-stderr-repeat2",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-repeat2"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-repeat2"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-repeat2"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-repeat2"}}
+{"execute": "snapshot-delete",
+                                  "arguments": {
+                                     "job-id": "delete-err-stderr-repeat2",
+                                     "tag": "snap0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr-repeat2"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr-repeat2"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr-repeat2"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr-repeat2"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot load does not exist =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-load",
+                                  "arguments": {
+                                     "job-id": "load-missing-snapshot",
+                                     "tag": "snap0",
+                                     "vmstate": "diskfmt0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-missing-snapshot"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-missing-snapshot", "error": "Snapshot 'snap0' does not exist in one or more devices"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-missing-snapshot"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-missing-snapshot"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot delete does not exist =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-delete",
+                                  "arguments": {
+                                     "job-id": "delete-missing-snapshot",
+                                     "tag": "snap0",
+                                     "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-missing-snapshot"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-missing-snapshot"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-missing-snapshot"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-missing-snapshot"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+*** done
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread
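
For readers experimenting with the new commands outside the iotest harness,
here is a minimal QMP client sketch. It is not part of the patch: the socket
path, image name and job id are placeholders, and a production client would
subscribe to JOB_STATUS_CHANGE events rather than polling query-jobs. It runs
the same snapshot-save / query-jobs / job-dismiss sequence the test exercises.

#!/usr/bin/env python3
# Illustrative sketch only.  Assumes QEMU was started with something like:
#   qemu-system-x86_64 -nodefaults -nographic \
#     -qmp unix:/tmp/qmp.sock,server,nowait \
#     -blockdev driver=file,filename=disk.qcow2,node-name=disk0 \
#     -blockdev driver=qcow2,file=disk0,node-name=diskfmt0
import json
import socket
import time

SOCK_PATH = "/tmp/qmp.sock"   # placeholder socket path


def main():
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(SOCK_PATH)
    f = sock.makefile("rw")

    def recv():
        # QMP emits one JSON object per line
        return json.loads(f.readline())

    def cmd(execute, arguments=None):
        msg = {"execute": execute}
        if arguments:
            msg["arguments"] = arguments
        f.write(json.dumps(msg) + "\n")
        f.flush()
        while True:
            reply = recv()
            # Skip asynchronous events; a real client would record them
            if "return" in reply or "error" in reply:
                return reply

    recv()                       # QMP greeting
    cmd("qmp_capabilities")

    cmd("snapshot-save", {"job-id": "save0", "tag": "snap0",
                          "vmstate": "diskfmt0", "devices": ["diskfmt0"]})

    # Poll the job until it concludes, then dismiss it
    while True:
        jobs = cmd("query-jobs")["return"]
        job = next(j for j in jobs if j["id"] == "save0")
        if job["status"] == "concluded":
            if "error" in job:
                print("snapshot-save failed:", job["error"])
            break
        time.sleep(0.1)
    cmd("job-dismiss", {"id": "save0"})


if __name__ == "__main__":
    main()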

* Re: [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP
  2021-02-04 12:48 [PATCH v11 00/12] migration: bring improved savevm/loadvm/delvm to QMP Daniel P. Berrangé
                   ` (11 preceding siblings ...)
  2021-02-04 12:48 ` [PATCH v11 12/12] migration: introduce snapshot-{save, load, delete} QMP commands Daniel P. Berrangé
@ 2021-02-04 15:17 ` Dr. David Alan Gilbert
  12 siblings, 0 replies; 18+ messages in thread
From: Dr. David Alan Gilbert @ 2021-02-04 15:17 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, qemu-block,
	Juan Quintela, John Snow, qemu-devel, Markus Armbruster,
	Pavel Dovgalyuk, Paolo Bonzini, Max Reitz

* Daniel P. Berrangé (berrange@redhat.com) wrote:
>  v1: https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg00866.html
>  v2: https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg07523.html
>  v3: https://lists.gnu.org/archive/html/qemu-devel/2020-08/msg07076.html
>  v4: https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg05221.html
>  v5: https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg00587.html
>  v6: https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg02158.html
>  v7: https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg06205.html
>  v8: https://lists.gnu.org/archive/html/qemu-devel/2020-11/msg06464.html
>  v9: https://lists.gnu.org/archive/html/qemu-devel/2021-01/msg05016.html
>  vA: https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg00620.html
> 
> This series aims to provide a better designed replacement for the
> savevm/loadvm/delvm HMP commands, which despite their flaws continue
> to be actively used in the QMP world via the HMP command passthrough
> facility.

Queued.

> The main problems addressed are:
> 
>  - The logic to pick which disk to store the vmstate in is not
>    satsifactory.
> 
>    The first block driver state cannot be assumed to be the root disk
>    image, it might be OVMF varstore and we don't want to store vmstate
>    in there.
> 
>  - The logic to decide which disks must be snapshotted is hardwired
>    to all disks which are writable
> 
>    Again with OVMF there might be a writable varstore, but this can be
>    raw rather than qcow2 format, and thus unable to be snapshotted.
>    While users might wish to snapshot their varstore, in some/many/most
>    cases it is entirely uneccessary. Users are blocked from snapshotting
>    their VM though due to this varstore.
> 
>  - The commands are synchronous blocking execution and returning
>    errors immediately.
> 
>    This is partially addressed by integrating with the job framework.
>    This forces the client to use the async commands to determine
>    the completion status or error message from the operations.
> 
> In the block code I've only dealt with node names for block devices, as
> IIUC, this is all that libvirt should need in the -blockdev world it now
> lives in. IOW, I've made not attempt to cope with people wanting to use
> these QMP commands in combination with -drive args, as libvirt will
> never use -drive with a QEMU new enough to have these new commands.
> 
> The main limitations of this current impl
> 
>  - The snapshot process runs serialized in the main thread. ie QEMU
>    guest execution is blocked for the duration. The job framework
>    lets us fix this in future without changing the QMP semantics
>    exposed to the apps.
> 
>  - Most vmstate loading errors just go to stderr, as they are not
>    using Error **errp reporting. Thus the job framework just
>    reports a fairly generic message
> 
>      "Error -22 while loading VM state"
> 
>    Again this can be fixed later without changing the QMP semantics
>    exposed to apps.
> 
> I've done some minimal work in libvirt to start to make use of the new
> commands to validate their functionality, but this isn't finished yet.
> 
> My ultimate goal is to make the GNOME Boxes maintainer happy again by
> having internal snapshots work with OVMF:
> 
>   https://gitlab.gnome.org/GNOME/gnome-boxes/-/commit/c486da262f6566326fbcb5ef45c5f64048f16a6e
> 
> Changed in v11:
> 
>  - Add missing docs for events for snapshot-delete
>  - Fix mistaken operation name in snapshot-delete docs
> 
> Changed in v10:
> 
>  - Fix some mis-placed patch chunks
>  - Update qapi version number annotations
>  - Move iotests to new naming scheme
>  - Fix shell based iotests in tests/qemu-iotests/tests subdir
>  - Expand QAPI examples
>  - Remove bogus submodule commit update
>  - Optimize shell pattern matching code
>  - Misc other typo/whitespace fixes
> 
> Changed in v9:
> 
>  - Rebase to git master to resolve conflicts
>  - Fixed accidental regression in error handling in previous v8
>  - Fixed formatting of iotest expected output now that we switched
>    to preserving whitespace in QMP input
> 
> Changed in v8:
> 
>  - Rebase to git master to resolve conflicts
>  - Updated QAPI since versions to 6.0
> 
> Changed in v7:
> 
>  - Incorporate changes from:
> 
>      https://lists.gnu.org/archive/html/qemu-devel/2020-10/msg03165.html
> 
>  - Tweaked error message
> 
> Changed in v6:
> 
>  - Resolve many conflicts with recent replay changes
>  - Misc typos in QAPI
> 
> Changed in v5:
> 
>  - Fix prevention of tag overwriting
>  - Refactor and expand test suite coverage to validate
>    more negative scenarios
> 
> Changed in v4:
> 
>  - Make the device lists mandatory, dropping all support for
>    QEMU's built-in heuristics to select devices.
> 
>  - Improve some error reporting and I/O test coverage
> 
> Changed in v3:
> 
>  - Schedule a bottom half to escape from coroutine context in
>    the jobs. This is needed because the locking in the snapshot
>    code goes horribly wrong when run from a background coroutine
>    instead of the main event thread.
> 
>  - Re-factor the way we iterate over devices, so that we correctly
>    report non-existent devices passed by the user over QMP.
> 
>  - Add QAPI docs notes about limitations wrt vmstate error
>    reporting (it all goes to stderr not an Error **errp)
>    so QMP only gets a fairly generic error message currently.
> 
>  - Add I/O test to validate many usage scenarios / errors
> 
>  - Add I/O test helpers to handle QMP events with a deterministic
>    ordering
> 
>  - Ensure 'delete-snapshot' reports an error if requesting
>    delete from devices that don't support snapshot, instead of
>    silently succeeding with no error.
> 
> Changed in v2:
> 
>  - Use new command names "snapshot-{load,save,delete}" to make it
>    clear that these are different from the "savevm|loadvm|delvm"
>    as they use the Job framework
> 
>  - Use an include list for block devs, not an exclude list
> 
> Daniel P. Berrangé (11):
>   block: push error reporting into bdrv_all_*_snapshot functions
>   migration: stop returning errno from load_snapshot()
>   block: add ability to specify list of blockdevs during snapshot
>   block: allow specifying name of block device for vmstate storage
>   block: rename and alter bdrv_all_find_snapshot semantics
>   migration: control whether snapshots are overwritten
>   migration: wire up support for snapshot device selection
>   migration: introduce a delete_snapshot wrapper
>   iotests: add support for capturing and matching QMP events
>   iotests: fix loading of common.config from tests/ subdir
>   migration: introduce snapshot-{save,load,delete} QMP commands
> 
> Philippe Mathieu-Daudé (1):
>   migration: Make save_snapshot() return bool, not 0/-1
> 
>  block/monitor/block-hmp-cmds.c                |   7 +-
>  block/snapshot.c                              | 256 ++++++---
>  include/block/snapshot.h                      |  23 +-
>  include/migration/snapshot.h                  |  47 +-
>  migration/savevm.c                            | 296 ++++++++--
>  monitor/hmp-cmds.c                            |  12 +-
>  qapi/job.json                                 |   9 +-
>  qapi/migration.json                           | 173 ++++++
>  replay/replay-debugging.c                     |  12 +-
>  replay/replay-snapshot.c                      |   5 +-
>  softmmu/vl.c                                  |   2 +-
>  tests/qemu-iotests/267.out                    |  12 +-
>  tests/qemu-iotests/common.qemu                | 106 +++-
>  tests/qemu-iotests/common.rc                  |  10 +-
>  .../tests/internal-snapshots-qapi             | 386 +++++++++++++
>  .../tests/internal-snapshots-qapi.out         | 520 ++++++++++++++++++
>  16 files changed, 1721 insertions(+), 155 deletions(-)
>  create mode 100755 tests/qemu-iotests/tests/internal-snapshots-qapi
>  create mode 100644 tests/qemu-iotests/tests/internal-snapshots-qapi.out
> 
> -- 
> 2.29.2
> 
> 
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v11 12/12] migration: introduce snapshot-{save, load, delete} QMP commands
  2021-02-04 12:48 ` [PATCH v11 12/12] migration: introduce snapshot-{save, load, delete} QMP commands Daniel P. Berrangé
@ 2021-02-04 15:34   ` Dr. David Alan Gilbert
  2021-02-04 15:38     ` Daniel P. Berrangé
  2021-02-04 15:40   ` [PATCH v11 12/12] migration: introduce snapshot-{save,load,delete} " Eric Blake
  2021-02-16 18:58   ` John Snow
  2 siblings, 1 reply; 18+ messages in thread
From: Dr. David Alan Gilbert @ 2021-02-04 15:34 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, qemu-block,
	Juan Quintela, John Snow, qemu-devel, Markus Armbruster,
	Pavel Dovgalyuk, Paolo Bonzini, Max Reitz

This is (intermittently?) failing for me because of ordering issues:

--- /home/dgilbert/git/migpull/tests/qemu-iotests/tests/internal-snapshots-qapi.out
+++ internal-snapshots-qapi.out.bad
@@ -344,8 +344,8 @@
                                      "vmstate": "diskfmt0",
                                      "devices": ["diskfmt0"]}}
 {"return": {}}
+qemu-system-x86_64: Unknown savevm section or instance '0000:00:02.0/virtio-rng' 0. Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-err-stderr"}}
-qemu-system-x86_64: Unknown savevm section or instance '0000:00:02.0/virtio-rng' 0. Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-err-stderr"}}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
 {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-err-stderr"}}
Not run: 259
Failures: internal-snapshots-qapi
Failed 1 of 124 iotests

I'll disable the test for now.
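
If we wanted to keep the test enabled, one possibility (only a sketch, and
it assumes we are happy to drop the stderr text from the reference output
rather than fix the ordering itself) would be to strip the
non-deterministic stderr line before comparing against the .out file,
along the lines of:

  # hypothetical filter, not part of this series: the warning is written
  # by QEMU directly to stderr, so where it lands relative to the QMP
  # events depends on scheduling; dropping it keeps the output stable
  _filter_vmstate_load_error()
  {
      grep -v "Unknown savevm section or instance"
  }

That would lose the stderr evidence that the load failed, though, so
disabling the test until vmstate errors are reported through Error
objects is arguably the cleaner interim choice.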

Dave

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> savevm, loadvm and delvm are some of the few HMP commands that have never
> been converted to use QMP. The reasons for the lack of conversion are
> that they blocked execution of the event thread, and the semantics
> around choice of disks were ill-defined.
> 
> Despite this downside, however, libvirt and applications using libvirt
> have used these commands for as long as QMP has existed, via the
> "human-monitor-command" passthrough command. IOW, while it is clearly
> desirable to be able to fix the problems, they are not a blocker to
> all real world usage.
> 
> Meanwhile there is a need for other features which involve adding new
> parameters to the commands. This is possible with HMP passthrough, but
> it provides no reliable way for apps to introspect features, so using
> QAPI modelling is highly desirable.
> 
> This patch thus introduces new snapshot-{load,save,delete} commands to
> QMP that are intended to replace the old HMP counterparts. The new
> commands are given different names, because they will be using the new
> QEMU job framework and thus will have diverging behaviour from the HMP
> originals. It would thus be misleading to keep the same name.
> 
> While this design uses the generic job framework, the current
> implementation is still blocking. The intention is that the blocking
> problem will be fixed later. Nonetheless, applications using these new
> commands should assume that they are asynchronous and thus wait for the
> job status change event to indicate completion.
> 
> In addition to using the job framework, the new commands require the
> caller to be explicit about all the block device nodes used in the
> snapshot operations, with no built-in default heuristics in use.
> 
> Note that the existing "query-named-block-nodes" can be used to query
> what snapshots currently exist for block nodes.
> 
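(As an aside, a minimal sketch of that query, reusing the _send_qemu_cmd
helper that the new iotest below also uses, and not something this patch
itself introduces: each entry returned by query-named-block-nodes carries
an "image" description whose optional "snapshots" array lists the internal
snapshots on that node.)

  # sketch only, assuming a monitor handle set up via common.qemu:
  # dump the block-layer view of existing internal snapshots
  _send_qemu_cmd $QEMU_HANDLE \
      '{"execute": "query-named-block-nodes"}' "return"
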
> Acked-by: Markus Armbruster <armbru@redhat.com>
> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> ---
>  migration/savevm.c                            | 184 +++++++
>  qapi/job.json                                 |   9 +-
>  qapi/migration.json                           | 173 ++++++
>  .../tests/internal-snapshots-qapi             | 386 +++++++++++++
>  .../tests/internal-snapshots-qapi.out         | 520 ++++++++++++++++++
>  5 files changed, 1271 insertions(+), 1 deletion(-)
>  create mode 100755 tests/qemu-iotests/tests/internal-snapshots-qapi
>  create mode 100644 tests/qemu-iotests/tests/internal-snapshots-qapi.out
> 
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 48186918a3..6b320423c7 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -3077,3 +3077,187 @@ bool vmstate_check_only_migratable(const VMStateDescription *vmsd)
>  
>      return !(vmsd && vmsd->unmigratable);
>  }
> +
> +typedef struct SnapshotJob {
> +    Job common;
> +    char *tag;
> +    char *vmstate;
> +    strList *devices;
> +    Coroutine *co;
> +    Error **errp;
> +    bool ret;
> +} SnapshotJob;
> +
> +static void qmp_snapshot_job_free(SnapshotJob *s)
> +{
> +    g_free(s->tag);
> +    g_free(s->vmstate);
> +    qapi_free_strList(s->devices);
> +}
> +
> +
> +static void snapshot_load_job_bh(void *opaque)
> +{
> +    Job *job = opaque;
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +    int orig_vm_running;
> +
> +    job_progress_set_remaining(&s->common, 1);
> +
> +    orig_vm_running = runstate_is_running();
> +    vm_stop(RUN_STATE_RESTORE_VM);
> +
> +    s->ret = load_snapshot(s->tag, s->vmstate, true, s->devices, s->errp);
> +    if (s->ret && orig_vm_running) {
> +        vm_start();
> +    }
> +
> +    job_progress_update(&s->common, 1);
> +
> +    qmp_snapshot_job_free(s);
> +    aio_co_wake(s->co);
> +}
> +
> +static void snapshot_save_job_bh(void *opaque)
> +{
> +    Job *job = opaque;
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +
> +    job_progress_set_remaining(&s->common, 1);
> +    s->ret = save_snapshot(s->tag, false, s->vmstate,
> +                           true, s->devices, s->errp);
> +    job_progress_update(&s->common, 1);
> +
> +    qmp_snapshot_job_free(s);
> +    aio_co_wake(s->co);
> +}
> +
> +static void snapshot_delete_job_bh(void *opaque)
> +{
> +    Job *job = opaque;
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +
> +    job_progress_set_remaining(&s->common, 1);
> +    s->ret = delete_snapshot(s->tag, true, s->devices, s->errp);
> +    job_progress_update(&s->common, 1);
> +
> +    qmp_snapshot_job_free(s);
> +    aio_co_wake(s->co);
> +}
> +
> +static int coroutine_fn snapshot_save_job_run(Job *job, Error **errp)
> +{
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +    s->errp = errp;
> +    s->co = qemu_coroutine_self();
> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
> +                            snapshot_save_job_bh, job);
> +    qemu_coroutine_yield();
> +    return s->ret ? 0 : -1;
> +}
> +
> +static int coroutine_fn snapshot_load_job_run(Job *job, Error **errp)
> +{
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +    s->errp = errp;
> +    s->co = qemu_coroutine_self();
> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
> +                            snapshot_load_job_bh, job);
> +    qemu_coroutine_yield();
> +    return s->ret ? 0 : -1;
> +}
> +
> +static int coroutine_fn snapshot_delete_job_run(Job *job, Error **errp)
> +{
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +    s->errp = errp;
> +    s->co = qemu_coroutine_self();
> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
> +                            snapshot_delete_job_bh, job);
> +    qemu_coroutine_yield();
> +    return s->ret ? 0 : -1;
> +}
> +
> +
> +static const JobDriver snapshot_load_job_driver = {
> +    .instance_size = sizeof(SnapshotJob),
> +    .job_type      = JOB_TYPE_SNAPSHOT_LOAD,
> +    .run           = snapshot_load_job_run,
> +};
> +
> +static const JobDriver snapshot_save_job_driver = {
> +    .instance_size = sizeof(SnapshotJob),
> +    .job_type      = JOB_TYPE_SNAPSHOT_SAVE,
> +    .run           = snapshot_save_job_run,
> +};
> +
> +static const JobDriver snapshot_delete_job_driver = {
> +    .instance_size = sizeof(SnapshotJob),
> +    .job_type      = JOB_TYPE_SNAPSHOT_DELETE,
> +    .run           = snapshot_delete_job_run,
> +};
> +
> +
> +void qmp_snapshot_save(const char *job_id,
> +                       const char *tag,
> +                       const char *vmstate,
> +                       strList *devices,
> +                       Error **errp)
> +{
> +    SnapshotJob *s;
> +
> +    s = job_create(job_id, &snapshot_save_job_driver, NULL,
> +                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
> +                   NULL, NULL, errp);
> +    if (!s) {
> +        return;
> +    }
> +
> +    s->tag = g_strdup(tag);
> +    s->vmstate = g_strdup(vmstate);
> +    s->devices = QAPI_CLONE(strList, devices);
> +
> +    job_start(&s->common);
> +}
> +
> +void qmp_snapshot_load(const char *job_id,
> +                       const char *tag,
> +                       const char *vmstate,
> +                       strList *devices,
> +                       Error **errp)
> +{
> +    SnapshotJob *s;
> +
> +    s = job_create(job_id, &snapshot_load_job_driver, NULL,
> +                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
> +                   NULL, NULL, errp);
> +    if (!s) {
> +        return;
> +    }
> +
> +    s->tag = g_strdup(tag);
> +    s->vmstate = g_strdup(vmstate);
> +    s->devices = QAPI_CLONE(strList, devices);
> +
> +    job_start(&s->common);
> +}
> +
> +void qmp_snapshot_delete(const char *job_id,
> +                         const char *tag,
> +                         strList *devices,
> +                         Error **errp)
> +{
> +    SnapshotJob *s;
> +
> +    s = job_create(job_id, &snapshot_delete_job_driver, NULL,
> +                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
> +                   NULL, NULL, errp);
> +    if (!s) {
> +        return;
> +    }
> +
> +    s->tag = g_strdup(tag);
> +    s->devices = QAPI_CLONE(strList, devices);
> +
> +    job_start(&s->common);
> +}
> diff --git a/qapi/job.json b/qapi/job.json
> index 280c2f76f1..1a6ef03451 100644
> --- a/qapi/job.json
> +++ b/qapi/job.json
> @@ -22,10 +22,17 @@
>  #
>  # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1)
>  #
> +# @snapshot-load: snapshot load job type, see "snapshot-load" (since 6.0)
> +#
> +# @snapshot-save: snapshot save job type, see "snapshot-save" (since 6.0)
> +#
> +# @snapshot-delete: snapshot delete job type, see "snapshot-delete" (since 6.0)
> +#
>  # Since: 1.7
>  ##
>  { 'enum': 'JobType',
> -  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend'] }
> +  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend',
> +           'snapshot-load', 'snapshot-save', 'snapshot-delete'] }
>  
>  ##
>  # @JobStatus:
> diff --git a/qapi/migration.json b/qapi/migration.json
> index d1d9632c2a..5ca0ff9bed 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -1843,3 +1843,176 @@
>  # Since: 5.2
>  ##
>  { 'command': 'query-dirty-rate', 'returns': 'DirtyRateInfo' }
> +
> +##
> +# @snapshot-save:
> +#
> +# Save a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to create
> +# @vmstate: block device node name to save vmstate to
> +# @devices: list of block device node names to save a snapshot to
> +#
> +# Applications should not assume that the snapshot save is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Note that execution of the guest CPUs may be stopped during the
> +# time it takes to save the snapshot. A future version of QEMU
> +# may ensure CPUs are executing continuously.
> +#
> +# It is strongly recommended that @devices contain all writable
> +# block device nodes if a consistent snapshot is required.
> +#
> +# If @tag already exists, an error will be reported
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-save",
> +#      "data": {
> +#         "job-id": "snapsave0",
> +#         "tag": "my-snap",
> +#         "vmstate": "disk0",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "created", "id": "snapsave0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "running", "id": "snapsave0"}}
> +# <- {"event": "STOP"}
> +# <- {"event": "RESUME"}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "waiting", "id": "snapsave0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "pending", "id": "snapsave0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "concluded", "id": "snapsave0"}}
> +# -> {"execute": "query-jobs"}
> +# <- {"return": [{"current-progress": 1,
> +#                 "status": "concluded",
> +#                 "total-progress": 1,
> +#                 "type": "snapshot-save",
> +#                 "id": "snapsave0"}]}
> +#
> +# Since: 6.0
> +##
> +{ 'command': 'snapshot-save',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'vmstate': 'str',
> +            'devices': ['str'] } }
> +
> +##
> +# @snapshot-load:
> +#
> +# Load a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to load.
> +# @vmstate: block device node name to load vmstate from
> +# @devices: list of block device node names to load a snapshot from
> +#
> +# Applications should not assume that the snapshot load is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Note that execution of the guest CPUs will be stopped during the
> +# time it takes to load the snapshot.
> +#
> +# It is strongly recommended that @devices contain all writable
> +# block device nodes that can have changed since the original
> +# @snapshot-save command execution.
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-load",
> +#      "data": {
> +#         "job-id": "snapload0",
> +#         "tag": "my-snap",
> +#         "vmstate": "disk0",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "created", "id": "snapload0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "running", "id": "snapload0"}}
> +# <- {"event": "STOP"}
> +# <- {"event": "RESUME"}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "waiting", "id": "snapload0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "pending", "id": "snapload0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "concluded", "id": "snapload0"}}
> +# -> {"execute": "query-jobs"}
> +# <- {"return": [{"current-progress": 1,
> +#                 "status": "concluded",
> +#                 "total-progress": 1,
> +#                 "type": "snapshot-load",
> +#                 "id": "snapload0"}]}
> +#
> +# Since: 6.0
> +##
> +{ 'command': 'snapshot-load',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'vmstate': 'str',
> +            'devices': ['str'] } }
> +
> +##
> +# @snapshot-delete:
> +#
> +# Delete a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to delete.
> +# @devices: list of block device node names to delete a snapshot from
> +#
> +# Applications should not assume that the snapshot delete is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-delete",
> +#      "data": {
> +#         "job-id": "snapdelete0",
> +#         "tag": "my-snap",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "created", "id": "snapdelete0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "running", "id": "snapdelete0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "waiting", "id": "snapdelete0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "pending", "id": "snapdelete0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "concluded", "id": "snapdelete0"}}
> +# -> {"execute": "query-jobs"}
> +# <- {"return": [{"current-progress": 1,
> +#                 "status": "concluded",
> +#                 "total-progress": 1,
> +#                 "type": "snapshot-delete",
> +#                 "id": "snapdelete0"}]}
> +#
> +# Since: 6.0
> +##
> +{ 'command': 'snapshot-delete',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'devices': ['str'] } }
> diff --git a/tests/qemu-iotests/tests/internal-snapshots-qapi b/tests/qemu-iotests/tests/internal-snapshots-qapi
> new file mode 100755
> index 0000000000..6467eaaac0
> --- /dev/null
> +++ b/tests/qemu-iotests/tests/internal-snapshots-qapi
> @@ -0,0 +1,386 @@
> +#!/usr/bin/env bash
> +# group: rw auto quick snapshot
> +#
> +# Test which nodes are involved in internal snapshots
> +#
> +# Copyright (C) 2020-2021 Red Hat, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 2 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +#
> +
> +# creator
> +owner=berrange@redhat.com
> +
> +seq=`basename $0`
> +echo "QA output created by $seq"
> +
> +status=1        # failure is the default!
> +
> +_cleanup()
> +{
> +    _cleanup_qemu
> +    _cleanup_test_img
> +    TEST_IMG="$TEST_IMG.alt1" _cleanup_test_img
> +    TEST_IMG="$TEST_IMG.alt2" _cleanup_test_img
> +    rm -f "$SOCK_DIR/nbd"
> +}
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +# get standard environment, filters and checks
> +. ../common.rc
> +. ../common.filter
> +. ../common.qemu
> +
> +_supported_fmt qcow2
> +_supported_proto file
> +_supported_os Linux
> +_require_drivers copy-on-read
> +
> +# Internal snapshots are (currently) impossible with refcount_bits=1,
> +# and generally impossible with external data files
> +_unsupported_imgopts 'refcount_bits=1[^0-9]' data_file
> +
> +_require_devices virtio-blk
> +
> +
> +size=128M
> +
> +if [ -n "$BACKING_FILE" ]; then
> +    _make_test_img -b "$BACKING_FILE" -F $IMGFMT $size
> +else
> +    _make_test_img $size
> +fi
> +TEST_IMG="$TEST_IMG.alt1" _make_test_img $size
> +IMGOPTS= IMGFMT=raw TEST_IMG="$TEST_IMG.alt2" _make_test_img $size
> +
> +export capture_events="JOB_STATUS_CHANGE STOP RESUME"
> +
> +wait_job()
> +{
> +    local job=$1
> +    shift
> +
> +    # All jobs start with two events...
> +    #
> +    # created
> +    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
> +    # running
> +    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
> +
> +    # Next events vary depending on job type and
> +    # whether it succeeds or not.
> +    for evname in $@
> +    do
> +        _wait_event $QEMU_HANDLE $evname
> +    done
> +
> +    # All jobs finish off with two more events...
> +    # concluded
> +    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"query-jobs\"}" "return"
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"job-dismiss\", \"arguments\": {\"id\": \"$job\"}}" "return"
> +    # null
> +    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
> +}
> +
> +run_save()
> +{
> +    local job=$1
> +    local vmstate=$2
> +    local devices=$3
> +    local fail=$4
> +
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-save\",
> +                                  \"arguments\": {
> +                                     \"job-id\": \"$job\",
> +                                     \"tag\": \"snap0\",
> +                                     \"vmstate\": \"$vmstate\",
> +                                     \"devices\": $devices}}" "return"
> +
> +    if [ $fail = 0 ]; then
> +        # job status: waiting, pending
> +        wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
> +    else
> +        # job status: aborting
> +        wait_job $job "JOB_STATUS_CHANGE"
> +    fi
> +}
> +
> +run_load()
> +{
> +    local job=$1
> +    local vmstate=$2
> +    local devices=$3
> +    local fail=$4
> +
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-load\",
> +                                  \"arguments\": {
> +                                     \"job-id\": \"$job\",
> +                                     \"tag\": \"snap0\",
> +                                     \"vmstate\": \"$vmstate\",
> +                                     \"devices\": $devices}}" "return"
> +    if [ $fail = 0 ]; then
> +        # job status: waiting, pending
> +        wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
> +    else
> +        # job status: aborting
> +        wait_job $job "STOP" "JOB_STATUS_CHANGE"
> +    fi
> +}
> +
> +run_delete()
> +{
> +    local job=$1
> +    local devices=$2
> +    local fail=$3
> +
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-delete\",
> +                                  \"arguments\": {
> +                                     \"job-id\": \"$job\",
> +                                     \"tag\": \"snap0\",
> +                                     \"devices\": $devices}}" "return"
> +    if [ $fail = 0 ]; then
> +        # job status: waiting, pending
> +        wait_job $job "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
> +    else
> +        # job status: aborting
> +        wait_job $job "JOB_STATUS_CHANGE"
> +    fi
> +}
> +
> +start_qemu()
> +{
> +    keep_stderr=y
> +    _launch_qemu -nodefaults -nographic "$@"
> +
> +    _send_qemu_cmd $QEMU_HANDLE '{"execute": "qmp_capabilities"}' 'return'
> +}
> +
> +stop_qemu()
> +{
> +    _send_qemu_cmd $QEMU_HANDLE '{"execute": "quit"}' 'return'
> +
> +    wait=1 _cleanup_qemu
> +}
> +
> +
> +echo
> +echo "=====  Snapshot single qcow2 image ====="
> +echo
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_save "save-simple" "diskfmt0" "[\"diskfmt0\"]" 0
> +run_load "load-simple" "diskfmt0" "[\"diskfmt0\"]" 0
> +run_delete "delete-simple" "[\"diskfmt0\"]" 0
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot no image ====="
> +echo
> +
> +# When snapshotting we need to pass at least one writable disk
> +# otherwise there's no work to do
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_save "save-no-image" "diskfmt0" "[]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot missing image ====="
> +echo
> +
> +# The block node names we pass need to actually exist
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_save "save-missing-image" "diskfmt1729" "[\"diskfmt1729\"]" 1
> +stop_qemu
> +
> +echo
> +echo "=====  Snapshot vmstate not in devices list ====="
> +echo
> +
> +# The node name referred to for vmstate must be one of the nodes
> +# being included in the snapshot, otherwise the vmstate that is
> +# captured is liable to be overwritten making subsequent load
> +# impossible
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}"
> +run_save "save-excluded-vmstate" "diskfmt0" "[\"diskfmt1\"]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot protocol instead of format ====="
> +echo
> +
> +# The snapshot has to be done against the qcow2 format layer
> +# not the underlying file protocol layer
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_save "save-proto-not-fmt" "disk0" "[\"disk0\"]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot dual qcow2 image ====="
> +echo
> +
> +# We can snapshot multiple qcow2 disks at the same time
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}"
> +run_save "save-dual-image" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +run_load "load-dual-image" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +run_delete "delete-dual-image" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot error with raw image ====="
> +echo
> +
> +# If we're snapshotting multiple disks, all must be capable
> +# of supporting snapshots. A raw disk in the list must cause
> +# an error.
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \
> +    -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}"
> +run_save "save-raw-fmt" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\", \"diskfmt2\"]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot with raw image excluded ====="
> +echo
> +
> +# If we're snapshotting multiple disks, all must be capable
> +# of supporting snapshots. A writable raw disk can be excluded
> +# from the snapshot, though it means its data won't be restored
> +# by a later snapshot load operation.
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \
> +    -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}"
> +run_save "save-skip-raw" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +run_load "load-skip-raw" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +run_delete "delete-skip-raw" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +stop_qemu
> +
> +echo
> +echo "=====  Snapshot bad error reporting to stderr ====="
> +echo
> +
> +# This demonstrates that we're not capturing vmstate loading failures
> +# into QMP errors; they end up on stderr instead. vmstate needs
> +# to report errors via Error object but that is a major piece of work
> +# for the future. This test case's expected output log will need
> +# adjusting when that is done.
> +
> +start_qemu \
> +    -device virtio-rng \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +
> +run_save "save-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 0
> +stop_qemu
> +
> +# leave off virtio-rng to provoke vmstate failure
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +
> +run_load "load-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 1
> +run_delete "delete-err-stderr" "[\"diskfmt0\"]" 0
> +
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot reuse same tag ====="
> +echo
> +
> +# Validates that we get an error when reusing a snapshot tag that
> +# already exists
> +
> +start_qemu \
> +    -device virtio-rng \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +
> +run_save "save-err-stderr-initial" "diskfmt0" "[\"diskfmt0\"]" 0
> +run_save "save-err-stderr-repeat1" "diskfmt0" "[\"diskfmt0\"]" 1
> +run_delete "delete-err-stderr" "[\"diskfmt0\"]" 0
> +run_save "save-err-stderr-repeat2" "diskfmt0" "[\"diskfmt0\"]" 0
> +run_delete "delete-err-stderr-repeat2" "[\"diskfmt0\"]" 0
> +
> +stop_qemu
> +
> +echo
> +echo "=====  Snapshot load does not exist ====="
> +echo
> +
> +# Validates that we get an error when loading a snapshot that does
> +# not exist
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_load "load-missing-snapshot" "diskfmt0" "[\"diskfmt0\"]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot delete does not exist ====="
> +echo
> +
> +# Validates that we don't get an error when deleting a snapshot that
> +# does not exist
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_delete "delete-missing-snapshot" "[\"diskfmt0\"]" 0
> +stop_qemu
> +
> +
> +# success, all done
> +echo "*** done"
> +rm -f $seq.full
> +status=0
> diff --git a/tests/qemu-iotests/tests/internal-snapshots-qapi.out b/tests/qemu-iotests/tests/internal-snapshots-qapi.out
> new file mode 100644
> index 0000000000..26ff4a838c
> --- /dev/null
> +++ b/tests/qemu-iotests/tests/internal-snapshots-qapi.out
> @@ -0,0 +1,520 @@
> +QA output created by internal-snapshots-qapi
> +Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
> +Formatting 'TEST_DIR/t.IMGFMT.alt1', fmt=IMGFMT size=134217728
> +Formatting 'TEST_DIR/t.qcow2.alt2', fmt=IMGFMT size=134217728
> +
> +=====  Snapshot single qcow2 image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-simple",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-simple"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-simple"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-simple"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-simple"}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-simple",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-simple"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-simple"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-simple"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-simple"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-simple",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-simple"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-simple"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-simple"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-simple"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot no image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-no-image",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": []}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-no-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-no-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-no-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-no-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-no-image", "error": "At least one device is required for snapshot"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-no-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-no-image"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot missing image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-missing-image",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt1729",
> +                                     "devices": ["diskfmt1729"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-missing-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-missing-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-missing-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-missing-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-missing-image", "error": "No block device node 'diskfmt1729'"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-missing-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-missing-image"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot vmstate not in devices list =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-excluded-vmstate",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-excluded-vmstate"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-excluded-vmstate"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-excluded-vmstate"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-excluded-vmstate"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-excluded-vmstate", "error": "vmstate block device 'diskfmt0' does not exist"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-excluded-vmstate"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-excluded-vmstate"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot protocol instead of format =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-proto-not-fmt",
> +                                     "tag": "snap0",
> +                                     "vmstate": "disk0",
> +                                     "devices": ["disk0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-proto-not-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-proto-not-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-proto-not-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-proto-not-fmt"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-proto-not-fmt", "error": "Device 'disk0' is writable but does not support snapshots"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-proto-not-fmt"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-proto-not-fmt"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot dual qcow2 image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-dual-image",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-dual-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-dual-image"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-dual-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-dual-image"}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-dual-image",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-dual-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-dual-image"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-dual-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-dual-image"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-dual-image",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-dual-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-dual-image"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-dual-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-dual-image"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot error with raw image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-raw-fmt",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1", "diskfmt2"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-raw-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-raw-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-raw-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-raw-fmt"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-raw-fmt", "error": "Device 'diskfmt2' is writable but does not support snapshots"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-raw-fmt"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-raw-fmt"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot with raw image excluded =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-skip-raw",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-skip-raw"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-skip-raw"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-skip-raw"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-skip-raw"}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-skip-raw",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-skip-raw"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-skip-raw"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-skip-raw"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-skip-raw"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-skip-raw",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-skip-raw"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-skip-raw"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-skip-raw"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-skip-raw"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot bad error reporting to stderr =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-err-stderr",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-err-stderr",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-err-stderr"}}
> +qemu-system-x86_64: Unknown savevm section or instance '0000:00:02.0/virtio-rng' 0. Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-err-stderr"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-err-stderr", "error": "Error -22 while loading VM state"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-err-stderr"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-err-stderr"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-err-stderr",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot reuse same tag =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-err-stderr-initial",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-initial"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-initial"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr-initial"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr-initial"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-initial"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-initial"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-initial"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-initial"}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-err-stderr-repeat1",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-repeat1"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-repeat1"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-err-stderr-repeat1"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-repeat1"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-repeat1", "error": "Snapshot 'snap0' already exists in one or more devices"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-repeat1"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-repeat1"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-err-stderr",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr"}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-err-stderr-repeat2",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-repeat2"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-repeat2"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-repeat2"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-repeat2"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-err-stderr-repeat2",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr-repeat2"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr-repeat2"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr-repeat2"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr-repeat2"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot load does not exist =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-missing-snapshot",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-missing-snapshot"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-missing-snapshot", "error": "Snapshot 'snap0' does not exist in one or more devices"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-missing-snapshot"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-missing-snapshot"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot delete does not exist =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-missing-snapshot",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-missing-snapshot"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-missing-snapshot"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-missing-snapshot"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-missing-snapshot"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +*** done
> -- 
> 2.29.2
> 
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: [PATCH v11 12/12] migration: introduce snapshot-{save,load,delete} QMP commands
  2021-02-04 15:34   ` Dr. David Alan Gilbert
@ 2021-02-04 15:38     ` Daniel P. Berrangé
  0 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2021-02-04 15:38 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, qemu-block,
	Juan Quintela, John Snow, qemu-devel, Markus Armbruster,
	Pavel Dovgalyuk, Paolo Bonzini, Max Reitz

On Thu, Feb 04, 2021 at 03:34:33PM +0000, Dr. David Alan Gilbert wrote:
> This is (intermittently?) failing for me because of ordering issues:
> 
> --- /home/dgilbert/git/migpull/tests/qemu-iotests/tests/internal-snapshots-qapi.out
> +++ internal-snapshots-qapi.out.bad
> @@ -344,8 +344,8 @@
>                                       "vmstate": "diskfmt0",
>                                       "devices": ["diskfmt0"]}}
>  {"return": {}}
> +qemu-system-x86_64: Unknown savevm section or instance '0000:00:02.0/virtio-rng' 0. Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
>  {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-err-stderr"}}
> -qemu-system-x86_64: Unknown savevm section or instance '0000:00:02.0/virtio-rng' 0. Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
>  {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-err-stderr"}}
>  {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
>  {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-err-stderr"}}
> Not run: 259
> Failures: internal-snapshots-qapi
> Failed 1 of 124 iotests
> 
> I'll disable the test for now.

Ok. I'm working on a patch series to make migration code use "Error **errp"
that ought to fix this properly.
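
To sketch the shape of that conversion (illustrative only; the helper
name below is hypothetical, not code from the series): the loadvm paths
would grow an "Error **errp" parameter and call error_setg() instead of
printing to stderr, so the real cause ends up in the job's "error"
field in query-jobs rather than the generic
"Error -22 while loading VM state". Roughly:

  #include "qemu/osdep.h"
  #include "qapi/error.h"

  /* Hypothetical helper standing in for the real loadvm section handling */
  static int loadvm_handle_unknown_section(const char *idstr,
                                           unsigned int instance_id,
                                           Error **errp)
  {
      /*
       * Before: error_report(...) sent the message to stderr and the
       * caller returned a bare -EINVAL, losing the detail.
       * After: the message travels up with the Error object, and the
       * snapshot-load job can surface it via query-jobs.
       */
      error_setg(errp,
                 "Unknown savevm section or instance '%s' %u. Make sure "
                 "that your current VM setup matches your saved VM setup, "
                 "including any hotplugged devices",
                 idstr, instance_id);
      return -EINVAL;
  }

Once nothing is written to stderr on that path, the ordering race you
hit between the stderr line and the JOB_STATUS_CHANGE event should go
away as well.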


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v11 12/12] migration: introduce snapshot-{save,load,delete} QMP commands
  2021-02-04 12:48 ` [PATCH v11 12/12] migration: introduce snapshot-{save,load,delete} QMP commands Daniel P. Berrangé
  2021-02-04 15:34   ` Dr. David Alan Gilbert
@ 2021-02-04 15:40   ` Eric Blake
  2021-02-16 18:58   ` John Snow
  2 siblings, 0 replies; 18+ messages in thread
From: Eric Blake @ 2021-02-04 15:40 UTC (permalink / raw)
  To: Daniel P. Berrangé, qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, qemu-block,
	Juan Quintela, Markus Armbruster, Dr. David Alan Gilbert,
	Pavel Dovgalyuk, Paolo Bonzini, Max Reitz, John Snow

On 2/4/21 6:48 AM, Daniel P. Berrangé wrote:
> savevm, loadvm and delvm are some of the few HMP commands that have never
> been converted to use QMP. The reasons for the lack of conversion are
> that they blocked execution of the event thread, and the semantics
> around choice of disks were ill-defined.
> 

> 
> Note that the existing "query-named-block-nodes" can be used to query
> what snapshots currently exist for block nodes.
> 
> Acked-by: Markus Armbruster <armbru@redhat.com>
> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> ---
>  migration/savevm.c                            | 184 +++++++
>  qapi/job.json                                 |   9 +-
>  qapi/migration.json                           | 173 ++++++
>  .../tests/internal-snapshots-qapi             | 386 +++++++++++++
>  .../tests/internal-snapshots-qapi.out         | 520 ++++++++++++++++++
>  5 files changed, 1271 insertions(+), 1 deletion(-)
>  create mode 100755 tests/qemu-iotests/tests/internal-snapshots-qapi
>  create mode 100644 tests/qemu-iotests/tests/internal-snapshots-qapi.out

I compared v10 and v11, and see that you addressed my concerns.

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




* Re: [PATCH v11 12/12] migration: introduce snapshot-{save,load,delete} QMP commands
  2021-02-04 12:48 ` [PATCH v11 12/12] migration: introduce snapshot-{save,load,delete} QMP commands Daniel P. Berrangé
  2021-02-04 15:34   ` Dr. David Alan Gilbert
  2021-02-04 15:40   ` [PATCH v11 12/12] migration: introduce snapshot-{save,load,delete} " Eric Blake
@ 2021-02-16 18:58   ` John Snow
  2 siblings, 0 replies; 18+ messages in thread
From: John Snow @ 2021-02-16 18:58 UTC (permalink / raw)
  To: Daniel P. Berrangé, qemu-devel
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, qemu-block,
	Juan Quintela, Markus Armbruster, Dr. David Alan Gilbert,
	Pavel Dovgalyuk, Paolo Bonzini, Max Reitz

On 2/4/21 7:48 AM, Daniel P. Berrangé wrote:
> savevm, loadvm and delvm are some of the few HMP commands that have never
> been converted to use QMP. The reasons for the lack of conversion are
> that they blocked execution of the event thread, and the semantics
> around choice of disks were ill-defined.
> 
> Despite this downside, however, libvirt and applications using libvirt
> have used these commands for as long as QMP has existed, via the
> "human-monitor-command" passthrough command. IOW, while it is clearly
> desirable to be able to fix the problems, they are not a blocker to
> all real world usage.
> 
> Meanwhile there is a need for other features which involve adding new
> parameters to the commands. This is possible with HMP passthrough, but
> it provides no reliable way for apps to introspect features, so using
> QAPI modelling is highly desirable.
> 
> This patch thus introduces new snapshot-{load,save,delete} commands to
> QMP that are intended to replace the old HMP counterparts. The new
> commands are given different names, because they will be using the new
> QEMU job framework and thus will have diverging behaviour from the HMP
> originals. It would thus be misleading to keep the same name.
> 
> While this design uses the generic job framework, the current impl is
> still blocking. The intention that the blocking problem is fixed later.
> None the less applications using these new commands should assume that
> they are asynchronous and thus wait for the job status change event to
> indicate completion.
> 
> In addition to using the job framework, the new commands require the
> caller to be explicit about all the block device nodes used in the
> snapshot operations, with no built-in default heuristics in use.
> 
> Note that the existing "query-named-block-nodes" can be used to query
> what snapshots currently exist for block nodes.
> 
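(Purely as an illustration of that introspection, not taken from this
patch: the node name, filename and numbers below are made up and the
field set is abridged from memory. Each entry returned by
"query-named-block-nodes" carries an "image" description whose optional
"snapshots" list shows the internal snapshots present on that node:

  -> {"execute": "query-named-block-nodes"}
  <- {"return": [{"node-name": "diskfmt0",
                  "image": {"filename": "disk0.qcow2",
                            "format": "qcow2",
                            "snapshots": [{"id": "1",
                                           "name": "snap0",
                                           "vm-state-size": 123456,
                                           "date-sec": 1612442880,
                                           "date-nsec": 0,
                                           "vm-clock-sec": 20,
                                           "vm-clock-nsec": 0}]},
                  ...}]}

so a management application can check which nodes already contain a
given tag before issuing snapshot-save or snapshot-delete.)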

I wasn't sure how you were actually tackling this, but the approach laid 
out in the commit message here looks like a very good idea that doesn't 
require the full resolution of the savevm problem.

Acked-by: John Snow <jsnow@redhat.com>

> Acked-by: Markus Armbruster <armbru@redhat.com>
> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> ---
>   migration/savevm.c                            | 184 +++++++
>   qapi/job.json                                 |   9 +-
>   qapi/migration.json                           | 173 ++++++
>   .../tests/internal-snapshots-qapi             | 386 +++++++++++++
>   .../tests/internal-snapshots-qapi.out         | 520 ++++++++++++++++++
>   5 files changed, 1271 insertions(+), 1 deletion(-)
>   create mode 100755 tests/qemu-iotests/tests/internal-snapshots-qapi
>   create mode 100644 tests/qemu-iotests/tests/internal-snapshots-qapi.out
> 
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 48186918a3..6b320423c7 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -3077,3 +3077,187 @@ bool vmstate_check_only_migratable(const VMStateDescription *vmsd)
>   
>       return !(vmsd && vmsd->unmigratable);
>   }
> +
> +typedef struct SnapshotJob {
> +    Job common;
> +    char *tag;
> +    char *vmstate;
> +    strList *devices;
> +    Coroutine *co;
> +    Error **errp;
> +    bool ret;
> +} SnapshotJob;
> +
> +static void qmp_snapshot_job_free(SnapshotJob *s)
> +{
> +    g_free(s->tag);
> +    g_free(s->vmstate);
> +    qapi_free_strList(s->devices);
> +}
> +
> +
> +static void snapshot_load_job_bh(void *opaque)
> +{
> +    Job *job = opaque;
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +    int orig_vm_running;
> +
> +    job_progress_set_remaining(&s->common, 1);
> +
> +    orig_vm_running = runstate_is_running();
> +    vm_stop(RUN_STATE_RESTORE_VM);
> +
> +    s->ret = load_snapshot(s->tag, s->vmstate, true, s->devices, s->errp);
> +    if (s->ret && orig_vm_running) {
> +        vm_start();
> +    }
> +
> +    job_progress_update(&s->common, 1);
> +
> +    qmp_snapshot_job_free(s);
> +    aio_co_wake(s->co);
> +}
> +
> +static void snapshot_save_job_bh(void *opaque)
> +{
> +    Job *job = opaque;
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +
> +    job_progress_set_remaining(&s->common, 1);
> +    s->ret = save_snapshot(s->tag, false, s->vmstate,
> +                           true, s->devices, s->errp);
> +    job_progress_update(&s->common, 1);
> +
> +    qmp_snapshot_job_free(s);
> +    aio_co_wake(s->co);
> +}
> +
> +static void snapshot_delete_job_bh(void *opaque)
> +{
> +    Job *job = opaque;
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +
> +    job_progress_set_remaining(&s->common, 1);
> +    s->ret = delete_snapshot(s->tag, true, s->devices, s->errp);
> +    job_progress_update(&s->common, 1);
> +
> +    qmp_snapshot_job_free(s);
> +    aio_co_wake(s->co);
> +}
> +
> +static int coroutine_fn snapshot_save_job_run(Job *job, Error **errp)
> +{
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +    s->errp = errp;
> +    s->co = qemu_coroutine_self();
> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
> +                            snapshot_save_job_bh, job);
> +    qemu_coroutine_yield();
> +    return s->ret ? 0 : -1;
> +}
> +
> +static int coroutine_fn snapshot_load_job_run(Job *job, Error **errp)
> +{
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +    s->errp = errp;
> +    s->co = qemu_coroutine_self();
> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
> +                            snapshot_load_job_bh, job);
> +    qemu_coroutine_yield();
> +    return s->ret ? 0 : -1;
> +}
> +
> +static int coroutine_fn snapshot_delete_job_run(Job *job, Error **errp)
> +{
> +    SnapshotJob *s = container_of(job, SnapshotJob, common);
> +    s->errp = errp;
> +    s->co = qemu_coroutine_self();
> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
> +                            snapshot_delete_job_bh, job);
> +    qemu_coroutine_yield();
> +    return s->ret ? 0 : -1;
> +}
> +
> +
> +static const JobDriver snapshot_load_job_driver = {
> +    .instance_size = sizeof(SnapshotJob),
> +    .job_type      = JOB_TYPE_SNAPSHOT_LOAD,
> +    .run           = snapshot_load_job_run,
> +};
> +
> +static const JobDriver snapshot_save_job_driver = {
> +    .instance_size = sizeof(SnapshotJob),
> +    .job_type      = JOB_TYPE_SNAPSHOT_SAVE,
> +    .run           = snapshot_save_job_run,
> +};
> +
> +static const JobDriver snapshot_delete_job_driver = {
> +    .instance_size = sizeof(SnapshotJob),
> +    .job_type      = JOB_TYPE_SNAPSHOT_DELETE,
> +    .run           = snapshot_delete_job_run,
> +};
> +
> +
> +void qmp_snapshot_save(const char *job_id,
> +                       const char *tag,
> +                       const char *vmstate,
> +                       strList *devices,
> +                       Error **errp)
> +{
> +    SnapshotJob *s;
> +
> +    s = job_create(job_id, &snapshot_save_job_driver, NULL,
> +                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
> +                   NULL, NULL, errp);
> +    if (!s) {
> +        return;
> +    }
> +
> +    s->tag = g_strdup(tag);
> +    s->vmstate = g_strdup(vmstate);
> +    s->devices = QAPI_CLONE(strList, devices);
> +
> +    job_start(&s->common);
> +}
> +
> +void qmp_snapshot_load(const char *job_id,
> +                       const char *tag,
> +                       const char *vmstate,
> +                       strList *devices,
> +                       Error **errp)
> +{
> +    SnapshotJob *s;
> +
> +    s = job_create(job_id, &snapshot_load_job_driver, NULL,
> +                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
> +                   NULL, NULL, errp);
> +    if (!s) {
> +        return;
> +    }
> +
> +    s->tag = g_strdup(tag);
> +    s->vmstate = g_strdup(vmstate);
> +    s->devices = QAPI_CLONE(strList, devices);
> +
> +    job_start(&s->common);
> +}
> +
> +void qmp_snapshot_delete(const char *job_id,
> +                         const char *tag,
> +                         strList *devices,
> +                         Error **errp)
> +{
> +    SnapshotJob *s;
> +
> +    s = job_create(job_id, &snapshot_delete_job_driver, NULL,
> +                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
> +                   NULL, NULL, errp);
> +    if (!s) {
> +        return;
> +    }
> +
> +    s->tag = g_strdup(tag);
> +    s->devices = QAPI_CLONE(strList, devices);
> +
> +    job_start(&s->common);
> +}
> diff --git a/qapi/job.json b/qapi/job.json
> index 280c2f76f1..1a6ef03451 100644
> --- a/qapi/job.json
> +++ b/qapi/job.json
> @@ -22,10 +22,17 @@
>   #
>   # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1)
>   #
> +# @snapshot-load: snapshot load job type, see "snapshot-load" (since 6.0)
> +#
> +# @snapshot-save: snapshot save job type, see "snapshot-save" (since 6.0)
> +#
> +# @snapshot-delete: snapshot delete job type, see "snapshot-delete" (since 6.0)
> +#
>   # Since: 1.7
>   ##
>   { 'enum': 'JobType',
> -  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend'] }
> +  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend',
> +           'snapshot-load', 'snapshot-save', 'snapshot-delete'] }
>   
>   ##
>   # @JobStatus:
> diff --git a/qapi/migration.json b/qapi/migration.json
> index d1d9632c2a..5ca0ff9bed 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -1843,3 +1843,176 @@
>   # Since: 5.2
>   ##
>   { 'command': 'query-dirty-rate', 'returns': 'DirtyRateInfo' }
> +
> +##
> +# @snapshot-save:
> +#
> +# Save a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to create
> +# @vmstate: block device node name to save vmstate to
> +# @devices: list of block device node names to save a snapshot to
> +#
> +# Applications should not assume that the snapshot save is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Note that execution of the guest CPUs may be stopped during the
> +# time it takes to save the snapshot. A future version of QEMU
> +# may ensure CPUs are executing continuously.
> +#
> +# It is strongly recommended that @devices contain all writable
> +# block device nodes if a consistent snapshot is required.
> +#
> +# If @tag already exists, an error will be reported
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-save",
> +#      "arguments": {
> +#         "job-id": "snapsave0",
> +#         "tag": "my-snap",
> +#         "vmstate": "disk0",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "created", "id": "snapsave0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "running", "id": "snapsave0"}}
> +# <- {"event": "STOP"}
> +# <- {"event": "RESUME"}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "waiting", "id": "snapsave0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "pending", "id": "snapsave0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "concluded", "id": "snapsave0"}}
> +# -> {"execute": "query-jobs"}
> +# <- {"return": [{"current-progress": 1,
> +#                 "status": "concluded",
> +#                 "total-progress": 1,
> +#                 "type": "snapshot-save",
> +#                 "id": "snapsave0"}]}
> +#
> +# Since: 6.0
> +##
> +{ 'command': 'snapshot-save',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'vmstate': 'str',
> +            'devices': ['str'] } }
> +
> +##
> +# @snapshot-load:
> +#
> +# Load a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to load.
> +# @vmstate: block device node name to load vmstate from
> +# @devices: list of block device node names to load a snapshot from
> +#
> +# Applications should not assume that the snapshot load is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Note that execution of the guest CPUs will be stopped during the
> +# time it takes to load the snapshot.
> +#
> +# It is strongly recommended that @devices contain all writable
> +# block device nodes that can have changed since the original
> +# @snapshot-save command execution.
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-load",
> +#      "arguments": {
> +#         "job-id": "snapload0",
> +#         "tag": "my-snap",
> +#         "vmstate": "disk0",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "created", "id": "snapload0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "running", "id": "snapload0"}}
> +# <- {"event": "STOP"}
> +# <- {"event": "RESUME"}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "waiting", "id": "snapload0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "pending", "id": "snapload0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "concluded", "id": "snapload0"}}
> +# -> {"execute": "query-jobs"}
> +# <- {"return": [{"current-progress": 1,
> +#                 "status": "concluded",
> +#                 "total-progress": 1,
> +#                 "type": "snapshot-load",
> +#                 "id": "snapload0"}]}
> +#
> +# Since: 6.0
> +##
> +{ 'command': 'snapshot-load',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'vmstate': 'str',
> +            'devices': ['str'] } }
> +
> +##
> +# @snapshot-delete:
> +#
> +# Delete a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to delete.
> +# @devices: list of block device node names to delete a snapshot from
> +#
> +# Applications should not assume that the snapshot delete is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-delete",
> +#      "arguments": {
> +#         "job-id": "snapdelete0",
> +#         "tag": "my-snap",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "created", "id": "snapdelete0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "running", "id": "snapdelete0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "waiting", "id": "snapdelete0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "pending", "id": "snapdelete0"}}
> +# <- {"event": "JOB_STATUS_CHANGE",
> +#     "data": {"status": "concluded", "id": "snapdelete0"}}
> +# -> {"execute": "query-jobs"}
> +# <- {"return": [{"current-progress": 1,
> +#                 "status": "concluded",
> +#                 "total-progress": 1,
> +#                 "type": "snapshot-delete",
> +#                 "id": "snapdelete0"}]}
> +#
> +# Since: 6.0
> +##
> +{ 'command': 'snapshot-delete',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'devices': ['str'] } }
> diff --git a/tests/qemu-iotests/tests/internal-snapshots-qapi b/tests/qemu-iotests/tests/internal-snapshots-qapi
> new file mode 100755
> index 0000000000..6467eaaac0
> --- /dev/null
> +++ b/tests/qemu-iotests/tests/internal-snapshots-qapi
> @@ -0,0 +1,386 @@
> +#!/usr/bin/env bash
> +# group: rw auto quick snapshot
> +#
> +# Test which nodes are involved in internal snapshots
> +#
> +# Copyright (C) 2020-2021 Red Hat, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 2 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +#
> +
> +# creator
> +owner=berrange@redhat.com
> +
> +seq=`basename $0`
> +echo "QA output created by $seq"
> +
> +status=1        # failure is the default!
> +
> +_cleanup()
> +{
> +    _cleanup_qemu
> +    _cleanup_test_img
> +    TEST_IMG="$TEST_IMG.alt1" _cleanup_test_img
> +    TEST_IMG="$TEST_IMG.alt2" _cleanup_test_img
> +    rm -f "$SOCK_DIR/nbd"
> +}
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +# get standard environment, filters and checks
> +. ../common.rc
> +. ../common.filter
> +. ../common.qemu
> +
> +_supported_fmt qcow2
> +_supported_proto file
> +_supported_os Linux
> +_require_drivers copy-on-read
> +
> +# Internal snapshots are (currently) impossible with refcount_bits=1,
> +# and generally impossible with external data files
> +_unsupported_imgopts 'refcount_bits=1[^0-9]' data_file
> +
> +_require_devices virtio-blk
> +
> +
> +size=128M
> +
> +if [ -n "$BACKING_FILE" ]; then
> +    _make_test_img -b "$BACKING_FILE" -F $IMGFMT $size
> +else
> +    _make_test_img $size
> +fi
> +TEST_IMG="$TEST_IMG.alt1" _make_test_img $size
> +IMGOPTS= IMGFMT=raw TEST_IMG="$TEST_IMG.alt2" _make_test_img $size
> +
> +export capture_events="JOB_STATUS_CHANGE STOP RESUME"
> +
> +wait_job()
> +{
> +    local job=$1
> +    shift
> +
> +    # All jobs start with two events...
> +    #
> +    # created
> +    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
> +    # running
> +    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
> +
> +    # Next events vary depending on job type and
> +    # whether it succeeds or not.
> +    for evname in $@
> +    do
> +        _wait_event $QEMU_HANDLE $evname
> +    done
> +
> +    # All jobs finish off with two more events...
> +    # concluded
> +    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"query-jobs\"}" "return"
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"job-dismiss\", \"arguments\": {\"id\": \"$job\"}}" "return"
> +    # null
> +    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
> +}
> +
> +run_save()
> +{
> +    local job=$1
> +    local vmstate=$2
> +    local devices=$3
> +    local fail=$4
> +
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-save\",
> +                                  \"arguments\": {
> +                                     \"job-id\": \"$job\",
> +                                     \"tag\": \"snap0\",
> +                                     \"vmstate\": \"$vmstate\",
> +                                     \"devices\": $devices}}" "return"
> +
> +    if [ $fail = 0 ]; then
> +        # job status: waiting, pending
> +        wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
> +    else
> +        # job status: aborting
> +        wait_job $job "JOB_STATUS_CHANGE"
> +    fi
> +}
> +
> +run_load()
> +{
> +    local job=$1
> +    local vmstate=$2
> +    local devices=$3
> +    local fail=$4
> +
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-load\",
> +                                  \"arguments\": {
> +                                     \"job-id\": \"$job\",
> +                                     \"tag\": \"snap0\",
> +                                     \"vmstate\": \"$vmstate\",
> +                                     \"devices\": $devices}}" "return"
> +    if [ $fail = 0 ]; then
> +        # job status: waiting, pending
> +        wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
> +    else
> +        # job status: aborting
> +        wait_job $job "STOP" "JOB_STATUS_CHANGE"
> +    fi
> +}
> +
> +run_delete()
> +{
> +    local job=$1
> +    local devices=$2
> +    local fail=$3
> +
> +    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-delete\",
> +                                  \"arguments\": {
> +                                     \"job-id\": \"$job\",
> +                                     \"tag\": \"snap0\",
> +                                     \"devices\": $devices}}" "return"
> +    if [ $fail = 0 ]; then
> +        # job status: waiting, pending
> +        wait_job $job "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
> +    else
> +        # job status: aborting
> +        wait_job $job "JOB_STATUS_CHANGE"
> +    fi
> +}
> +
> +start_qemu()
> +{
> +    keep_stderr=y
> +    _launch_qemu -nodefaults -nographic "$@"
> +
> +    _send_qemu_cmd $QEMU_HANDLE '{"execute": "qmp_capabilities"}' 'return'
> +}
> +
> +stop_qemu()
> +{
> +    _send_qemu_cmd $QEMU_HANDLE '{"execute": "quit"}' 'return'
> +
> +    wait=1 _cleanup_qemu
> +}
> +
> +
> +echo
> +echo "=====  Snapshot single qcow2 image ====="
> +echo
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_save "save-simple" "diskfmt0" "[\"diskfmt0\"]" 0
> +run_load "load-simple" "diskfmt0" "[\"diskfmt0\"]" 0
> +run_delete "delete-simple" "[\"diskfmt0\"]" 0
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot no image ====="
> +echo
> +
> +# When snapshotting we need to pass at least one writable disk
> +# otherwise there's no work to do
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_save "save-no-image" "diskfmt0" "[]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot missing image ====="
> +echo
> +
> +# The block node names we pass need to actually exist
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_save "save-missing-image" "diskfmt1729" "[\"diskfmt1729\"]" 1
> +stop_qemu
> +
> +echo
> +echo "=====  Snapshot vmstate not in devices list ====="
> +echo
> +
> +# The node name referred to for vmstate must be one of the nodes
> +# being included in the snapshot, otherwise the vmstate that is
> +# captured is liable to be overwritten making subsequent load
> +# impossible
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}"
> +run_save "save-excluded-vmstate" "diskfmt0" "[\"diskfmt1\"]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot protocol instead of format ====="
> +echo
> +
> +# The snapshot has to be done against the qcow2 format layer
> +# not the underlying file protocol layer
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_save "save-proto-not-fmt" "disk0" "[\"disk0\"]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot dual qcow2 image ====="
> +echo
> +
> +# We can snapshot multiple qcow2 disks at the same time
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}"
> +run_save "save-dual-image" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +run_load "load-dual-image" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +run_delete "delete-dual-image" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot error with raw image ====="
> +echo
> +
> +# If we're snapshotting multiple disks, all must be capable
> +# of supporting snapshots. A raw disk in the list must cause
> +# an error.
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \
> +    -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}"
> +run_save "save-raw-fmt" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\", \"diskfmt2\"]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot with raw image excluded ====="
> +echo
> +
> +# If we're snapshotting multiple disks, all must be capable
> +# of supporting snapshots. A writable raw disk can be excluded
> +# from the snapshot, though it means its data won't be restored
> +# by later snapshot load operation.
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \
> +    -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}"
> +run_save "save-skip-raw" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +run_load "load-skip-raw" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +run_delete "delete-skip-raw" "[\"diskfmt0\", \"diskfmt1\"]" 0
> +stop_qemu
> +
> +echo
> +echo "=====  Snapshot bad error reporting to stderr ====="
> +echo
> +
> +# This demonstrates that we're not capturing vmstate loading failures
> +# into QMP errors, they're ending up in stderr instead. vmstate needs
> +# to report errors via Error object but that is a major piece of work
> +# for the future. This test case's expected output log will need
> +# adjusting when that is done.
> +
> +start_qemu \
> +    -device virtio-rng \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +
> +run_save "save-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 0
> +stop_qemu
> +
> +# leave off virtio-rng to provoke vmstate failure
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +
> +run_load "load-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 1
> +run_delete "delete-err-stderr" "[\"diskfmt0\"]" 0
> +
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot reuse same tag ====="
> +echo
> +
> +# Validates that we get an error when reusing a snapshot tag that
> +# already exists
> +
> +start_qemu \
> +    -device virtio-rng \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +
> +run_save "save-err-stderr-initial" "diskfmt0" "[\"diskfmt0\"]" 0
> +run_save "save-err-stderr-repeat1" "diskfmt0" "[\"diskfmt0\"]" 1
> +run_delete "delete-err-stderr" "[\"diskfmt0\"]" 0
> +run_save "save-err-stderr-repeat2" "diskfmt0" "[\"diskfmt0\"]" 0
> +run_delete "delete-err-stderr-repeat2" "[\"diskfmt0\"]" 0
> +
> +stop_qemu
> +
> +echo
> +echo "=====  Snapshot load does not exist ====="
> +echo
> +
> +# Validates that we get an error when loading a snapshot that does
> +# not exist
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_load "load-missing-snapshot" "diskfmt0" "[\"diskfmt0\"]" 1
> +stop_qemu
> +
> +
> +echo
> +echo "=====  Snapshot delete does not exist ====="
> +echo
> +
> +# Validates that we don't get an error when deleting a snapshot that
> +# does not exist
> +
> +start_qemu \
> +    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
> +    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
> +run_delete "delete-missing-snapshot" "[\"diskfmt0\"]" 0
> +stop_qemu
> +
> +
> +# success, all done
> +echo "*** done"
> +rm -f $seq.full
> +status=0
> diff --git a/tests/qemu-iotests/tests/internal-snapshots-qapi.out b/tests/qemu-iotests/tests/internal-snapshots-qapi.out
> new file mode 100644
> index 0000000000..26ff4a838c
> --- /dev/null
> +++ b/tests/qemu-iotests/tests/internal-snapshots-qapi.out
> @@ -0,0 +1,520 @@
> +QA output created by internal-snapshots-qapi
> +Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
> +Formatting 'TEST_DIR/t.IMGFMT.alt1', fmt=IMGFMT size=134217728
> +Formatting 'TEST_DIR/t.qcow2.alt2', fmt=IMGFMT size=134217728
> +
> +=====  Snapshot single qcow2 image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-simple",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-simple"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-simple"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-simple"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-simple"}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-simple",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-simple"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-simple"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-simple"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-simple"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-simple",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-simple"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-simple"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-simple"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-simple"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-simple"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot no image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-no-image",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": []}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-no-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-no-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-no-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-no-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-no-image", "error": "At least one device is required for snapshot"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-no-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-no-image"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot missing image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-missing-image",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt1729",
> +                                     "devices": ["diskfmt1729"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-missing-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-missing-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-missing-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-missing-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-missing-image", "error": "No block device node 'diskfmt1729'"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-missing-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-missing-image"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot vmstate not in devices list =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-excluded-vmstate",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-excluded-vmstate"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-excluded-vmstate"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-excluded-vmstate"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-excluded-vmstate"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-excluded-vmstate", "error": "vmstate block device 'diskfmt0' does not exist"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-excluded-vmstate"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-excluded-vmstate"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot protocol instead of format =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-proto-not-fmt",
> +                                     "tag": "snap0",
> +                                     "vmstate": "disk0",
> +                                     "devices": ["disk0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-proto-not-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-proto-not-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-proto-not-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-proto-not-fmt"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-proto-not-fmt", "error": "Device 'disk0' is writable but does not support snapshots"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-proto-not-fmt"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-proto-not-fmt"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot dual qcow2 image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-dual-image",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-dual-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-dual-image"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-dual-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-dual-image"}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-dual-image",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-dual-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-dual-image"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-dual-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-dual-image"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-dual-image",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-dual-image"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-dual-image"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-dual-image"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-dual-image"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-dual-image"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot error with raw image =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-raw-fmt",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1", "diskfmt2"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-raw-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-raw-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-raw-fmt"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-raw-fmt"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-raw-fmt", "error": "Device 'diskfmt2' is writable but does not support snapshots"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-raw-fmt"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-raw-fmt"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot with raw image excluded =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-skip-raw",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-skip-raw"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-skip-raw"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-skip-raw"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-skip-raw"}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-skip-raw",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-skip-raw"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-skip-raw"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-skip-raw"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-skip-raw"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-skip-raw",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0", "diskfmt1"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-skip-raw"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-skip-raw"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-skip-raw"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-skip-raw"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-skip-raw"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot bad error reporting to stderr =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-err-stderr",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-err-stderr",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-err-stderr"}}
> +qemu-system-x86_64: Unknown savevm section or instance '0000:00:02.0/virtio-rng' 0. Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-err-stderr"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-err-stderr", "error": "Error -22 while loading VM state"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-err-stderr"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-err-stderr"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-err-stderr",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot reuse same tag =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-err-stderr-initial",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-initial"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-initial"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr-initial"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr-initial"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-initial"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-initial"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-initial"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-initial"}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-err-stderr-repeat1",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-repeat1"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-repeat1"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-err-stderr-repeat1"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-repeat1"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-repeat1", "error": "Snapshot 'snap0' already exists in one or more devices"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-repeat1"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-repeat1"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-err-stderr",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr"}}
> +{"execute": "snapshot-save",
> +                                  "arguments": {
> +                                     "job-id": "save-err-stderr-repeat2",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-repeat2"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-repeat2"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-repeat2"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-repeat2"}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-err-stderr-repeat2",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr-repeat2"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr-repeat2"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr-repeat2"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr-repeat2"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr-repeat2"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot load does not exist =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-load",
> +                                  "arguments": {
> +                                     "job-id": "load-missing-snapshot",
> +                                     "tag": "snap0",
> +                                     "vmstate": "diskfmt0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-missing-snapshot"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-missing-snapshot", "error": "Snapshot 'snap0' does not exist in one or more devices"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "load-missing-snapshot"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-missing-snapshot"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +
> +=====  Snapshot delete does not exist =====
> +
> +{"execute": "qmp_capabilities"}
> +{"return": {}}
> +{"execute": "snapshot-delete",
> +                                  "arguments": {
> +                                     "job-id": "delete-missing-snapshot",
> +                                     "tag": "snap0",
> +                                     "devices": ["diskfmt0"]}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-missing-snapshot"}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-missing-snapshot"}}
> +{"execute": "query-jobs"}
> +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-missing-snapshot"}]}
> +{"execute": "job-dismiss", "arguments": {"id": "delete-missing-snapshot"}}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-missing-snapshot"}}
> +{"execute": "quit"}
> +{"return": {}}
> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
> +*** done
> 
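For anyone wanting to drive these commands outside the iotest harness, the
transcript above already shows the complete wire exchange: connect, the
qmp_capabilities handshake, the snapshot-save/load/delete command, the
JOB_STATUS_CHANGE events, then query-jobs and job-dismiss. Purely as an
illustration, here is a minimal Python sketch of a client performing that
same snapshot-save / query-jobs / job-dismiss sequence over a UNIX socket.
The socket path and job id are made-up placeholders, and the use of raw
sockets is an assumption for brevity; a real client would normally go
through libvirt or the QMP helpers shipped under python/qemu/ instead.

  # Minimal sketch of a QMP client driving snapshot-save and waiting for
  # the job to conclude, mirroring the exchange shown in the transcript
  # above.  Socket path and job id are illustrative assumptions.
  import json
  import socket
  import time

  def send(stream, msg):
      stream.write(json.dumps(msg) + "\n")
      stream.flush()

  def recv_return(stream):
      # Skip asynchronous events (JOB_STATUS_CHANGE, STOP, RESUME, ...)
      # until the command response arrives.
      while True:
          msg = json.loads(stream.readline())
          if "return" in msg or "error" in msg:
              return msg

  def qmp_connect(path):
      sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      sock.connect(path)
      stream = sock.makefile("rw", encoding="utf-8")
      json.loads(stream.readline())               # QMP greeting banner
      send(stream, {"execute": "qmp_capabilities"})
      recv_return(stream)
      return stream

  def wait_job(stream, job_id):
      # Poll query-jobs until the job concludes, then dismiss it and
      # return the job's error string (None on success).
      while True:
          send(stream, {"execute": "query-jobs"})
          jobs = recv_return(stream)["return"]
          job = next(j for j in jobs if j["id"] == job_id)
          if job["status"] == "concluded":
              send(stream, {"execute": "job-dismiss",
                            "arguments": {"id": job_id}})
              recv_return(stream)
              return job.get("error")
          time.sleep(0.1)

  if __name__ == "__main__":
      qmp = qmp_connect("/tmp/qmp.sock")          # assumed socket path
      send(qmp, {"execute": "snapshot-save",
                 "arguments": {"job-id": "save0",
                               "tag": "snap0",
                               "vmstate": "diskfmt0",
                               "devices": ["diskfmt0"]}})
      recv_return(qmp)
      err = wait_job(qmp, "save0")
      print("snapshot-save failed: " + err if err else "snapshot saved")

A client listening for the JOB_STATUS_CHANGE events instead of polling
would avoid the sleep loop, but polling keeps the sketch close to the
query-jobs usage shown in the test output.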



