From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Eric Blake <eblake@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>, Peter Krempa <pkrempa@redhat.com>,
	qemu-block@nongnu.org, Juan Quintela <quintela@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	qemu-devel@nongnu.org, Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH 6/6] migration: support picking vmstate disk in QMP snapshot commands
Date: Fri, 3 Jul 2020 09:37:45 +0100
Message-ID: <20200703083745.GB2213227@redhat.com>
In-Reply-To: <a4aacccc-3761-591c-637d-1e0110a6c337@redhat.com>

On Thu, Jul 02, 2020 at 01:19:43PM -0500, Eric Blake wrote:
> On 7/2/20 12:57 PM, Daniel P. Berrangé wrote:
> > This wires up support for a new "vmstate" parameter to the QMP commands
> > for snapshots (savevm, loadvm). This parameter accepts block driver
> > state node name.
> > 
> > One use case for this would be a VM using OVMF firmware where the
> > variables store is the first snapshottable disk image. The vmstate
> > snapshot usually wants to be stored in the primary root disk of the
> > VM, not the firmeware varstore. Thus there needs to be a mechanism
> 
> firmware
> 
> > to override the default choice of disk.
> > 
> > Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> > ---
> 
> > +++ b/qapi/migration.json
> > @@ -1630,6 +1630,7 @@
> >   # @tag: name of the snapshot to create. If it already
> >   # exists it will be replaced.
> >   # @exclude: list of block device node names to exclude
> > +# @vmstate: block device node name to save vmstate to
> >   #
> >   # Note that execution of the VM will be paused during the time
> >   # it takes to save the snapshot
> > @@ -1641,6 +1642,7 @@
> >   # -> { "execute": "savevm",
> >   #      "data": {
> >   #         "tag": "my-snap",
> > +#         "vmstate": "disk0",
> >   #         "exclude": ["pflash0-vars"]
> >   #      }
> >   #    }
> > @@ -1650,6 +1652,7 @@
> >   ##
> >   { 'command': 'savevm',
> >     'data': { 'tag': 'str',
> > +            '*vmstate': 'str',
> >               '*exclude': ['str'] } }
> 
> During save, the list of block devices is obvious: everything that is not
> excluded.  But,
> 
> >   ##
> > @@ -1659,6 +1662,7 @@
> >   #
> >   # @tag: name of the snapshot to load.
> >   # @exclude: list of block device node names to exclude
> > +# @vmstate: block device node name to load vmstate from
> >   #
> >   # Returns: nothing
> >   #
> > @@ -1667,6 +1671,7 @@
> >   # -> { "execute": "loadvm",
> >   #      "data": {
> >   #         "tag": "my-snap",
> > +#         "vmstate": "disk0",
> >   #         "exclude": ["pflash0-vars"]
> >   #      }
> >   #    }
> > @@ -1676,6 +1681,7 @@
> >   ##
> >   { 'command': 'loadvm',
> >     'data': { 'tag': 'str',
> > +            '*vmstate': 'str',
> >               '*exclude': ['str'] } }
> 
> ...now that we support exclusion during saving, or even without exclusion
> but when the user has performed hotplug/unplug operations in the meantime
> from when the snapshot was created, isn't load better off listing all
> devices which SHOULD be restored, rather than excluding devices that should
> NOT be restored?  (After all, libvirt knows which disks existed at the time
> of the snapshot, which may be different than the set of disks that exist now
> even though we are throwing out the state now to go back to the state at the
> time of the snapshot)

If the user has hotplugged / unplugged any devices, then I expect the
snapshot load to fail, because the vmstate will reference devices that
no longer exist, or will be missing devices. This is the same way
migration fails unless the target QEMU has exactly the same device
setup that was originally serialized into the vmstate.

In theory I guess you could keep the device ABI the same but switch out
the disk backends. I question the sanity of doing that while you have
saved snapshots, though, unless you're preserving those snapshots in the
new images, in which case it will just work.

> Then there's the question of symmetry: if load needs an explicit list of
> blocks to load from (rather than the set of blocks that are currently
> associated with the machine), should save also take its list by positive
> inclusion rather than negative exclusion?

I chose exclusion because the normal case is that you want to snapshot
everything. You sometimes have a small number of exceptions, most notably
the OVMF varstore. IOW, if you're currently relying on the default
behaviour of snapshotting everything, it is much easier to just exclude
one image than to switch to explicitly including everything. Essentially
I can just pass a static string naming the varstore to be excluded,
instead of having to dynamically build up a list of everything.
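
e.g. with the exclusion approach the whole invocation for the common
OVMF case is just something like this (illustrative node names,
mirroring the doc example in this patch):

  -> { "execute": "savevm",
       "data": {
          "tag": "my-snap",
          "exclude": ["pflash0-vars"]
       }
     }
  <- { "return": { } }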

I wouldn't mind supporting inclusion *and* exclusion, so users have the
choice of which approach to take.
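
Something along these lines (purely hypothetical, not part of this
series) would let callers pick whichever is more convenient:

  { 'command': 'savevm',
    'data': { 'tag': 'str',
              '*vmstate': 'str',
              '*include': ['str'],
              '*exclude': ['str'] } }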

> And why does delvm not need to specify which block is the vmstate? delvm is
> in the same boat as loadvm - the set of blocks involved at the time of the
> snapshot creation may be different than the set of blocks currently
> associated with the guest at the time you run load/delvm.

There's no code in delvm that needs to take any special action wrt
vmstate. It simply deletes snapshots from all the disks present.
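
IOW the delvm invocation stays trivially simple, something like this
sketch, assuming only the tag is required:

  -> { "execute": "delvm",
       "data": { "tag": "my-snap" } }
  <- { "return": { } }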

> 
> -- 
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.           +1-919-301-3226
> Virtualization:  qemu.org | libvirt.org

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



