From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Bruno Alvisio <bruno.alvisio@gmail.com>
Cc: xen-devel@lists.xenproject.org, wei.liu2@citrix.com,
	ian.jackson@eu.citrix.com, dave@recoil.org
Subject: Re: [PATCH RFC] Live migration for VMs with QEMU backed local storage
Date: Fri, 23 Jun 2017 09:03:34 +0100	[thread overview]
Message-ID: <20170623080334.erncnpoy24dp6xka@dhcp-3-128.uk.xensource.com> (raw)
In-Reply-To: <1498203740-7809-1-git-send-email-bruno.alvisio@gmail.com>

On Fri, Jun 23, 2017 at 03:42:20AM -0400, Bruno Alvisio wrote:
> This patch is a first attempt at adding live migration of instances with local
> storage to Xen. It handles only the very restricted case of fully virtualized
> HVM guests. The code uses the "drive-mirror" capability provided by QEMU.
> A new "-l" option is introduced to the "xl migrate" command; if given, the
> local disk is mirrored during the migration process. When the option is set,
> a QEMU NBD server is started on the destination during VM creation. After
> the instance is suspended on the source, the QMP "drive-mirror" command is
> issued to mirror the disk to the destination. Once the mirroring job is
> complete, the migration process continues as before. Finally, the NBD server
> is stopped after the instance is successfully resumed on the destination node.

Since I'm not familiar with all this, can this "drive-mirror" QEMU
capability handle mirroring a disk while it is actively in use?
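For reference, here is a rough sketch of the QMP command sequence I assume
is involved (command names are from the QMP reference; the device id, host
address and port below are made up for illustration):

```python
import json

# Hypothetical device id and NBD endpoint -- adjust to the actual setup.
DEVICE = "drive-virtio-disk0"
DEST_HOST = "192.168.0.2"
NBD_PORT = "10809"

def qmp(execute, **arguments):
    """Serialize a QMP command as a JSON string."""
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# 1. On the destination QEMU: start an NBD server and export the target drive.
start_server = qmp("nbd-server-start",
                   addr={"type": "inet",
                         "data": {"host": "0.0.0.0", "port": NBD_PORT}})
export_drive = qmp("nbd-server-add", device=DEVICE, writable=True)

# 2. On the source QEMU: mirror the local disk into the exported target.
#    mode=existing because the destination image already exists;
#    sync=full to copy the whole disk.
mirror = qmp("drive-mirror",
             device=DEVICE,
             target="nbd:%s:%s:exportname=%s" % (DEST_HOST, NBD_PORT, DEVICE),
             sync="full",
             mode="existing")

# 3. Once the mirror job converges and migration completes, stop the
#    NBD server on the destination.
stop_server = qmp("nbd-server-stop")
```

In the patched flow these would be sent over each QEMU's QMP socket by the
toolstack at the corresponding migration stages.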

> A major problem with this patch is that the disk is mirrored only after the
> memory stream is completed and the VM is suspended on the source, so the
> instance is frozen for a long period of time. This happens because the QEMU
> process (needed for the disk mirroring) is started on the destination node
> only after the memory copying is completed. One possibility I was considering
> to solve this issue (if it is decided that this capability should be added):
> could a "helper" QEMU process be started on the destination node at the
> beginning of the migration sequence, with the sole purpose of handling the
> disk mirroring, and be killed at the end of the migration sequence?
> 
> From the suggestions given by Konrad Wilk and Paul Durrant, the preferred
> approach would be for the mirroring of disks to be handled by QEMU instead
> of directly by, for example, blkback. It would be very helpful for me to
> have a mental map of all the scenarios that can be encountered regarding
> local disks (Xen could start supporting live migration of certain types of
> local disks). These are the ones I can think of:
> - Fully Virtualized HVM: QEMU emulation

PV domains can also use the QEMU PV disk backend, so it should be
feasible to handle this migration for all guest types just using
QEMU.

> - blkback

TBH, I don't think such a feature should be added to blkback: it's
too complex to implement inside the kernel itself.

There are already options in Linux to perform block device replication
at the block layer itself, such as DRBD [0], and IMHO that is what
should be used in conjunction with blkback.

Remember that at the end of the day the Unix philosophy has always been
to implement simple tools that solve specific problems, and then glue
them together in order to solve more complex problems.
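To illustrate, a minimal DRBD resource definition of the kind I have in
mind (hostnames, device node and backing disk paths are placeholders):

```
resource vmdisk {
    protocol  C;
    device    /dev/drbd0;
    disk      /dev/vg0/vm-disk;
    meta-disk internal;
    on host-a {
        address 192.168.0.1:7789;
    }
    on host-b {
        address 192.168.0.2:7789;
    }
}
```

blkback would then simply be pointed at /dev/drbd0, and the replication
happens below it without any kernel-side migration logic.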

In that line of thought, why not simply use iSCSI or similar in order
to share the disk with all the hosts?
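E.g. log in to the target on every host with open-iscsi and point the
guest config at the resulting block device (the IQN, portal address and
by-path name below are examples):

```
# on each host: attach the shared LUN
iscsiadm -m node -T iqn.2017-06.com.example:vmdisk -p 192.168.0.10 --login

# guest config: use the shared block device as a phy backend
disk = [ '/dev/disk/by-path/ip-192.168.0.10:3260-iscsi-iqn.2017-06.com.example:vmdisk-lun-0,raw,xvda,rw' ]
```

With the disk reachable from both hosts, no mirroring step is needed at
all during migration.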

> - blktap / blktap2 

This is deprecated and no longer present in upstream kernels; I don't
think it's worth looking into.

Roger.

[0] http://docs.linbit.com/

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

