From: Yan Zhao <yan.y.zhao@intel.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: "Zhengxiao.zx@Alibaba-inc.com" <Zhengxiao.zx@alibaba-inc.com>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	"Liu, Yi L" <yi.l.liu@intel.com>,
	"cjia@nvidia.com" <cjia@nvidia.com>,
	"eskultet@redhat.com" <eskultet@redhat.com>,
	"Yang, Ziye" <ziye.yang@intel.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"cohuck@redhat.com" <cohuck@redhat.com>,
	"shuangtai.tst@alibaba-inc.com" <shuangtai.tst@alibaba-inc.com>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"Wang, Zhi A" <zhi.a.wang@intel.com>,
	"mlevitsk@redhat.com" <mlevitsk@redhat.com>,
	"pasic@linux.ibm.com" <pasic@linux.ibm.com>,
	"aik@ozlabs.ru" <aik@ozlabs.ru>,
	Kirti Wankhede <kwankhede@nvidia.com>,
	"eauger@redhat.com" <eauger@redhat.com>,
	"felipe@nutanix.com" <felipe@nutanix.com>,
	"jonathan.davies@nutanix.com" <jonathan.davies@nutanix.com>,
	"Liu, Changpeng" <changpeng.liu@intel.com>,
	"Ken.Xue@amd.com" <Ken.Xue@amd.com>
Subject: Re: [Qemu-devel] [PATCH v4 00/13] Add migration support for VFIO device
Date: Fri, 28 Jun 2019 17:28:53 -0400	[thread overview]
Message-ID: <20190628212853.GH6971@joy-OptiPlex-7040> (raw)
In-Reply-To: <20190628094447.GD2922@work-vm>

On Fri, Jun 28, 2019 at 05:44:47PM +0800, Dr. David Alan Gilbert wrote:
> * Yan Zhao (yan.y.zhao@intel.com) wrote:
> > On Tue, Jun 25, 2019 at 03:00:24AM +0800, Dr. David Alan Gilbert wrote:
> > > * Kirti Wankhede (kwankhede@nvidia.com) wrote:
> > > > 
> > > > 
> > > > On 6/21/2019 2:16 PM, Yan Zhao wrote:
> > > > > On Fri, Jun 21, 2019 at 04:02:50PM +0800, Kirti Wankhede wrote:
> > > > >>
> > > > >>
> > > > >> On 6/21/2019 6:54 AM, Yan Zhao wrote:
> > > > >>> On Fri, Jun 21, 2019 at 08:25:18AM +0800, Yan Zhao wrote:
> > > > >>>> On Thu, Jun 20, 2019 at 10:37:28PM +0800, Kirti Wankhede wrote:
> > > > >>>>> Add migration support for VFIO device
> > > > >>>>>
> > > > >>>>> This patch set includes the patches below:
> > > > >>>>> - Define KABI for VFIO device for migration support.
> > > > >>>>> - Added save and restore functions for PCI configuration space
> > > > >>>>> - Generic migration functionality for VFIO device.
> > > > >>>>>   * This patch set adds functionality only for PCI devices, but it can
> > > > >>>>>     be extended to other VFIO devices.
> > > > >>>>>   * Added all the basic functions required for pre-copy, stop-and-copy and
> > > > >>>>>     resume phases of migration.
> > > > >>>>>   * Added a state change notifier; from that notifier function, the
> > > > >>>>>     VFIO device's state change is conveyed to the VFIO device driver.
> > > > >>>>>   * During the save setup phase and the resume/load setup phase, the
> > > > >>>>>     migration region is queried and used to read/write VFIO device data.
> > > > >>>>>   * .save_live_pending and .save_live_iterate are implemented to use
> > > > >>>>>     QEMU's iteration functionality during the pre-copy phase.
> > > > >>>>>   * In .save_live_complete_precopy, i.e. the stop-and-copy phase,
> > > > >>>>>     iteration to read data from the VFIO device driver continues until
> > > > >>>>>     the pending bytes returned by the driver reach zero.
> > > > >>>>>   * Added a function to get the dirty pages bitmap for the pages used
> > > > >>>>>     by the driver.
> > > > >>>>> - Add vfio_listerner_log_sync to mark dirty pages.
> > > > >>>>> - Make the VFIO PCI device migration capable. If the migration region
> > > > >>>>>   is not provided by the driver, migration is blocked.
> > > > >>>>>
> > > > >>>>> Below is the flow of state change for live migration where states in brackets
> > > > >>>>> represent VM state, migration state and VFIO device state as:
> > > > >>>>>     (VM state, MIGRATION_STATUS, VFIO_DEVICE_STATE)
> > > > >>>>>
> > > > >>>>> Live migration save path:
> > > > >>>>>         QEMU normal running state
> > > > >>>>>         (RUNNING, _NONE, _RUNNING)
> > > > >>>>>                         |
> > > > >>>>>     migrate_init spawns migration_thread.
> > > > >>>>>     (RUNNING, _SETUP, _RUNNING|_SAVING)
> > > > >>>>>     Migration thread then calls each device's .save_setup()
> > > > >>>>>                         |
> > > > >>>>>     (RUNNING, _ACTIVE, _RUNNING|_SAVING)
> > > > >>>>>     If the device is active, get pending bytes via .save_live_pending();
> > > > >>>>>     if pending bytes >= threshold_size, call .save_live_iterate().
> > > > >>>>>     Data of the VFIO device for the pre-copy phase is copied.
> > > > >>>>>     Iterate until pending bytes converge and are less than the threshold.
> > > > >>>>>                         |
> > > > >>>>>     On migration completion, vCPUs stop and .save_live_complete_precopy
> > > > >>>>>     is called for each active device. The VFIO device is then
> > > > >>>>>     transitioned to the _SAVING state.
> > > > >>>>>     (FINISH_MIGRATE, _DEVICE, _SAVING)
> > > > >>>>>     For the VFIO device, iterate in .save_live_complete_precopy until
> > > > >>>>>     pending data is 0.
> > > > >>>>>     (FINISH_MIGRATE, _DEVICE, _STOPPED)
> > > > >>>>
> > > > >>>> I suggest we also register a VMStateDescription, whose .pre_save
> > > > >>>> handler would get called after .save_live_complete_precopy in the
> > > > >>>> pre-copy-only case, and would be called before .save_live_iterate in
> > > > >>>> the post-copy-enabled case.
> > > > >>>> In the .pre_save handler, we can save all device state that must be
> > > > >>>> copied after device stop in the source VM and before device start in
> > > > >>>> the target VM.
> > > > >>>>
> > > > >>> hi
> > > > >>> to better describe this idea:
> > > > >>>
> > > > >>> in pre-copy only case, the flow is
> > > > >>>
> > > > >>> start migration --> .save_live_iterate (several rounds) --> stop source vm
> > > > >>> --> .save_live_complete_precopy --> .pre_save --> start target vm
> > > > >>> --> migration complete
> > > > >>>
> > > > >>>
> > > > >>> in post-copy enabled case, the flow is
> > > > >>>
> > > > >>> start migration --> .save_live_iterate (several rounds) --> start post-copy -->
> > > > >>> stop source vm --> .pre_save --> start target vm --> .save_live_iterate (several rounds)
> > > > >>> --> migration complete
> > > > >>>
> > > > >>> Therefore, we should put the saving of device state in the .pre_save
> > > > >>> interface rather than in .save_live_complete_precopy.
> > > > >>> The device state includes PCI config data, page tables, register state, etc.
> > > > >>>
> > > > >>> The .save_live_iterate and .save_live_complete_precopy should only deal
> > > > >>> with saving dirty memory.
> > > > >>>
> > > > >>
> > > > >> The vendor driver can decide when to save device state depending on the
> > > > >> VFIO device state set by the user. The vendor driver doesn't have to
> > > > >> depend on which callback function QEMU or the user application calls.
> > > > >> In the pre-copy case, save_live_complete_precopy sets the VFIO device
> > > > >> state to VFIO_DEVICE_STATE_SAVING, which means the vCPUs are stopped
> > > > >> and the vendor driver should save all device state.
> > > > >>
> > > > > When post-copy stops the vCPUs and the VFIO device, the vendor driver
> > > > > only needs to provide device state. But how does the vendor driver know
> > > > > that, if no extra interface or extra device state is provided?
> > > > > 
> > > > 
> > > > .save_live_complete_postcopy interface for post-copy will get called,
> > > > right?
> > > 
> > > That happens at the very end; I think the question here is for something
> > > that gets called at the point we stop iteratively sending RAM, send the
> > > device states and then start sending RAM on demand to the destination
> > > as it's running. Typically we send a small set of device state
> > > (registers etc) at this point.
> > > 
> > > I guess there's two different postcopy cases that we need to think
> > > about:
> > >   a) Where the VFIO device doesn't support postcopy - it just gets
> > >   migrated like any other device, so all its RAM must get sent
> > >   before we flip into postcopy mode.
> > > 
> > >   b) Where the VFIO device does support postcopy - where the pages
> > >   get sent on demand.
> > > 
> > > (b) may be tricky depending on whether your hardware can fault
> > > on pages of your RAM that are needed but not yet transferred; but
> > > if you can, that would make life a lot more practical on really
> > > big VFIO devices.
> > > 
> > > Dave
> > >
> > hi Dave,
> > so do you think it is good to abstract the device state data and save it
> > in the .pre_save callback?
> 
> I'm not sure we have a vmsd/pre_save in this setup?  If we did then it's
> a bit confusing because I don't think we have any other iterative device
> that also has a vmsd.
Yes, I tried it. It's OK to register both SaveVMHandlers and a
VMStateDescription at the same time.

> 
> I'd have to test it, but I think you might get the devices
> ->save_live_complete_precopy called at the right point just before
> postcopy switchover.  It's worth looking.
> 
If an iterative device supports postcopy, then its save_live_complete_precopy
would not get called before the postcopy switchover.
However, postcopy may need to save device-state-only data (not memory) at
that time. That's the reason I think we should also register a
VMStateDescription, as its .pre_save handler would get called at
that time.

Thanks
Yan

> Dave
> 
> > Thanks
> > Yan
> > 
> > > > Thanks,
> > > > Kirti
> > > > 
> > > > >>>
> > > > >>> I know the current implementation does not support post-copy, but at
> > > > >>> least it should not require huge changes when we decide to enable it
> > > > >>> in the future.
> > > > >>>
> > > > >>
> > > > >> .has_postcopy and .save_live_complete_postcopy need to be implemented to
> > > > >> support post-copy. I think .save_live_complete_postcopy should be
> > > > >> similar to vfio_save_complete_precopy.
> > > > >>
> > > > >> Thanks,
> > > > >> Kirti
> > > > >>
> > > > >>> Thanks
> > > > >>> Yan
> > > > >>>
> > > --
> > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


