From: Yan Zhao <yan.y.zhao@intel.com>
To: Kirti Wankhede <kwankhede@nvidia.com>
Cc: "pasic@linux.ibm.com" <pasic@linux.ibm.com>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	"Liu, Yi L" <yi.l.liu@intel.com>,
	"cjia@nvidia.com" <cjia@nvidia.com>,
	"Ken.Xue@amd.com" <Ken.Xue@amd.com>,
	"eskultet@redhat.com" <eskultet@redhat.com>,
	"Yang, Ziye" <ziye.yang@intel.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"Zhengxiao.zx@Alibaba-inc.com" <Zhengxiao.zx@Alibaba-inc.com>,
	"shuangtai.tst@alibaba-inc.com" <shuangtai.tst@alibaba-inc.com>,
	"dgilbert@redhat.com" <dgilbert@redhat.com>,
	"mlevitsk@redhat.com" <mlevitsk@redhat.com>,
	"yulei.zhang@intel.com" <yulei.zhang@intel.com>,
	"aik@ozlabs.ru" <aik@ozlabs.ru>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"eauger@redhat.com" <eauger@redhat.com>,
	"cohuck@redhat.com" <cohuck@redhat.com>,
	"jonathan.davies@nutanix.com" <jonathan.davies@nutanix.com>,
	"felipe@nutanix.com" <felipe@nutanix.com>,
	"Liu, Changpeng" <changpeng.liu@intel.com>,
	"Wang, Zhi A" <zhi.a.wang@intel.com>
Subject: Re: [Qemu-devel] [PATCH v4 08/13] vfio: Add save state functions to SaveVMHandlers
Date: Mon, 24 Jun 2019 23:30:29 -0400
Message-ID: <20190625033029.GC6971@joy-OptiPlex-7040>
In-Reply-To: <20190621003153.GG9303@joy-OptiPlex-7040>

On Fri, Jun 21, 2019 at 08:31:53AM +0800, Yan Zhao wrote:
> On Thu, Jun 20, 2019 at 10:37:36PM +0800, Kirti Wankhede wrote:
> > Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
> > functions. These functions handle the pre-copy and stop-and-copy phases.
> > 
> > In the _SAVING|_RUNNING device state, i.e. the pre-copy phase:
> > - read pending_bytes
> > - read data_offset - this signals the kernel driver to write data to the
> >   staging buffer, which is mmapped.
> > - read data_size - the amount of data in bytes written by the vendor
> >   driver in the migration region.
> > - if the data section is trapped, pread() data_size bytes starting at
> >   data_offset.
> > - if the data section is mmapped, read data_size bytes from the mmapped
> >   buffer.
> > - write the data packet to the file stream as below:
> >   {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
> >    VFIO_MIG_FLAG_END_OF_STATE}
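
(For readers following the framing above: on the save side this amounts to
roughly the following - a simplified sketch, not the actual patch code, where
buf and data_size stand for the staging data already read as described:)

    /* Sketch: emit one pre-copy data packet on the migration stream. */
    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);  /* packet header  */
    qemu_put_be64(f, data_size);                     /* payload length */
    qemu_put_buffer(f, buf, data_size);              /* payload bytes  */
    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);    /* packet trailer */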
> > 
> > In the _SAVING device state, i.e. the stop-and-copy phase:
> > a. read the device's config space and save it to the migration file
> >    stream. This doesn't need to come from the vendor driver; any other
> >    special config state from the driver can be saved as data in a
> >    following iteration.
> > b. read pending_bytes - this signals the kernel driver to write data to
> >    the staging buffer, which is mmapped.
> > c. read data_size - the amount of data in bytes written by the vendor
> >    driver in the migration region.
> > d. if the data section is trapped, pread() data_size bytes starting at
> >    data_offset.
> > e. if the data section is mmapped, read data_size bytes from the mmapped
> >    buffer.
> > f. write the data packet as below:
> >    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
> > g. repeat steps b to f while (pending_bytes > 0)
> > h. write {VFIO_MIG_FLAG_END_OF_STATE}
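
(Conversely, the load side would consume this framing roughly as below - a
minimal sketch under the same assumptions; vfio_load_device_data() is a
hypothetical helper standing in for writing the payload into the device:)

    /* Sketch: parse the stop-and-copy stream until END_OF_STATE. */
    for (;;) {
        uint64_t flag = qemu_get_be64(f);

        if (flag == VFIO_MIG_FLAG_END_OF_STATE) {
            break;                              /* step h: all data sent */
        }
        /* flag == VFIO_MIG_FLAG_DEV_DATA_STATE: one packet follows */
        uint64_t size = qemu_get_be64(f);
        vfio_load_device_data(f, size);         /* hypothetical helper */
    }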
> > 
> > .save_live_iterate runs outside the iothread lock in the migration case,
> > which could race with an asynchronous call to get the dirty page list,
> > causing data corruption in the mapped migration region. A mutex is added
> > here to serialize migration buffer read operations.
> > 
> > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > ---
> >  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 212 insertions(+)
> > 
> > diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> > index fe0887c27664..0a2f30872316 100644
> > --- a/hw/vfio/migration.c
> > +++ b/hw/vfio/migration.c
> > @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
> >      return 0;
> >  }
> >  
> > +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
> > +{
> > +    VFIOMigration *migration = vbasedev->migration;
> > +    VFIORegion *region = &migration->region.buffer;
> > +    uint64_t data_offset = 0, data_size = 0;
> > +    int ret;
> > +
> > +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > +                                             data_offset));
> > +    if (ret != sizeof(data_offset)) {
> > +        error_report("Failed to get migration buffer data offset %d",
> > +                     ret);
> > +        return -EINVAL;
> > +    }
> > +
> > +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
> > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > +                                             data_size));
> > +    if (ret != sizeof(data_size)) {
> > +        error_report("Failed to get migration buffer data size %d",
> > +                     ret);
> > +        return -EINVAL;
> > +    }
> > +
> How big can data_size be?
> If it is too large, reading it may take too much time and block others.
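
If that is a concern, one way to bound the time spent per call would be to
cap how much is read in each iteration - a sketch only; VFIO_MIG_MAX_CHUNK
is a made-up limit, not something defined by the patch:

    /* Sketch: cap each read so a single iteration cannot block too long. */
    #define VFIO_MIG_MAX_CHUNK (1 * MiB)    /* hypothetical limit */

    uint64_t chunk = MIN(data_size, VFIO_MIG_MAX_CHUNK);
    ret = pread(vbasedev->fd, buf, chunk, region->fd_offset + data_offset);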
> 
> > +    if (data_size > 0) {
> > +        void *buf = NULL;
> > +        bool buffer_mmaped = false;
> > +
> > +        if (region->mmaps) {
> > +            int i;
> > +
> > +            for (i = 0; i < region->nr_mmaps; i++) {
> > +                if ((data_offset >= region->mmaps[i].offset) &&
> > +                    (data_offset < region->mmaps[i].offset +
> > +                                   region->mmaps[i].size)) {
> > +                    buf = region->mmaps[i].mmap + (data_offset -
> > +                                                   region->mmaps[i].offset);
> > +                    buffer_mmaped = true;
> > +                    break;
> > +                }
> > +            }
> > +        }
> > +
> > +        if (!buffer_mmaped) {
> > +            buf = g_malloc0(data_size);
> > +            ret = pread(vbasedev->fd, buf, data_size,
> > +                        region->fd_offset + data_offset);
> > +            if (ret != data_size) {
> > +                error_report("Failed to get migration data %d", ret);
> > +                g_free(buf);
> > +                return -EINVAL;
> > +            }
> > +        }
> > +
> > +        qemu_put_be64(f, data_size);
> > +        qemu_put_buffer(f, buf, data_size);
> > +
> > +        if (!buffer_mmaped) {
> > +            g_free(buf);
> > +        }
> > +        migration->pending_bytes -= data_size;
> > +    } else {
> > +        qemu_put_be64(f, data_size);
> > +    }
> > +
> > +    ret = qemu_file_get_error(f);
> > +
> > +    return data_size;
> > +}
> > +
> > +static int vfio_update_pending(VFIODevice *vbasedev)
> > +{
> > +    VFIOMigration *migration = vbasedev->migration;
> > +    VFIORegion *region = &migration->region.buffer;
> > +    uint64_t pending_bytes = 0;
> > +    int ret;
> > +
> > +    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
> > +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> > +                                             pending_bytes));
> > +    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
> > +        error_report("Failed to get pending bytes %d", ret);
> > +        migration->pending_bytes = 0;
> > +        return (ret < 0) ? ret : -EINVAL;
> > +    }
> > +
> > +    migration->pending_bytes = pending_bytes;
> > +    return 0;
> > +}
> > +
> > +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> > +{
> > +    VFIODevice *vbasedev = opaque;
> > +
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
> > +
> > +    if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) {
> > +        vfio_pci_save_config(vbasedev, f);
> > +    }
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > +
> > +    return qemu_file_get_error(f);
> > +}
> > +
> >  /* ---------------------------------------------------------------------- */
> >  
> >  static int vfio_save_setup(QEMUFile *f, void *opaque)
> > @@ -163,9 +268,116 @@ static void vfio_save_cleanup(void *opaque)
> >      }
> >  }
> >  
> > +static void vfio_save_pending(QEMUFile *f, void *opaque,
> > +                              uint64_t threshold_size,
> > +                              uint64_t *res_precopy_only,
> > +                              uint64_t *res_compatible,
> > +                              uint64_t *res_postcopy_only)
> > +{
> > +    VFIODevice *vbasedev = opaque;
> > +    VFIOMigration *migration = vbasedev->migration;
> > +    int ret;
> > +
> > +    ret = vfio_update_pending(vbasedev);
> > +    if (ret) {
> > +        return;
> > +    }
> > +
> > +    if (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING) {
> > +        *res_precopy_only += migration->pending_bytes;
> > +    } else {
> > +        *res_postcopy_only += migration->pending_bytes;
> > +    }
By definition,
- res_precopy_only is for data which must be migrated in the precopy phase
  or in the stopped state, in other words - before the target vm starts
- res_postcopy_only is for data which must be migrated in the postcopy phase
  or in the stopped state, in other words - after the source vm stops
So we can only determine the data type by the nature of the data itself,
i.e. if it is device state data which must be copied after the source vm
stops and before the target vm starts, it belongs to res_precopy_only.

It is not right to determine the data type by the current device state.
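
I.e., something along these lines - a sketch of the idea only, assuming the
vendor driver could report the two kinds of pending data separately (neither
precopy_bytes nor postcopy_bytes exists in the current patch):

    /* Sketch: classify by what the data is, not by the device state. */
    *res_precopy_only += migration->precopy_bytes;    /* hypothetical */
    *res_postcopy_only += migration->postcopy_bytes;  /* hypothetical */
    *res_compatible += 0;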

Thanks
Yan

> > +    *res_compatible += 0;
> > +}
> > +
> > +static int vfio_save_iterate(QEMUFile *f, void *opaque)
> > +{
> > +    VFIODevice *vbasedev = opaque;
> > +    VFIOMigration *migration = vbasedev->migration;
> > +    int ret;
> > +
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> > +
> > +    qemu_mutex_lock(&migration->lock);
> > +    ret = vfio_save_buffer(f, vbasedev);
> > +    qemu_mutex_unlock(&migration->lock);
> > +
> > +    if (ret < 0) {
> > +        error_report("vfio_save_buffer failed %s",
> > +                     strerror(errno));
> > +        return ret;
> > +    }
> > +
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > +
> > +    ret = qemu_file_get_error(f);
> > +    if (ret) {
> > +        return ret;
> > +    }
> > +
> > +    return ret;
> > +}
> > +
> > +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> > +{
> > +    VFIODevice *vbasedev = opaque;
> > +    VFIOMigration *migration = vbasedev->migration;
> > +    int ret;
> > +
> > +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
> > +    if (ret) {
> > +        error_report("Failed to set state STOP and SAVING");
> > +        return ret;
> > +    }
> > +
> > +    ret = vfio_save_device_config_state(f, opaque);
> > +    if (ret) {
> > +        return ret;
> > +    }
> > +
> > +    ret = vfio_update_pending(vbasedev);
> > +    if (ret) {
> > +        return ret;
> > +    }
> > +
> > +    while (migration->pending_bytes > 0) {
> > +        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> > +        ret = vfio_save_buffer(f, vbasedev);
> > +        if (ret < 0) {
> > +            error_report("Failed to save buffer");
> > +            return ret;
> > +        } else if (ret == 0) {
> > +            break;
> > +        }
> > +
> > +        ret = vfio_update_pending(vbasedev);
> > +        if (ret) {
> > +            return ret;
> > +        }
> > +    }
> > +
> > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > +
> > +    ret = qemu_file_get_error(f);
> > +    if (ret) {
> > +        return ret;
> > +    }
> > +
> > +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOPPED);
> > +    if (ret) {
> > +        error_report("Failed to set state STOPPED");
> > +        return ret;
> > +    }
> > +    return ret;
> > +}
> > +
> >  static SaveVMHandlers savevm_vfio_handlers = {
> >      .save_setup = vfio_save_setup,
> >      .save_cleanup = vfio_save_cleanup,
> > +    .save_live_pending = vfio_save_pending,
> > +    .save_live_iterate = vfio_save_iterate,
> > +    .save_live_complete_precopy = vfio_save_complete_precopy,
> >  };
> >  
> >  /* ---------------------------------------------------------------------- */
> > -- 
> > 2.7.0
> > 
> 

