From: Kirti Wankhede <kwankhede@nvidia.com>
To: <alex.williamson@redhat.com>, <cjia@nvidia.com>
Cc: Zhengxiao.zx@Alibaba-inc.com, kevin.tian@intel.com,
	yi.l.liu@intel.com, yan.y.zhao@intel.com, eskultet@redhat.com,
	ziye.yang@intel.com, qemu-devel@nongnu.org, cohuck@redhat.com,
	shuangtai.tst@alibaba-inc.com, dgilbert@redhat.com,
	zhi.a.wang@intel.com, mlevitsk@redhat.com, pasic@linux.ibm.com,
	aik@ozlabs.ru, Kirti Wankhede <kwankhede@nvidia.com>,
	eauger@redhat.com, felipe@nutanix.com,
	jonathan.davies@nutanix.com, changpeng.liu@intel.com,
	Ken.Xue@amd.com
Subject: [Qemu-devel] [PATCH v7 09/13] vfio: Add save state functions to SaveVMHandlers
Date: Tue, 9 Jul 2019 15:19:16 +0530
Message-ID: <1562665760-26158-10-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1562665760-26158-1-git-send-email-kwankhede@nvidia.com>

Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
functions. These functions handle the pre-copy and stop-and-copy phases.

In the _SAVING|_RUNNING device state, i.e. the pre-copy phase:
- read pending_bytes
- read data_offset - this indicates to the kernel driver to write data to the
  staging buffer, which is mmapped.
- read data_size - the amount of data in bytes written by the vendor driver in
  the migration region.
- if the data section is trapped, pread() data_size bytes from data_offset.
- if the data section is mmapped, read data_size bytes from the mmapped buffer.
- write the data packet to the file stream as below (see the condensed sketch
  after this list):
{VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
VFIO_MIG_FLAG_END_OF_STATE }
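
Condensed from the vfio_save_iterate() and vfio_save_buffer() handlers below
(error handling elided), the on-stream framing for one packet is:

    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
    qemu_put_be64(f, data_size);          /* 0 when nothing is pending */
    qemu_put_buffer(f, buf, data_size);   /* the device data read above */
    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);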

In the _SAVING device state, i.e. the stop-and-copy phase:
a. read the device's config space and save it to the migration file stream.
   This doesn't need to come from the vendor driver; any other special config
   state from the driver can be saved as data in the following iterations.
b. read pending_bytes
c. read data_offset - this indicates to the kernel driver to write data to the
   staging buffer, which is mmapped.
d. read data_size - the amount of data in bytes written by the vendor driver in
   the migration region.
e. if the data section is trapped, pread() data_size bytes from data_offset.
f. if the data section is mmapped, read data_size bytes from the mmapped buffer.
g. write the data packet as below:
   {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
h. iterate through steps b to g while (pending_bytes > 0), as sketched after
   this list
i. write {VFIO_MIG_FLAG_END_OF_STATE}
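
Step h corresponds to the drain loop in vfio_save_complete_precopy() below; a
condensed sketch, with error handling elided:

    ret = vfio_update_pending(vbasedev);        /* step b: refresh pending_bytes */
    while (migration->pending_bytes > 0) {
        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
        ret = vfio_save_buffer(f, vbasedev);    /* steps c to g */
        if (ret <= 0) {
            break;                              /* error, or no more data */
        }
        ret = vfio_update_pending(vbasedev);
    }
    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);   /* step i */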

When the data region is mmapped, it is the user's responsibility to read
data_size bytes of data from data_offset before moving to the next step.

.save_live_iterate runs outside the iothread lock in the migration case, which
could race with an asynchronous call to get the dirty page list and corrupt
data in the mmapped migration region. A mutex is added here to serialize the
migration buffer read operation.
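
A condensed sketch of that serialization, as done in vfio_save_iterate() below
(migration->lock is assumed to be initialized earlier in this series):

    qemu_mutex_lock(&migration->lock);
    data_size = vfio_save_buffer(f, vbasedev);  /* touches the staging buffer */
    qemu_mutex_unlock(&migration->lock);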

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 hw/vfio/migration.c  | 246 +++++++++++++++++++++++++++++++++++++++++++++++++++
 hw/vfio/trace-events |   6 ++
 2 files changed, 252 insertions(+)

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 0597a45fda2d..4e9b4cce230b 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -117,6 +117,138 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
     return 0;
 }
 
+static void *find_data_region(VFIORegion *region,
+                              uint64_t data_offset,
+                              uint64_t data_size)
+{
+    void *ptr = NULL;
+    int i;
+
+    for (i = 0; i < region->nr_mmaps; i++) {
+        if ((data_offset >= region->mmaps[i].offset) &&
+            (data_offset < region->mmaps[i].offset + region->mmaps[i].size) &&
+            (data_size <= region->mmaps[i].size)) {
+            ptr = region->mmaps[i].mmap + (data_offset -
+                                           region->mmaps[i].offset);
+            break;
+        }
+    }
+    return ptr;
+}
+
+static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
+{
+    VFIOMigration *migration = vbasedev->migration;
+    VFIORegion *region = &migration->region.buffer;
+    uint64_t data_offset = 0, data_size = 0;
+    int ret;
+
+    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
+                region->fd_offset + offsetof(struct vfio_device_migration_info,
+                                             data_offset));
+    if (ret != sizeof(data_offset)) {
+        error_report("%s: Failed to get migration buffer data offset %d",
+                     vbasedev->name, ret);
+        return -EINVAL;
+    }
+
+    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
+                region->fd_offset + offsetof(struct vfio_device_migration_info,
+                                             data_size));
+    if (ret != sizeof(data_size)) {
+        error_report("%s: Failed to get migration buffer data size %d",
+                     vbasedev->name, ret);
+        return -EINVAL;
+    }
+
+    if (data_size > 0) {
+        void *buf = NULL;
+        bool buffer_mmaped;
+
+        if (region->mmaps) {
+            buf = find_data_region(region, data_offset, data_size);
+        }
+
+        buffer_mmaped = (buf != NULL) ? true : false;
+
+        if (!buffer_mmaped) {
+            buf = g_try_malloc0(data_size);
+            if (!buf) {
+                error_report("%s: Error allocating buffer ", __func__);
+                return -ENOMEM;
+            }
+
+            ret = pread(vbasedev->fd, buf, data_size,
+                        region->fd_offset + data_offset);
+            if (ret != data_size) {
+                error_report("%s: Failed to get migration data %d",
+                             vbasedev->name, ret);
+                g_free(buf);
+                return -EINVAL;
+            }
+        }
+
+        qemu_put_be64(f, data_size);
+        qemu_put_buffer(f, buf, data_size);
+
+        if (!buffer_mmaped) {
+            g_free(buf);
+        }
+        migration->pending_bytes -= data_size;
+    } else {
+        qemu_put_be64(f, data_size);
+    }
+
+    trace_vfio_save_buffer(vbasedev->name, data_offset, data_size,
+                           migration->pending_bytes);
+
+    ret = qemu_file_get_error(f);
+    if (ret) {
+        return ret;
+    }
+
+    return data_size;
+}
+
+static int vfio_update_pending(VFIODevice *vbasedev)
+{
+    VFIOMigration *migration = vbasedev->migration;
+    VFIORegion *region = &migration->region.buffer;
+    uint64_t pending_bytes = 0;
+    int ret;
+
+    ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
+                region->fd_offset + offsetof(struct vfio_device_migration_info,
+                                             pending_bytes));
+    if ((ret < 0) || (ret != sizeof(pending_bytes))) {
+        error_report("%s: Failed to get pending bytes %d",
+                     vbasedev->name, ret);
+        migration->pending_bytes = 0;
+        return (ret < 0) ? ret : -EINVAL;
+    }
+
+    migration->pending_bytes = pending_bytes;
+    trace_vfio_update_pending(vbasedev->name, pending_bytes);
+    return 0;
+}
+
+static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
+{
+    VFIODevice *vbasedev = opaque;
+
+    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
+
+    if (vbasedev->ops && vbasedev->ops->vfio_save_config) {
+        vbasedev->ops->vfio_save_config(vbasedev, f);
+    }
+
+    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
+
+    trace_vfio_save_device_config_state(vbasedev->name);
+
+    return qemu_file_get_error(f);
+}
+
 /* ---------------------------------------------------------------------- */
 
 static int vfio_save_setup(QEMUFile *f, void *opaque)
@@ -178,9 +310,123 @@ static void vfio_save_cleanup(void *opaque)
     trace_vfio_save_cleanup(vbasedev->name);
 }
 
+static void vfio_save_pending(QEMUFile *f, void *opaque,
+                              uint64_t threshold_size,
+                              uint64_t *res_precopy_only,
+                              uint64_t *res_compatible,
+                              uint64_t *res_postcopy_only)
+{
+    VFIODevice *vbasedev = opaque;
+    VFIOMigration *migration = vbasedev->migration;
+    int ret;
+
+    ret = vfio_update_pending(vbasedev);
+    if (ret) {
+        return;
+    }
+
+    *res_precopy_only += migration->pending_bytes;
+
+    trace_vfio_save_pending(vbasedev->name, *res_precopy_only,
+                            *res_postcopy_only, *res_compatible);
+}
+
+static int vfio_save_iterate(QEMUFile *f, void *opaque)
+{
+    VFIODevice *vbasedev = opaque;
+    VFIOMigration *migration = vbasedev->migration;
+    int ret, data_size;
+
+    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
+
+    qemu_mutex_lock(&migration->lock);
+    data_size = vfio_save_buffer(f, vbasedev);
+    qemu_mutex_unlock(&migration->lock);
+
+    if (data_size < 0) {
+        error_report("%s: vfio_save_buffer failed %s", vbasedev->name,
+                     strerror(errno));
+        return data_size;
+    }
+
+    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
+
+    ret = qemu_file_get_error(f);
+    if (ret) {
+        return ret;
+    }
+
+    trace_vfio_save_iterate(vbasedev->name, data_size);
+    if (data_size == 0) {
+        /* indicates data finished, goto complete phase */
+        return 1;
+    }
+
+    return 0;
+}
+
+static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
+{
+    VFIODevice *vbasedev = opaque;
+    VFIOMigration *migration = vbasedev->migration;
+    int ret;
+
+    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_SAVING);
+    if (ret) {
+        error_report("%s: Failed to set state STOP and SAVING",
+                     vbasedev->name);
+        return ret;
+    }
+
+    ret = vfio_save_device_config_state(f, opaque);
+    if (ret) {
+        return ret;
+    }
+
+    ret = vfio_update_pending(vbasedev);
+    if (ret) {
+        return ret;
+    }
+
+    while (migration->pending_bytes > 0) {
+        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
+        ret = vfio_save_buffer(f, vbasedev);
+        if (ret < 0) {
+            error_report("%s: Failed to save buffer", vbasedev->name);
+            return ret;
+        } else if (ret == 0) {
+            break;
+        }
+
+        ret = vfio_update_pending(vbasedev);
+        if (ret) {
+            return ret;
+        }
+    }
+
+    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
+
+    ret = qemu_file_get_error(f);
+    if (ret) {
+        return ret;
+    }
+
+    ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_MASK);
+    if (ret) {
+        error_report("%s: Failed to set state STOPPED", vbasedev->name);
+        return ret;
+    }
+
+    trace_vfio_save_complete_precopy(vbasedev->name);
+    return ret;
+}
+
 static SaveVMHandlers savevm_vfio_handlers = {
     .save_setup = vfio_save_setup,
     .save_cleanup = vfio_save_cleanup,
+    .save_live_pending = vfio_save_pending,
+    .save_live_iterate = vfio_save_iterate,
+    .save_live_complete_precopy = vfio_save_complete_precopy,
 };
 
 /* ---------------------------------------------------------------------- */
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index 4bb43f18f315..bdf40ba368c7 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -151,3 +151,9 @@ vfio_vmstate_change(char *name, int running, const char *reason, uint32_t dev_st
 vfio_migration_state_notifier(char *name, int state) " (%s) state %d"
 vfio_save_setup(char *name) " (%s)"
 vfio_save_cleanup(char *name) " (%s)"
+vfio_save_buffer(char *name, uint64_t data_offset, uint64_t data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" pending 0x%"PRIx64
+vfio_update_pending(char *name, uint64_t pending) " (%s) pending 0x%"PRIx64
+vfio_save_device_config_state(char *name) " (%s)"
+vfio_save_pending(char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64
+vfio_save_iterate(char *name, int data_size) " (%s) data_size %d"
+vfio_save_complete_precopy(char *name) " (%s)"
-- 
2.7.0



