From: Kirti Wankhede <kwankhede@nvidia.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: Zhengxiao.zx@Alibaba-inc.com, kevin.tian@intel.com,
yi.l.liu@intel.com, cjia@nvidia.com, eskultet@redhat.com,
ziye.yang@intel.com, qemu-devel@nongnu.org, cohuck@redhat.com,
shuangtai.tst@alibaba-inc.com, dgilbert@redhat.com,
zhi.a.wang@intel.com, mlevitsk@redhat.com, pasic@linux.ibm.com,
aik@ozlabs.ru, eauger@redhat.com, felipe@nutanix.com,
jonathan.davies@nutanix.com, yan.y.zhao@intel.com,
changpeng.liu@intel.com, Ken.Xue@amd.com
Subject: Re: [PATCH v16 QEMU 09/16] vfio: Add save state functions to SaveVMHandlers
Date: Tue, 5 May 2020 04:48:14 +0530
Message-ID: <b57322be-a337-ccb8-19e3-6c6bc3343119@nvidia.com>
In-Reply-To: <20200325160311.265ca037@w520.home>
On 3/26/2020 3:33 AM, Alex Williamson wrote:
> On Wed, 25 Mar 2020 02:39:07 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>
>> Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy
>> functions. These functions handle the pre-copy and stop-and-copy phases.
>>
>> In _SAVING|_RUNNING device state or pre-copy phase:
>> - read pending_bytes. If pending_bytes > 0, go through below steps.
>> - read data_offset - indicates kernel driver to write data to staging
>> buffer.
>> - read data_size - amount of data in bytes written by vendor driver in
>> migration region.
>> - read data_size bytes of data from data_offset in the migration region.
>> - Write data packet to file stream as below:
>> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
>> VFIO_MIG_FLAG_END_OF_STATE }
>>
>> In _SAVING device state or stop-and-copy phase
>> a. read config space of device and save to migration file stream. This
>> doesn't need to be from vendor driver. Any other special config state
>> from driver can be saved as data in following iteration.
>> b. read pending_bytes. If pending_bytes > 0, go through below steps.
>> c. read data_offset - indicates kernel driver to write data to staging
>> buffer.
>> d. read data_size - amount of data in bytes written by vendor driver in
>> migration region.
>> e. read data_size bytes of data from data_offset in the migration region.
>> f. Write data packet as below:
>> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
>> g. iterate through steps b to f while (pending_bytes > 0)
>> h. Write {VFIO_MIG_FLAG_END_OF_STATE}
>>
>> When the data region is mapped, it is the user's responsibility to read
>> data_size bytes of data from data_offset before moving to the next step.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>> hw/vfio/migration.c | 245 +++++++++++++++++++++++++++++++++++++++++-
>> hw/vfio/trace-events | 6 ++
>> include/hw/vfio/vfio-common.h | 1 +
>> 3 files changed, 251 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
>> index 033f76526e49..ecbeed5182c2 100644
>> --- a/hw/vfio/migration.c
>> +++ b/hw/vfio/migration.c
>> @@ -138,6 +138,137 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t mask,
>> return 0;
>> }
>>
>> +static void *find_data_region(VFIORegion *region,
>> + uint64_t data_offset,
>> + uint64_t data_size)
>> +{
>> + void *ptr = NULL;
>> + int i;
>> +
>> + for (i = 0; i < region->nr_mmaps; i++) {
>> + if ((data_offset >= region->mmaps[i].offset) &&
>> + (data_offset < region->mmaps[i].offset + region->mmaps[i].size) &&
>> + (data_size <= region->mmaps[i].size)) {
>
> (data_offset - region->mmaps[i].offset) can be non-zero, so this test
> is invalid. Additionally the uapi does not require that a give data
> chunk fits exclusively within an mmap'd area, it may overlap one or
> more mmap'd sections of the region, possibly with non-mmap'd areas
> included.
>
What's the advantage of letting a data chunk span overlapping mmap'd and
non-mmap'd areas? Isn't it better to have the data section either fully
mapped or fully trapped?
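For reference, if we do require that a data chunk fit entirely within a
single mmap'd area, a sketch of the tightened bounds check (illustrative
only, not the final fix) could be:

    /* illustrative sketch: require the whole [data_offset,
     * data_offset + data_size) range to lie inside one mmap'd area,
     * instead of only checking data_size against the area size */
    if ((data_offset >= region->mmaps[i].offset) &&
        (data_offset + data_size <=
         region->mmaps[i].offset + region->mmaps[i].size)) {
        ptr = region->mmaps[i].mmap +
              (data_offset - region->mmaps[i].offset);
        break;
    }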
>> + ptr = region->mmaps[i].mmap + (data_offset -
>> + region->mmaps[i].offset);
>> + break;
>> + }
>> + }
>> + return ptr;
>> +}
>> +
>> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
>> +{
>> + VFIOMigration *migration = vbasedev->migration;
>> + VFIORegion *region = &migration->region;
>> + uint64_t data_offset = 0, data_size = 0;
>> + int ret;
>> +
>> + ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
>> + region->fd_offset + offsetof(struct vfio_device_migration_info,
>> + data_offset));
>> + if (ret != sizeof(data_offset)) {
>> + error_report("%s: Failed to get migration buffer data offset %d",
>> + vbasedev->name, ret);
>> + return -EINVAL;
>> + }
>> +
>> + ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
>> + region->fd_offset + offsetof(struct vfio_device_migration_info,
>> + data_size));
>> + if (ret != sizeof(data_size)) {
>> + error_report("%s: Failed to get migration buffer data size %d",
>> + vbasedev->name, ret);
>> + return -EINVAL;
>> + }
>> +
>> + if (data_size > 0) {
>> + void *buf = NULL;
>> + bool buffer_mmaped;
>> +
>> + if (region->mmaps) {
>> + buf = find_data_region(region, data_offset, data_size);
>> + }
>> +
>> + buffer_mmaped = (buf != NULL) ? true : false;
>
> The ternary is unnecessary, "? true : false" is redundant.
>
Removing it.
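i.e. a sketch of the simplified assignment:

    buffer_mmaped = (buf != NULL);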
>> +
>> + if (!buffer_mmaped) {
>> + buf = g_try_malloc0(data_size);
>
> Why do we need zero'd memory?
>
Zeroed memory isn't required; removing the 0.
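i.e. a sketch of the updated allocation:

    buf = g_try_malloc(data_size);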
>> + if (!buf) {
>> + error_report("%s: Error allocating buffer ", __func__);
>> + return -ENOMEM;
>> + }
>> +
>> + ret = pread(vbasedev->fd, buf, data_size,
>> + region->fd_offset + data_offset);
>> + if (ret != data_size) {
>> + error_report("%s: Failed to get migration data %d",
>> + vbasedev->name, ret);
>> + g_free(buf);
>> + return -EINVAL;
>> + }
>> + }
>> +
>> + qemu_put_be64(f, data_size);
>> + qemu_put_buffer(f, buf, data_size);
>
> This can segfault when mmap'd given the above assumptions about size
> and layout.
>
>> +
>> + if (!buffer_mmaped) {
>> + g_free(buf);
>> + }
>> + } else {
>> + qemu_put_be64(f, data_size);
>
> We insert a zero? Couldn't we add the section header and end here and
> skip it entirely?
>
This is used during resume: a data_size of 0 indicates the end of the data.
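A rough sketch of the resume-side handling (illustrative only; the actual
code is in the load patch of this series):

    /* illustrative sketch: a data_size of 0 terminates the device
     * data stream for this section */
    data_size = qemu_get_be64(f);
    if (data_size == 0) {
        break;
    }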
>> + }
>> +
>> + trace_vfio_save_buffer(vbasedev->name, data_offset, data_size,
>> + migration->pending_bytes);
>> +
>> + ret = qemu_file_get_error(f);
>> + if (ret) {
>> + return ret;
>> + }
>> +
>> + return data_size;
>> +}
>> +
>> +static int vfio_update_pending(VFIODevice *vbasedev)
>> +{
>> + VFIOMigration *migration = vbasedev->migration;
>> + VFIORegion *region = &migration->region;
>> + uint64_t pending_bytes = 0;
>> + int ret;
>> +
>> + ret = pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
>> + region->fd_offset + offsetof(struct vfio_device_migration_info,
>> + pending_bytes));
>> + if ((ret < 0) || (ret != sizeof(pending_bytes))) {
>> + error_report("%s: Failed to get pending bytes %d",
>> + vbasedev->name, ret);
>> + migration->pending_bytes = 0;
>> + return (ret < 0) ? ret : -EINVAL;
>> + }
>> +
>> + migration->pending_bytes = pending_bytes;
>> + trace_vfio_update_pending(vbasedev->name, pending_bytes);
>> + return 0;
>> +}
>> +
>> +static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
>> +{
>> + VFIODevice *vbasedev = opaque;
>> +
>> + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE);
>> +
>> + if (vbasedev->ops && vbasedev->ops->vfio_save_config) {
>> + vbasedev->ops->vfio_save_config(vbasedev, f);
>> + }
>> +
>> + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
>> +
>> + trace_vfio_save_device_config_state(vbasedev->name);
>> +
>> + return qemu_file_get_error(f);
>> +}
>> +
>> /* ---------------------------------------------------------------------- */
>>
>> static int vfio_save_setup(QEMUFile *f, void *opaque)
>> @@ -154,7 +285,7 @@ static int vfio_save_setup(QEMUFile *f, void *opaque)
>> qemu_mutex_unlock_iothread();
>> if (ret) {
>> error_report("%s: Failed to mmap VFIO migration region %d: %s",
>> - vbasedev->name, migration->region.index,
>> + vbasedev->name, migration->region.nr,
>> strerror(-ret));
>> return ret;
>> }
>> @@ -194,9 +325,121 @@ static void vfio_save_cleanup(void *opaque)
>> trace_vfio_save_cleanup(vbasedev->name);
>> }
>>
>> +static void vfio_save_pending(QEMUFile *f, void *opaque,
>> + uint64_t threshold_size,
>> + uint64_t *res_precopy_only,
>> + uint64_t *res_compatible,
>> + uint64_t *res_postcopy_only)
>> +{
>> + VFIODevice *vbasedev = opaque;
>> + VFIOMigration *migration = vbasedev->migration;
>> + int ret;
>> +
>> + ret = vfio_update_pending(vbasedev);
>> + if (ret) {
>> + return;
>> + }
>> +
>> + *res_precopy_only += migration->pending_bytes;
>> +
>> + trace_vfio_save_pending(vbasedev->name, *res_precopy_only,
>> + *res_postcopy_only, *res_compatible);
>> +}
>> +
>> +static int vfio_save_iterate(QEMUFile *f, void *opaque)
>> +{
>> + VFIODevice *vbasedev = opaque;
>> + int ret, data_size;
>> +
>> + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
>> +
>> + data_size = vfio_save_buffer(f, vbasedev);
>> +
>> + if (data_size < 0) {
>> + error_report("%s: vfio_save_buffer failed %s", vbasedev->name,
>> + strerror(errno));
>> + return data_size;
>> + }
>> +
>> + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
>> +
>> + ret = qemu_file_get_error(f);
>> + if (ret) {
>> + return ret;
>> + }
>> +
>> + trace_vfio_save_iterate(vbasedev->name, data_size);
>> + if (data_size == 0) {
>> + /* indicates data finished, goto complete phase */
>> + return 1;
>
> But it's pending_bytes not data_size that indicates we're done. How do
> we get away with ignoring pending_bytes for the save_live_iterate phase?
>
This follows the requirement documented above qemu_savevm_state_iterate(),
which calls .save_live_iterate:
/*
* this function has three return values:
* negative: there was one error, and we have -errno.
* 0 : We haven't finished, caller have to go again
* 1 : We have finished, we can go to complete phase
*/
int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy)
This serializes savevm_state.handlers (in other words, the devices).
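Roughly, the caller's loop looks like the sketch below (paraphrased from
memory, not a verbatim quote of savevm.c):

    /* paraphrased sketch of qemu_savevm_state_iterate() */
    QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
        ret = se->ops->save_live_iterate(f, se->opaque);
        if (ret <= 0) {
            /* stay on this device until it reports completion (1) */
            break;
        }
    }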
Thanks,
Kirti
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
>> +{
>> + VFIODevice *vbasedev = opaque;
>> + VFIOMigration *migration = vbasedev->migration;
>> + int ret;
>> +
>> + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_RUNNING,
>> + VFIO_DEVICE_STATE_SAVING);
>> + if (ret) {
>> + error_report("%s: Failed to set state STOP and SAVING",
>> + vbasedev->name);
>> + return ret;
>> + }
>> +
>> + ret = vfio_save_device_config_state(f, opaque);
>> + if (ret) {
>> + return ret;
>> + }
>> +
>> + ret = vfio_update_pending(vbasedev);
>> + if (ret) {
>> + return ret;
>> + }
>> +
>> + while (migration->pending_bytes > 0) {
>> + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
>> + ret = vfio_save_buffer(f, vbasedev);
>> + if (ret < 0) {
>> + error_report("%s: Failed to save buffer", vbasedev->name);
>> + return ret;
>> + } else if (ret == 0) {
>> + break;
>> + }
>> +
>> + ret = vfio_update_pending(vbasedev);
>> + if (ret) {
>> + return ret;
>> + }
>> + }
>> +
>> + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
>> +
>> + ret = qemu_file_get_error(f);
>> + if (ret) {
>> + return ret;
>> + }
>> +
>> + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_SAVING, 0);
>> + if (ret) {
>> + error_report("%s: Failed to set state STOPPED", vbasedev->name);
>> + return ret;
>> + }
>> +
>> + trace_vfio_save_complete_precopy(vbasedev->name);
>> + return ret;
>> +}
>> +
>> static SaveVMHandlers savevm_vfio_handlers = {
>> .save_setup = vfio_save_setup,
>> .save_cleanup = vfio_save_cleanup,
>> + .save_live_pending = vfio_save_pending,
>> + .save_live_iterate = vfio_save_iterate,
>> + .save_live_complete_precopy = vfio_save_complete_precopy,
>> };
>>
>> /* ---------------------------------------------------------------------- */
>> diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
>> index 4bb43f18f315..bdf40ba368c7 100644
>> --- a/hw/vfio/trace-events
>> +++ b/hw/vfio/trace-events
>> @@ -151,3 +151,9 @@ vfio_vmstate_change(char *name, int running, const char *reason, uint32_t dev_st
>> vfio_migration_state_notifier(char *name, int state) " (%s) state %d"
>> vfio_save_setup(char *name) " (%s)"
>> vfio_save_cleanup(char *name) " (%s)"
>> +vfio_save_buffer(char *name, uint64_t data_offset, uint64_t data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" pending 0x%"PRIx64
>> +vfio_update_pending(char *name, uint64_t pending) " (%s) pending 0x%"PRIx64
>> +vfio_save_device_config_state(char *name) " (%s)"
>> +vfio_save_pending(char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64
>> +vfio_save_iterate(char *name, int data_size) " (%s) data_size %d"
>> +vfio_save_complete_precopy(char *name) " (%s)"
>> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
>> index 28f55f66d019..c78033e4149d 100644
>> --- a/include/hw/vfio/vfio-common.h
>> +++ b/include/hw/vfio/vfio-common.h
>> @@ -60,6 +60,7 @@ typedef struct VFIORegion {
>>
>> typedef struct VFIOMigration {
>> VFIORegion region;
>> + uint64_t pending_bytes;
>> } VFIOMigration;
>>
>> typedef struct VFIOAddressSpace {
>