From: Kirti Wankhede <kwankhede@nvidia.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: Zhengxiao.zx@Alibaba-inc.com, kevin.tian@intel.com,
	yi.l.liu@intel.com, cjia@nvidia.com, eskultet@redhat.com,
	ziye.yang@intel.com, qemu-devel@nongnu.org, cohuck@redhat.com,
	shuangtai.tst@alibaba-inc.com, dgilbert@redhat.com,
	zhi.a.wang@intel.com, mlevitsk@redhat.com, pasic@linux.ibm.com,
	aik@ozlabs.ru, eauger@redhat.com, felipe@nutanix.com,
	jonathan.davies@nutanix.com, yan.y.zhao@intel.com,
	changpeng.liu@intel.com, Ken.Xue@amd.com
Subject: Re: [Qemu-devel] [PATCH v4 08/13] vfio: Add save state functions to SaveVMHandlers
Date: Mon, 24 Jun 2019 20:01:26 +0530
Message-ID: <9c334b12-5715-fd02-8060-83058a9e89b7@nvidia.com>
In-Reply-To: <20190621161325.30992675@x1.home>

<snip>
>>>>>>>>>> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
>>>>>>>>>> +{
>>>>>>>>>> +    VFIOMigration *migration = vbasedev->migration;
>>>>>>>>>> +    VFIORegion *region = &migration->region.buffer;
>>>>>>>>>> +    uint64_t data_offset = 0, data_size = 0;
>>>>>>>>>> +    int ret;
>>>>>>>>>> +
>>>>>>>>>> +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
>>>>>>>>>> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
>>>>>>>>>> +                                             data_offset));
>>>>>>>>>> +    if (ret != sizeof(data_offset)) {
>>>>>>>>>> +        error_report("Failed to get migration buffer data offset %d",
>>>>>>>>>> +                     ret);
>>>>>>>>>> +        return -EINVAL;
>>>>>>>>>> +    }
>>>>>>>>>> +
>>>>>>>>>> +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
>>>>>>>>>> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
>>>>>>>>>> +                                             data_size));
>>>>>>>>>> +    if (ret != sizeof(data_size)) {
>>>>>>>>>> +        error_report("Failed to get migration buffer data size %d",
>>>>>>>>>> +                     ret);
>>>>>>>>>> +        return -EINVAL;
>>>>>>>>>> +    }
>>>>>>>>>> +
>>>>>>>>>> +    if (data_size > 0) {
>>>>>>>>>> +        void *buf = NULL;
>>>>>>>>>> +        bool buffer_mmaped = false;
>>>>>>>>>> +
>>>>>>>>>> +        if (region->mmaps) {
>>>>>>>>>> +            int i;
>>>>>>>>>> +
>>>>>>>>>> +            for (i = 0; i < region->nr_mmaps; i++) {
>>>>>>>>>> +                if ((data_offset >= region->mmaps[i].offset) &&
>>>>>>>>>> +                    (data_offset < region->mmaps[i].offset +
>>>>>>>>>> +                                   region->mmaps[i].size)) {
>>>>>>>>>> +                    buf = region->mmaps[i].mmap + (data_offset -
>>>>>>>>>> +                                                   region->mmaps[i].offset);        
>>>>>>>>>
>>>>>>>>> So you're expecting that data_offset is somewhere within the data
>>>>>>>>> area.  Why doesn't the data always simply start at the beginning of the
>>>>>>>>> data area?  ie. data_offset would coincide with the beginning of the
>>>>>>>>> mmap'able area (if supported) and be static.  Does this enable some
>>>>>>>>> functionality in the vendor driver?        
>>>>>>>>
>>>>>>>> Do you want to enforce that on the vendor driver?
>>>>>>>> From the feedback on the previous version, I thought the vendor
>>>>>>>> driver should define data_offset within the region:
>>>>>>>> "I'd suggest that the vendor driver expose a read-only
>>>>>>>> data_offset that matches a sparse mmap capability entry should the
>>>>>>>> driver support mmap.  The user should always read or write data
>>>>>>>> from the vendor defined data_offset"
>>>>>>>>
>>>>>>>> This also adds flexibility to the vendor driver: it can define
>>>>>>>> different data_offsets for device data and the dirty page bitmap
>>>>>>>> within the same mmapped region.
>>>>>>>
>>>>>>> I agree, it adds flexibility; the protocol was not evident to me
>>>>>>> until I got here, though.
>>>>>>>       
>>>>>>>>>  Does resume data need to be
>>>>>>>>> written from the same offset where it's read?        
>>>>>>>>
>>>>>>>> No, resume data should be written at the data_offset that the
>>>>>>>> vendor driver provides during resume.
>>>>>
>>>>> A)
>>>>>     
>>>>>>> s/resume/save/?    
>>>>>
>>>>> B)
>>>>>      
>>>>>>> Or is this saying that on resume the vendor driver is requesting a
>>>>>>> specific block of data via data_offset?
>>>>>>
>>>>>> Correct.    
>>>>>
>>>>> Which one is correct?  Thanks,
>>>>>     
>>>>
>>>> B is correct.  
>>>
>>> Shouldn't data_offset be stored in the migration stream then so we can
>>> at least verify that source and target are in sync?   
>>
>> Why? data_offset is an offset within the migration region; it has
>> nothing to do with the data stream. While resuming, the vendor driver
>> can ask for data at a different offset in the migration region.
> 
> So the data is opaque and the sequencing is opaque, the user should
> have no expectation that there's any relationship between where the
> data was read from while saving versus where the target device is
> requesting the next block be written while resuming.  We have a data
> blob and a size and we do what we're told.
> 

That's correct.
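
To make that asymmetry concrete, here is a minimal sketch of what a
resume-side counterpart to vfio_save_buffer() might look like under this
protocol (the function name vfio_load_buffer, the use of the trapped
pwrite path, and the error handling are my illustrative assumptions, not
code from this series): the destination first reads data_offset to learn
where the device wants the block, writes the opaque blob there, and only
then writes data_size.

    static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev,
                                uint64_t data_size)
    {
        VFIOMigration *migration = vbasedev->migration;
        VFIORegion *region = &migration->region.buffer;
        uint64_t data_offset = 0;
        void *buf;
        ssize_t ret;

        /* Ask the device where it wants this block written. */
        ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
                    region->fd_offset +
                    offsetof(struct vfio_device_migration_info, data_offset));
        if (ret != sizeof(data_offset)) {
            return -EINVAL;
        }

        buf = g_malloc(data_size);
        qemu_get_buffer(f, buf, data_size);

        /* Write the opaque blob at the device-requested offset; with a
         * sparse mmap this could be a memcpy into region->mmaps[] instead. */
        ret = pwrite(vbasedev->fd, buf, data_size,
                     region->fd_offset + data_offset);
        g_free(buf);
        if (ret != data_size) {
            return -EINVAL;
        }

        /* Report the size last; the device can validate on this write. */
        ret = pwrite(vbasedev->fd, &data_size, sizeof(data_size),
                     region->fd_offset +
                     offsetof(struct vfio_device_migration_info, data_size));
        if (ret != sizeof(data_size)) {
            return -EINVAL;
        }
        return 0;
    }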

>>> I'm not getting a
>>> sense that this protocol involves any sort of sanity or integrity
>>> testing on the vendor driver end; the user can just feed garbage into
>>> the device on resume and watch the results :-\  Thanks,
>>>  
>>
>> The vendor driver should be able to do sanity and integrity checks on
>> its opaque data. If such a check fails, it should return failure for
>> the access to the corresponding field in the migration region structure.
> 
> Would that be a synchronous failure on the write of data_size, which
> should result in the device_state moving to invalid?  Thanks,
> 

If the data section of the migration region is mmapped, then on a write
to data_size the vendor driver should read the staging buffer, validate
the data, and return sizeof(data_size) on success, or an error (< 0) on
failure. If the data section is trapped, then writes to the data section
should return accordingly as the data is received. On error,
migration/restore would fail.
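
For the mmapped case, a rough sketch of how a vendor driver might
implement that data_size write handler (struct vendor_device and the
vendor_* helpers below are placeholders invented for illustration, not
from any real driver):

    static ssize_t vendor_mig_data_size_write(struct vendor_device *vdev,
                                              const char __user *ubuf,
                                              size_t count)
    {
        u64 data_size;

        if (count != sizeof(data_size))
            return -EINVAL;
        if (copy_from_user(&data_size, ubuf, count))
            return -EFAULT;

        /* The data already sits in the mmapped staging buffer. */
        if (!vendor_validate_migration_data(vdev->staging_buf, data_size))
            return -EINVAL;     /* userspace sees the write fail */

        vendor_consume_migration_data(vdev, data_size);
        return count;           /* i.e. sizeof(data_size) on success */
    }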

Thanks,
Kirti


