qemu-devel.nongnu.org archive mirror
From: Yan Zhao <yan.y.zhao@intel.com>
To: Kirti Wankhede <kwankhede@nvidia.com>
Cc: "Zhengxiao.zx@Alibaba-inc.com" <Zhengxiao.zx@Alibaba-inc.com>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	"Liu, Yi L" <yi.l.liu@intel.com>,
	"cjia@nvidia.com" <cjia@nvidia.com>,
	"eskultet@redhat.com" <eskultet@redhat.com>,
	"Yang, Ziye" <ziye.yang@intel.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"cohuck@redhat.com" <cohuck@redhat.com>,
	"shuangtai.tst@alibaba-inc.com" <shuangtai.tst@alibaba-inc.com>,
	"dgilbert@redhat.com" <dgilbert@redhat.com>,
	"Wang, Zhi A" <zhi.a.wang@intel.com>,
	"mlevitsk@redhat.com" <mlevitsk@redhat.com>,
	"pasic@linux.ibm.com" <pasic@linux.ibm.com>,
	"aik@ozlabs.ru" <aik@ozlabs.ru>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"eauger@redhat.com" <eauger@redhat.com>,
	"felipe@nutanix.com" <felipe@nutanix.com>,
	"jonathan.davies@nutanix.com" <jonathan.davies@nutanix.com>,
	"Liu, Changpeng" <changpeng.liu@intel.com>,
	"Ken.Xue@amd.com" <Ken.Xue@amd.com>
Subject: Re: [Qemu-devel] [PATCH v7 11/13] vfio: Add function to get dirty page list
Date: Thu, 18 Jul 2019 21:24:56 -0400	[thread overview]
Message-ID: <20190719012456.GG8912@joy-OptiPlex-7040> (raw)
In-Reply-To: <70fd135d-4719-e39c-09fe-d5a012520ea8@nvidia.com>

On Fri, Jul 19, 2019 at 02:39:10AM +0800, Kirti Wankhede wrote:
> 
> 
> On 7/12/2019 6:03 AM, Yan Zhao wrote:
> > On Tue, Jul 09, 2019 at 05:49:18PM +0800, Kirti Wankhede wrote:
> >> Dirty page tracking (.log_sync) is part of the RAM copying state: the
> >> vendor driver provides, through the migration region, a bitmap of the
> >> pages it has dirtied, and as part of the RAM copy those pages get
> >> copied to the file stream.
> >>
> >> To get dirty page bitmap:
> >> - write start address, page_size and pfn count.
> >> - read count of pfns copied.
> >>     - Vendor driver should return 0 if driver doesn't have any page to
> >>       report dirty in given range.
> >>     - Vendor driver should return -1 to mark all pages dirty for given range.
> >> - read data_offset, where vendor driver has written bitmap.
> >> - read bitmap from the region or mmaped part of the region.
> >> - Repeat the above steps until the page bitmap for all requested pfns
> >>   has been copied.
> >>
> >> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> >> Reviewed-by: Neo Jia <cjia@nvidia.com>
> >> ---
> >>  hw/vfio/migration.c           | 123 ++++++++++++++++++++++++++++++++++++++++++
> >>  hw/vfio/trace-events          |   1 +
> >>  include/hw/vfio/vfio-common.h |   2 +
> >>  3 files changed, 126 insertions(+)
> >>
> >> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> >> index 5fb4c5329ede..ca1a8c0f5f1f 100644
> >> --- a/hw/vfio/migration.c
> >> +++ b/hw/vfio/migration.c
> >> @@ -269,6 +269,129 @@ static int vfio_load_device_config_state(QEMUFile *f, void *opaque)
> >>      return qemu_file_get_error(f);
> >>  }
> >>  
> >> +void vfio_get_dirty_page_list(VFIODevice *vbasedev,
> >> +                              uint64_t start_pfn,
> >> +                              uint64_t pfn_count,
> >> +                              uint64_t page_size)
> >> +{
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +    VFIORegion *region = &migration->region.buffer;
> >> +    uint64_t count = 0;
> >> +    int64_t copied_pfns = 0;
> >> +    int64_t total_pfns = pfn_count;
> >> +    int ret;
> >> +
> >> +    qemu_mutex_lock(&migration->lock);
> >> +
> >> +    while (total_pfns > 0) {
> >> +        uint64_t bitmap_size, data_offset = 0;
> >> +        uint64_t start = start_pfn + count;
> >> +        void *buf = NULL;
> >> +        bool buffer_mmaped = false;
> >> +
> >> +        ret = pwrite(vbasedev->fd, &start, sizeof(start),
> >> +                 region->fd_offset + offsetof(struct vfio_device_migration_info,
> >> +                                              start_pfn));
> >> +        if (ret < 0) {
> >> +            error_report("%s: Failed to set dirty pages start address %d %s",
> >> +                         vbasedev->name, ret, strerror(errno));
> >> +            goto dpl_unlock;
> >> +        }
> >> +
> >> +        ret = pwrite(vbasedev->fd, &page_size, sizeof(page_size),
> >> +                 region->fd_offset + offsetof(struct vfio_device_migration_info,
> >> +                                              page_size));
> >> +        if (ret < 0) {
> >> +            error_report("%s: Failed to set dirty page size %d %s",
> >> +                         vbasedev->name, ret, strerror(errno));
> >> +            goto dpl_unlock;
> >> +        }
> >> +
> >> +        ret = pwrite(vbasedev->fd, &total_pfns, sizeof(total_pfns),
> >> +                 region->fd_offset + offsetof(struct vfio_device_migration_info,
> >> +                                              total_pfns));
> >> +        if (ret < 0) {
> >> +            error_report("%s: Failed to set dirty page total pfns %d %s",
> >> +                         vbasedev->name, ret, strerror(errno));
> >> +            goto dpl_unlock;
> >> +        }
> >> +
> >> +        /* Read copied dirty pfns */
> >> +        ret = pread(vbasedev->fd, &copied_pfns, sizeof(copied_pfns),
> >> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> >> +                                             copied_pfns));
> >> +        if (ret < 0) {
> >> +            error_report("%s: Failed to get dirty pages bitmap count %d %s",
> >> +                         vbasedev->name, ret, strerror(errno));
> >> +            goto dpl_unlock;
> >> +        }
> >> +
> >> +        if (copied_pfns == VFIO_DEVICE_DIRTY_PFNS_NONE) {
> >> +            /*
> >> +             * copied_pfns could be 0 if driver doesn't have any page to
> >> +             * report dirty in given range
> >> +             */
> >> +            break;
> >> +        } else if (copied_pfns == VFIO_DEVICE_DIRTY_PFNS_ALL) {
> >> +            /* Mark all pages dirty for this range */
> >> +            cpu_physical_memory_set_dirty_range(start_pfn * page_size,
> >> +                                                pfn_count * page_size,
> >> +                                                DIRTY_MEMORY_MIGRATION);
> > seems pfn_count here is not right
> 
> Changing it to total_pfns in next version
>
if it's total_pfns, then it cannot stay inside the loop, right? total_pfns
is decremented on every iteration, so pairing it with start_pfn there
would cover neither the current query window nor the full requested range.

Thanks
Yan

> Thanks,
> Kirti
> 
> >> +            break;
> >> +        }
> >> +
> >> +        bitmap_size = (BITS_TO_LONGS(copied_pfns) + 1) * sizeof(unsigned long);
> >> +
> >> +        ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> >> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
> >> +                                             data_offset));
> >> +        if (ret != sizeof(data_offset)) {
> >> +            error_report("%s: Failed to get migration buffer data offset %d",
> >> +                         vbasedev->name, ret);
> >> +            goto dpl_unlock;
> >> +        }
> >> +
> >> +        if (region->mmaps) {
> >> +            buf = find_data_region(region, data_offset, bitmap_size);
> >> +        }
> >> +
> >> +        buffer_mmaped = (buf != NULL) ? true : false;
> >> +
> >> +        if (!buffer_mmaped) {
> >> +            buf = g_try_malloc0(bitmap_size);
> >> +            if (!buf) {
> >> +                error_report("%s: Error allocating buffer ", __func__);
> >> +                goto dpl_unlock;
> >> +            }
> >> +
> >> +            ret = pread(vbasedev->fd, buf, bitmap_size,
> >> +                        region->fd_offset + data_offset);
> >> +            if (ret != bitmap_size) {
> >> +                error_report("%s: Failed to get dirty pages bitmap %d",
> >> +                             vbasedev->name, ret);
> >> +                g_free(buf);
> >> +                goto dpl_unlock;
> >> +            }
> >> +        }
> >> +
> >> +        cpu_physical_memory_set_dirty_lebitmap((unsigned long *)buf,
> >> +                                               (start_pfn + count) * page_size,
> >> +                                                copied_pfns);
> >> +        count      += copied_pfns;
> >> +        total_pfns -= copied_pfns;
> >> +
> >> +        if (!buffer_mmaped) {
> >> +            g_free(buf);
> >> +        }
> >> +    }
> >> +
> >> +    trace_vfio_get_dirty_page_list(vbasedev->name, start_pfn, pfn_count,
> >> +                                   page_size);
> >> +
> >> +dpl_unlock:
> >> +    qemu_mutex_unlock(&migration->lock);
> >> +}
> >> +
> >>  /* ---------------------------------------------------------------------- */
> >>  
> >>  static int vfio_save_setup(QEMUFile *f, void *opaque)
> >> diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
> >> index ac065b559f4e..414a5e69ec5e 100644
> >> --- a/hw/vfio/trace-events
> >> +++ b/hw/vfio/trace-events
> >> @@ -160,3 +160,4 @@ vfio_save_complete_precopy(char *name) " (%s)"
> >>  vfio_load_device_config_state(char *name) " (%s)"
> >>  vfio_load_state(char *name, uint64_t data) " (%s) data 0x%"PRIx64
> >>  vfio_load_state_device_data(char *name, uint64_t data_offset, uint64_t data_size) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64
> >> +vfio_get_dirty_page_list(char *name, uint64_t start, uint64_t pfn_count, uint64_t page_size) " (%s) start 0x%"PRIx64" pfn_count 0x%"PRIx64 " page size 0x%"PRIx64
> >> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> >> index a022484d2636..dc1b83a0b4ef 100644
> >> --- a/include/hw/vfio/vfio-common.h
> >> +++ b/include/hw/vfio/vfio-common.h
> >> @@ -222,5 +222,7 @@ int vfio_spapr_remove_window(VFIOContainer *container,
> >>  
> >>  int vfio_migration_probe(VFIODevice *vbasedev, Error **errp);
> >>  void vfio_migration_finalize(VFIODevice *vbasedev);
> >> +void vfio_get_dirty_page_list(VFIODevice *vbasedev, uint64_t start_pfn,
> >> +                               uint64_t pfn_count, uint64_t page_size);
> >>  
> >>  #endif /* HW_VFIO_VFIO_COMMON_H */
> >> -- 
> >> 2.7.0
> >>


