* [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
From: Kirti Wankhede @ 2020-05-18  5:56 UTC
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Hi,

This patch set adds:
* IOCTL VFIO_IOMMU_DIRTY_PAGES to get the dirty pages bitmap with
  respect to the IOMMU container rather than per device. All pages pinned
  by the vendor driver through the vfio_pin_pages external API have to be
  marked dirty during migration. When an IOMMU capable device is present
  in the container and all pages are pinned and mapped, then all pages are
  marked dirty.
  CPU dirty page tracking can identify pages dirtied by CPU writes, but
  any page pinned by the vendor driver can also be written by the device.
  As of now there is no device which has hardware support for dirty page
  tracking, so all pinned pages should be considered dirty.
  This ioctl is also used to start/stop dirty pages tracking for pinned
  and unpinned pages while migration is active (a usage sketch follows
  after this list).

* Updated IOCTL VFIO_IOMMU_UNMAP_DMA to get the dirty pages bitmap before
  unmapping an IO virtual address range.
  With a vIOMMU, during the pre-copy phase of migration, while CPUs are
  still running, an IO virtual address unmap can happen while the device
  still holds references to guest pfns. Those pages should be reported as
  dirty before the unmap, so that the VFIO user space application can copy
  the content of those pages from source to destination.

* Patch 8 detects whether the driver of an IOMMU backed device reports the
  pages to be marked dirty by pinning them through the vfio_pin_pages()
  API.
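
For illustration only (not part of this series), a minimal user space
sketch of starting and stopping dirty page tracking on the container; the
container fd is assumed to be already set up and error handling is omitted:

#include <sys/ioctl.h>
#include <linux/vfio.h>

/* container_fd: an open, configured VFIO container (assumed) */
static int set_dirty_tracking(int container_fd, int start)
{
	struct vfio_iommu_type1_dirty_bitmap dirty = {
		.argsz = sizeof(dirty),
		.flags = start ? VFIO_IOMMU_DIRTY_PAGES_FLAG_START :
				 VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP,
	};

	/* while tracking is on, VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP (and
	 * UNMAP_DMA with the get-dirty-bitmap flag) report dirty pages */
	return ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &dirty);
}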


Still TODO:
Since there is no device with hardware support for system memory dirty
bitmap tracking, there is currently no other API from the vendor driver to
the VFIO IOMMU module to report dirty pages. In the future, when such
hardware support is implemented, an API will be required so that the
vendor driver can report dirty pages to the VFIO module during the
migration phases.

Revision history from the previous QEMU patch set is included below to
show the KABI changes made so far.

v21 -> v22
- Fixed issue raised by Alex :
https://lore.kernel.org/kvm/20200515163307.72951dd2@w520.home/

v20 -> v21
- Added check in GET_BITMAP ioctl for vfio_dma boundaries.
- Updated unmap ioctl function - as suggested by Alex.
- Updated comments in DIRTY_TRACKING ioctl definition - as suggested by
  Cornelia.

v19 -> v20
- Fixed ioctl to get dirty bitmap to get bitmap of multiple vfio_dmas
- Fixed unmap ioctl to get dirty bitmap of multiple vfio_dmas.
- Removed flag definition from migration capability.

v18 -> v19
- Updated migration capability with supported page sizes bitmap for dirty
  page tracking and maximum bitmap size supported by the kernel module.
- Added patch to calculate and cache pgsize_bitmap when iommu->domain_list
  is updated.
- Removed extra buffers added in previous version for bitmap manipulation
  and optimised the code.

v17 -> v18
- Add migration capability to the capability chain for VFIO_IOMMU_GET_INFO
  ioctl
- Updated UMAP_DMA ioctl to return bitmap of multiple vfio_dma

v16 -> v17
- Fixed errors reported by kbuild test robot <lkp@intel.com> on i386

v15 -> v16
- Minor edits and nit picks (Auger Eric)
- On copying bitmap to user, re-populated bitmap only for pinned pages,
  excluding unmapped pages and CPU dirtied pages.
- Patches are on tag: next-20200318 and 1-3 patches from Yan's series
  https://lkml.org/lkml/2020/3/12/1255

v14 -> v15
- Minor edits and nit picks.
- In the verification of user allocated bitmap memory, added check of
   maximum size.
- Patches are on tag: next-20200318 and 1-3 patches from Yan's series
  https://lkml.org/lkml/2020/3/12/1255

v13 -> v14
- Added struct vfio_bitmap to the KABI. Updated structures
  vfio_iommu_type1_dirty_bitmap_get and vfio_iommu_type1_dma_unmap.
- All small changes suggested by Alex.
- Patches are on tag: next-20200318 and 1-3 patches from Yan's series
  https://lkml.org/lkml/2020/3/12/1255

v12 -> v13
- Changed bitmap allocation in vfio_iommu_type1 to per vfio_dma
- Changed VFIO_IOMMU_DIRTY_PAGES ioctl behaviour to be per vfio_dma range.
- Changed vfio_iommu_type1_dirty_bitmap structure to have separate data
  field.

v11 -> v12
- Changed bitmap allocation in vfio_iommu_type1.
- Remove atomicity of ref_count.
- Updated comments for migration device state structure about error
  reporting.
- Nit picks from v11 reviews

v10 -> v11
- Fix pin pages API to free vpfn if it is marked as unpinned tracking page.
- Added proposal to detect if IOMMU capable device calls external pin pages
  API to mark pages dirty.
- Nit picks from v10 reviews

v9 -> v10:
- Updated existing VFIO_IOMMU_UNMAP_DMA ioctl to get dirty pages bitmap
  during unmap while migration is active
- Added flag in VFIO_IOMMU_GET_INFO to indicate that the driver supports
  dirty page tracking.
- If iommu_mapped, mark all pages dirty.
- Added unpinned pages tracking while migration is active.
- Updated comments for migration device state structure with bit
  combination table and state transition details.

v8 -> v9:
- Split patch set in 2 sets, Kernel and QEMU.
- Dirty pages bitmap is queried from the IOMMU container rather than per
  device from the vendor driver. Added 2 ioctls to achieve this.

v7 -> v8:
- Updated comments for KABI
- Added BAR address validation check during PCI device's config space load
  as suggested by Dr. David Alan Gilbert.
- Changed vfio_migration_set_state() to set or clear device state flags.
- Some nit fixes.

v6 -> v7:
- Fix build failures.

v5 -> v6:
- Fix build failure.

v4 -> v5:
- Added a descriptive comment about the sequence of access of members of
  structure vfio_device_migration_info to be followed, based on Alex's
  suggestion.
- Updated get dirty pages sequence.
- As per Cornelia Huck's suggestion, added callbacks to VFIODeviceOps to
  get_object, save_config and load_config.
- Fixed multiple nit picks.
- Tested live migration with multiple vfio devices assigned to a VM.

v3 -> v4:
- Added one more bit for _RESUMING flag to be set explicitly.
- data_offset field is read-only for user space application.
- data_size is read on every iteration before reading data from the
  migration region; this removes the assumption that the data extends to
  the end of the migration region.
- If the vendor driver supports mappable sparse regions, map those regions
  during the setup state of save/load and similarly unmap them in the
  cleanup routines.
- Handled a race condition that caused data corruption in the migration
  region during save of device state by adding a mutex and serializing the
  save_buffer and get_dirty_pages routines.
- Skipped calling the get_dirty_pages routine for the mapped MMIO region
  of the device.
- Added trace events.
- Split into multiple functional patches.

v2 -> v3:
- Removed enum of VFIO device states. Defined VFIO device state with 2
  bits.
- Re-structured vfio_device_migration_info to keep it minimal and defined
  action on read and write access on its members.

v1 -> v2:
- Defined MIGRATION region type and sub-type which should be used with
  region type capability.
- Re-structured vfio_device_migration_info. This structure will be placed
  at 0th offset of migration region.
- Replaced ioctl with read/write for trapped part of migration region.
- Added support for both types of access, trapped and mmapped, for the
  data section of the region.
- Moved PCI device functions to the pci file.
- Added iteration to get the dirty page bitmap until the bitmap for all
  requested pages is copied.

Thanks,
Kirti




Kirti Wankhede (8):
  vfio: UAPI for migration interface for device state
  vfio iommu: Remove atomicity of ref_count of pinned pages
  vfio iommu: Cache pgsize_bitmap in struct vfio_iommu
  vfio iommu: Add ioctl definition for dirty pages tracking
  vfio iommu: Implementation of ioctl for dirty pages tracking
  vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  vfio iommu: Add migration capability to report supported features
  vfio: Selective dirty page tracking if IOMMU backed device pins pages

 drivers/vfio/vfio.c             |  13 +-
 drivers/vfio/vfio_iommu_type1.c | 576 ++++++++++++++++++++++++++++++++++++----
 include/linux/vfio.h            |   4 +-
 include/uapi/linux/vfio.h       | 315 ++++++++++++++++++++++
 4 files changed, 849 insertions(+), 59 deletions(-)

-- 
2.7.0



* [PATCH Kernel v22 1/8] vfio: UAPI for migration interface for device state
From: Kirti Wankhede @ 2020-05-18  5:56 UTC
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

- Defined MIGRATION region type and sub-type.

- Defined the vfio_device_migration_info structure, which is placed at the
  0th offset of the migration region to get/set VFIO device related
  information. Defined the structure members and their usage on read/write
  access.

- Defined device states and state transition details.

- Defined sequence to be followed while saving and resuming VFIO device.
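
For illustration only (not part of this patch), a minimal user space
sketch of the save sequence described in the region comment below;
device_fd, the migration region file offset region_off (found via
VFIO_DEVICE_GET_REGION_INFO) and the save() callback are assumptions, and
error handling plus the read-modify-write of reserved device_state bits
are trimmed:

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>
#include <linux/vfio.h>

static int save_device_data(int device_fd, uint64_t region_off,
			    int (*save)(void *buf, uint64_t len))
{
	uint64_t pending, data_offset, data_size;
	uint32_t state = VFIO_DEVICE_STATE_RUNNING | VFIO_DEVICE_STATE_SAVING;
	char buf[4096];

	/* enter pre-copy: _SAVING | _RUNNING */
	if (pwrite(device_fd, &state, sizeof(state), region_off +
		   offsetof(struct vfio_device_migration_info, device_state))
	    != sizeof(state))
		return -1;

	for (;;) {
		/* a. read pending_bytes to start an iteration */
		if (pread(device_fd, &pending, sizeof(pending), region_off +
			  offsetof(struct vfio_device_migration_info,
				   pending_bytes)) != sizeof(pending))
			return -1;
		if (!pending)
			break;

		/* b. read data_offset: the vendor driver stages the data */
		pread(device_fd, &data_offset, sizeof(data_offset), region_off +
		      offsetof(struct vfio_device_migration_info, data_offset));
		/* c. read data_size: bytes staged for this iteration */
		pread(device_fd, &data_size, sizeof(data_size), region_off +
		      offsetof(struct vfio_device_migration_info, data_size));

		/* d./e. read and process the staged data (trapped access
		 * shown; an mmapped data section works the same way) */
		while (data_size) {
			uint64_t len = data_size < sizeof(buf) ?
				       data_size : sizeof(buf);

			pread(device_fd, buf, len, region_off + data_offset);
			save(buf, len);
			data_offset += len;
			data_size -= len;
		}
		/* f. the next read of pending_bytes tells the vendor driver
		 * that this iteration has been consumed */
	}
	return 0;
}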

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 include/uapi/linux/vfio.h | 228 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 228 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 015516bcfaa3..ad9bb5af3463 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
 #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
 #define VFIO_REGION_TYPE_GFX                    (1)
 #define VFIO_REGION_TYPE_CCW			(2)
+#define VFIO_REGION_TYPE_MIGRATION              (3)
 
 /* sub-types for VFIO_REGION_TYPE_PCI_* */
 
@@ -379,6 +380,233 @@ struct vfio_region_gfx_edid {
 /* sub-types for VFIO_REGION_TYPE_CCW */
 #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
 
+/* sub-types for VFIO_REGION_TYPE_MIGRATION */
+#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
+
+/*
+ * The structure vfio_device_migration_info is placed at the 0th offset of
+ * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
+ * migration information. Field accesses from this structure are only supported
+ * at their native width and alignment. Otherwise, the result is undefined and
+ * vendor drivers should return an error.
+ *
+ * device_state: (read/write)
+ *      - The user application writes to this field to inform the vendor driver
+ *        about the device state to be transitioned to.
+ *      - The vendor driver should take the necessary actions to change the
+ *        device state. After successful transition to a given state, the
+ *        vendor driver should return success on write(device_state, state)
+ *        system call. If the device state transition fails, the vendor driver
+ *        should return an appropriate -errno for the fault condition.
+ *      - On the user application side, if the device state transition fails,
+ *	  that is, if write(device_state, state) returns an error, read
+ *	  device_state again to determine the current state of the device from
+ *	  the vendor driver.
+ *      - The vendor driver should return previous state of the device unless
+ *        the vendor driver has encountered an internal error, in which case
+ *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
+ *      - The user application must use the device reset ioctl to recover the
+ *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
+ *        indicated to be in a valid device state by reading device_state, the
+ *        user application may attempt to transition the device to any valid
+ *        state reachable from the current state or terminate itself.
+ *
+ *      device_state consists of 3 bits:
+ *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
+ *        it indicates the _STOP state. When the device state is changed to
+ *        _STOP, driver should stop the device before write() returns.
+ *      - If bit 1 is set, it indicates the _SAVING state, which means that the
+ *        driver should start gathering device state information that will be
+ *        provided to the VFIO user application to save the device's state.
+ *      - If bit 2 is set, it indicates the _RESUMING state, which means that
+ *        the driver should prepare to resume the device. Data provided through
+ *        the migration region should be used to resume the device.
+ *      Bits 3 - 31 are reserved for future use. To preserve them, the user
+ *      application should perform a read-modify-write operation on this
+ *      field when modifying the specified bits.
+ *
+ *  +------- _RESUMING
+ *  |+------ _SAVING
+ *  ||+----- _RUNNING
+ *  |||
+ *  000b => Device Stopped, not saving or resuming
+ *  001b => Device running, which is the default state
+ *  010b => Stop the device & save the device state, stop-and-copy state
+ *  011b => Device running and save the device state, pre-copy state
+ *  100b => Device stopped and the device state is resuming
+ *  101b => Invalid state
+ *  110b => Error state
+ *  111b => Invalid state
+ *
+ * State transitions:
+ *
+ *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
+ *                (100b)     (001b)     (011b)        (010b)       (000b)
+ * 0. Running or default state
+ *                             |
+ *
+ * 1. Normal Shutdown (optional)
+ *                             |------------------------------------->|
+ *
+ * 2. Save the state or suspend
+ *                             |------------------------->|---------->|
+ *
+ * 3. Save the state during live migration
+ *                             |----------->|------------>|---------->|
+ *
+ * 4. Resuming
+ *                  |<---------|
+ *
+ * 5. Resumed
+ *                  |--------->|
+ *
+ * 0. Default state of VFIO device is _RUNNING when the user application starts.
+ * 1. During normal shutdown of the user application, the user application may
+ *    optionally change the VFIO device state from _RUNNING to _STOP. This
+ *    transition is optional. The vendor driver must support this transition but
+ *    must not require it.
+ * 2. When the user application saves state or suspends the application, the
+ *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
+ *    On state transition from _RUNNING to stop-and-copy, driver must stop the
+ *    device, save the device state and send it to the application through the
+ *    migration region. The sequence to be followed for such transition is given
+ *    below.
+ * 3. In live migration of user application, the state transitions from _RUNNING
+ *    to pre-copy, to stop-and-copy, and to _STOP.
+ *    On state transition from _RUNNING to pre-copy, the driver should start
+ *    gathering the device state while the application is still running and send
+ *    the device state data to application through the migration region.
+ *    On state transition from pre-copy to stop-and-copy, the driver must stop
+ *    the device, save the device state and send it to the user application
+ *    through the migration region.
+ *    Vendor drivers must support the pre-copy state even for implementations
+ *    where no data is provided to the user before the stop-and-copy state. The
+ *    user must not be required to consume all migration data before the device
+ *    transitions to a new state, including the stop-and-copy state.
+ *    The sequence to be followed for above two transitions is given below.
+ * 4. To start the resuming phase, the device state should be transitioned from
+ *    the _RUNNING to the _RESUMING state.
+ *    In the _RESUMING state, the driver should use the device state data
+ *    received through the migration region to resume the device.
+ * 5. After providing saved device data to the driver, the application should
+ *    change the state from _RESUMING to _RUNNING.
+ *
+ * reserved:
+ *      Reads on this field return zero and writes are ignored.
+ *
+ * pending_bytes: (read only)
+ *      The number of pending bytes still to be migrated from the vendor driver.
+ *
+ * data_offset: (read only)
+ *      The user application should read data_offset field from the migration
+ *      region. The user application should read the device data from this
+ *      offset within the migration region during the _SAVING state or write
+ *      the device data during the _RESUMING state. See below for details of
+ *      sequence to be followed.
+ *
+ * data_size: (read/write)
+ *      The user application should read data_size to get the size in bytes of
+ *      the data copied in the migration region during the _SAVING state and
+ *      write the size in bytes of the data copied in the migration region
+ *      during the _RESUMING state.
+ *
+ * The format of the migration region is as follows:
+ *  ------------------------------------------------------------------
+ * |vfio_device_migration_info|    data section                      |
+ * |                          |     ///////////////////////////////  |
+ * ------------------------------------------------------------------
+ *   ^                              ^
+ *  offset 0-trapped part        data_offset
+ *
+ * The structure vfio_device_migration_info is always followed by the data
+ * section in the region, so data_offset will always be nonzero. The offset
+ * from where the data is copied is decided by the kernel driver. The data
+ * section can be trapped, mmapped, or partitioned, depending on how the kernel
+ * driver defines the data section. The data section partition can be defined
+ * as mapped by the sparse mmap capability. If mmapped, data_offset must be
+ * page aligned, whereas the initial section, which contains the
+ * vfio_device_migration_info structure, might not end at a page aligned
+ * offset. The user is not required to access the data through mmap
+ * regardless of the capabilities of the region mmap.
+ * The vendor driver should determine whether and how to partition the data
+ * section. The vendor driver should return data_offset accordingly.
+ *
+ * The sequence to be followed while in pre-copy state and stop-and-copy state
+ * is as follows:
+ * a. Read pending_bytes, indicating the start of a new iteration to get device
+ *    data. Repeated read on pending_bytes at this stage should have no side
+ *    effects.
+ *    If pending_bytes == 0, the user application should not iterate to get data
+ *    for that device.
+ *    If pending_bytes > 0, perform the following steps.
+ * b. Read data_offset, indicating that the vendor driver should make data
+ *    available through the data section. The vendor driver should return this
+ *    read operation only after data is available from (region + data_offset)
+ *    to (region + data_offset + data_size).
+ * c. Read data_size, which is the amount of data in bytes available through
+ *    the migration region.
+ *    Read on data_offset and data_size should return the offset and size of
+ *    the current buffer if the user application reads data_offset and
+ *    data_size more than once here.
+ * d. Read data_size bytes of data from (region + data_offset) from the
+ *    migration region.
+ * e. Process the data.
+ * f. Read pending_bytes, which indicates that the data from the previous
+ *    iteration has been read. If pending_bytes > 0, go to step b.
+ *
+ * The user application can transition from the _SAVING|_RUNNING
+ * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
+ * number of pending bytes. The user application should iterate in _SAVING
+ * (stop-and-copy) until pending_bytes is 0.
+ *
+ * The sequence to be followed while _RESUMING device state is as follows:
+ * While data for this device is available, repeat the following steps:
+ * a. Read data_offset from where the user application should write data.
+ * b. Write migration data starting at the migration region + data_offset for
+ *    the length determined by data_size from the migration source.
+ * c. Write data_size, which indicates to the vendor driver that data is
+ *    written in the migration region. Vendor driver must return this write
+ *    operation on consuming the data. Vendor driver should apply the
+ *    user-provided migration region data to the device resume state.
+ *
+ * If an error occurs during the above sequences, the vendor driver can return
+ * an error code for next read() or write() operation, which will terminate the
+ * loop. The user application should then take the next necessary action, for
+ * example, failing migration or terminating the user application.
+ *
+ * For the user application, data is opaque. The user application should write
+ * data in the same order as the data is received and the data should be of
+ * same transaction size at the source.
+ */
+
+struct vfio_device_migration_info {
+	__u32 device_state;         /* VFIO device state */
+#define VFIO_DEVICE_STATE_STOP      (0)
+#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
+#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
+#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
+#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
+				     VFIO_DEVICE_STATE_SAVING |  \
+				     VFIO_DEVICE_STATE_RESUMING)
+
+#define VFIO_DEVICE_STATE_VALID(state) \
+	(state & VFIO_DEVICE_STATE_RESUMING ? \
+	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
+
+#define VFIO_DEVICE_STATE_IS_ERROR(state) \
+	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
+					      VFIO_DEVICE_STATE_RESUMING))
+
+#define VFIO_DEVICE_STATE_SET_ERROR(state) \
+	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_STATE_SAVING | \
+					     VFIO_DEVICE_STATE_RESUMING)
+
+	__u32 reserved;
+	__u64 pending_bytes;
+	__u64 data_offset;
+	__u64 data_size;
+};
+
 /*
  * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
  * which allows direct access to non-MSIX registers which happened to be within
-- 
2.7.0



* [PATCH Kernel v22 2/8] vfio iommu: Remove atomicity of ref_count of pinned pages
From: Kirti Wankhede @ 2020-05-18  5:56 UTC
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

vfio_pfn.ref_count is always updated while holding iommu->lock, so using
an atomic variable is overkill.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
---
 drivers/vfio/vfio_iommu_type1.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index a0c60f895b24..fa735047b04d 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -112,7 +112,7 @@ struct vfio_pfn {
 	struct rb_node		node;
 	dma_addr_t		iova;		/* Device address */
 	unsigned long		pfn;		/* Host pfn */
-	atomic_t		ref_count;
+	unsigned int		ref_count;
 };
 
 struct vfio_regions {
@@ -233,7 +233,7 @@ static int vfio_add_to_pfn_list(struct vfio_dma *dma, dma_addr_t iova,
 
 	vpfn->iova = iova;
 	vpfn->pfn = pfn;
-	atomic_set(&vpfn->ref_count, 1);
+	vpfn->ref_count = 1;
 	vfio_link_pfn(dma, vpfn);
 	return 0;
 }
@@ -251,7 +251,7 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
 	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
 
 	if (vpfn)
-		atomic_inc(&vpfn->ref_count);
+		vpfn->ref_count++;
 	return vpfn;
 }
 
@@ -259,7 +259,8 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
 {
 	int ret = 0;
 
-	if (atomic_dec_and_test(&vpfn->ref_count)) {
+	vpfn->ref_count--;
+	if (!vpfn->ref_count) {
 		ret = put_pfn(vpfn->pfn, dma->prot);
 		vfio_remove_from_pfn_list(dma, vpfn);
 	}
-- 
2.7.0



* [PATCH Kernel v22 3/8] vfio iommu: Cache pgsize_bitmap in struct vfio_iommu
From: Kirti Wankhede @ 2020-05-18  5:56 UTC
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Calculate and cache pgsize_bitmap when iommu->domain_list is updated and
when iommu->external_domain is set for an mdev device.
Add iommu->lock protection when the cached pgsize_bitmap is accessed.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 88 +++++++++++++++++++++++------------------
 1 file changed, 49 insertions(+), 39 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index fa735047b04d..de17787ffece 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -69,6 +69,7 @@ struct vfio_iommu {
 	struct rb_root		dma_list;
 	struct blocking_notifier_head notifier;
 	unsigned int		dma_avail;
+	uint64_t		pgsize_bitmap;
 	bool			v2;
 	bool			nesting;
 };
@@ -805,15 +806,14 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 	iommu->dma_avail++;
 }
 
-static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
+static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
-	unsigned long bitmap = ULONG_MAX;
 
-	mutex_lock(&iommu->lock);
+	iommu->pgsize_bitmap = ULONG_MAX;
+
 	list_for_each_entry(domain, &iommu->domain_list, next)
-		bitmap &= domain->domain->pgsize_bitmap;
-	mutex_unlock(&iommu->lock);
+		iommu->pgsize_bitmap &= domain->domain->pgsize_bitmap;
 
 	/*
 	 * In case the IOMMU supports page sizes smaller than PAGE_SIZE
@@ -823,12 +823,10 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	 * granularity while iommu driver can use the sub-PAGE_SIZE size
 	 * to map the buffer.
 	 */
-	if (bitmap & ~PAGE_MASK) {
-		bitmap &= PAGE_MASK;
-		bitmap |= PAGE_SIZE;
+	if (iommu->pgsize_bitmap & ~PAGE_MASK) {
+		iommu->pgsize_bitmap &= PAGE_MASK;
+		iommu->pgsize_bitmap |= PAGE_SIZE;
 	}
-
-	return bitmap;
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
@@ -839,19 +837,28 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 	size_t unmapped = 0;
 	int ret = 0, retries = 0;
 
-	mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
+	mutex_lock(&iommu->lock);
+
+	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
+
+	if (unmap->iova & mask) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	if (!unmap->size || unmap->size & mask) {
+		ret = -EINVAL;
+		goto unlock;
+	}
 
-	if (unmap->iova & mask)
-		return -EINVAL;
-	if (!unmap->size || unmap->size & mask)
-		return -EINVAL;
 	if (unmap->iova + unmap->size - 1 < unmap->iova ||
-	    unmap->size > SIZE_MAX)
-		return -EINVAL;
+	    unmap->size > SIZE_MAX) {
+		ret = -EINVAL;
+		goto unlock;
+	}
 
 	WARN_ON(mask & PAGE_MASK);
 again:
-	mutex_lock(&iommu->lock);
 
 	/*
 	 * vfio-iommu-type1 (v1) - User mappings were coalesced together to
@@ -930,6 +937,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			blocking_notifier_call_chain(&iommu->notifier,
 						    VFIO_IOMMU_NOTIFY_DMA_UNMAP,
 						    &nb_unmap);
+			mutex_lock(&iommu->lock);
 			goto again;
 		}
 		unmapped += dma->size;
@@ -1045,24 +1053,28 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	if (map->size != size || map->vaddr != vaddr || map->iova != iova)
 		return -EINVAL;
 
-	mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
-
-	WARN_ON(mask & PAGE_MASK);
-
 	/* READ/WRITE from device perspective */
 	if (map->flags & VFIO_DMA_MAP_FLAG_WRITE)
 		prot |= IOMMU_WRITE;
 	if (map->flags & VFIO_DMA_MAP_FLAG_READ)
 		prot |= IOMMU_READ;
 
-	if (!prot || !size || (size | iova | vaddr) & mask)
-		return -EINVAL;
+	mutex_lock(&iommu->lock);
 
-	/* Don't allow IOVA or virtual address wrap */
-	if (iova + size - 1 < iova || vaddr + size - 1 < vaddr)
-		return -EINVAL;
+	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
 
-	mutex_lock(&iommu->lock);
+	WARN_ON(mask & PAGE_MASK);
+
+	if (!prot || !size || (size | iova | vaddr) & mask) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	/* Don't allow IOVA or virtual address wrap */
+	if (iova + size - 1 < iova || vaddr + size - 1 < vaddr) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
 
 	if (vfio_find_dma(iommu, iova, size)) {
 		ret = -EEXIST;
@@ -1668,6 +1680,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			if (!iommu->external_domain) {
 				INIT_LIST_HEAD(&domain->group_list);
 				iommu->external_domain = domain;
+				vfio_pgsize_bitmap(iommu);
 			} else {
 				kfree(domain);
 			}
@@ -1793,6 +1806,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	}
 
 	list_add(&domain->next, &iommu->domain_list);
+	vfio_pgsize_bitmap(iommu);
 done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
@@ -2004,6 +2018,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			list_del(&domain->next);
 			kfree(domain);
 			vfio_iommu_aper_expand(iommu, &iova_copy);
+			vfio_pgsize_bitmap(iommu);
 		}
 		break;
 	}
@@ -2136,8 +2151,6 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 	size_t size;
 	int iovas = 0, i = 0, ret;
 
-	mutex_lock(&iommu->lock);
-
 	list_for_each_entry(iova, &iommu->iova_list, list)
 		iovas++;
 
@@ -2146,17 +2159,14 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 		 * Return 0 as a container with a single mdev device
 		 * will have an empty list
 		 */
-		ret = 0;
-		goto out_unlock;
+		return 0;
 	}
 
 	size = sizeof(*cap_iovas) + (iovas * sizeof(*cap_iovas->iova_ranges));
 
 	cap_iovas = kzalloc(size, GFP_KERNEL);
-	if (!cap_iovas) {
-		ret = -ENOMEM;
-		goto out_unlock;
-	}
+	if (!cap_iovas)
+		return -ENOMEM;
 
 	cap_iovas->nr_iovas = iovas;
 
@@ -2169,8 +2179,6 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 	ret = vfio_iommu_iova_add_cap(caps, cap_iovas, size);
 
 	kfree(cap_iovas);
-out_unlock:
-	mutex_unlock(&iommu->lock);
 	return ret;
 }
 
@@ -2215,11 +2223,13 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 			info.cap_offset = 0; /* output, no-recopy necessary */
 		}
 
+		mutex_lock(&iommu->lock);
 		info.flags = VFIO_IOMMU_INFO_PGSIZES;
 
-		info.iova_pgsizes = vfio_pgsize_bitmap(iommu);
+		info.iova_pgsizes = iommu->pgsize_bitmap;
 
 		ret = vfio_iommu_iova_build_caps(iommu, &caps);
+		mutex_unlock(&iommu->lock);
 		if (ret)
 			return ret;
 
-- 
2.7.0



* [PATCH Kernel v22 4/8] vfio iommu: Add ioctl definition for dirty pages tracking
From: Kirti Wankhede @ 2020-05-18  5:56 UTC
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

The IOMMU container maintains a list of all pages pinned through the
vfio_pin_pages API. All pages pinned by the vendor driver through this API
should be considered as dirty during migration. When the container contains
an IOMMU capable device and all pages are pinned and mapped, then all pages
are marked dirty.
Added support to start/stop dirty pages tracking and to get the bitmap of
all dirtied pages for a requested IO virtual address range (a usage sketch
follows below).
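
For illustration only (not part of this patch), a minimal user space
sketch of a GET_BITMAP request laid out as described above; the container
fd, the IOVA range and the smallest supported IOMMU page size are assumed,
and error handling is trimmed:

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int get_dirty_bitmap(int container_fd, uint64_t iova, uint64_t size,
			    uint64_t pgsize)
{
	struct vfio_iommu_type1_dirty_bitmap *dbitmap;
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	uint64_t npages = size / pgsize;
	uint64_t bitmap_bytes = ((npages + 63) / 64) * 8; /* one bit per page */
	void *bitmap = calloc(1, bitmap_bytes);	/* must be zeroed by the user */
	int ret;

	dbitmap = calloc(1, sizeof(*dbitmap) + sizeof(*range));
	dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
	dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;

	/* vfio_iommu_type1_dirty_bitmap_get lives in the data[] portion */
	range = (struct vfio_iommu_type1_dirty_bitmap_get *)&dbitmap->data;
	range->iova = iova;
	range->size = size;
	range->bitmap.pgsize = pgsize;	/* smallest supported page size */
	range->bitmap.size = bitmap_bytes;
	range->bitmap.data = bitmap;

	ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
	/* on success, bit N set => page at iova + N * pgsize is dirty */

	free(dbitmap);
	free(bitmap);
	return ret;
}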

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 include/uapi/linux/vfio.h | 55 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index ad9bb5af3463..4850c1fef1f8 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1033,6 +1033,12 @@ struct vfio_iommu_type1_dma_map {
 
 #define VFIO_IOMMU_MAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 13)
 
+struct vfio_bitmap {
+	__u64        pgsize;	/* page size for bitmap in bytes */
+	__u64        size;	/* in bytes */
+	__u64 __user *data;	/* one bit per page */
+};
+
 /**
  * VFIO_IOMMU_UNMAP_DMA - _IOWR(VFIO_TYPE, VFIO_BASE + 14,
  *							struct vfio_dma_unmap)
@@ -1059,6 +1065,55 @@ struct vfio_iommu_type1_dma_unmap {
 #define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
 #define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
 
+/**
+ * VFIO_IOMMU_DIRTY_PAGES - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *                                     struct vfio_iommu_type1_dirty_bitmap)
+ * IOCTL is used for dirty pages tracking.
+ * Caller should set flag depending on which operation to perform, details as
+ * below:
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_START flag set, instructs
+ * the IOMMU driver to track pages that are dirtied or potentially dirtied by
+ * device; designed to be used when a migration is in progress. Dirty pages are
+ * tracked until tracking is stopped by user application by calling the IOCTL
+ * with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag.
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag set, instructs
+ * the IOMMU driver to stop tracking dirtied pages.
+ *
+ * Calling the  IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP flag set
+ * returns the dirty pages bitmap for IOMMU container for a given IOVA range.
+ * User must specify the IOVA range and the pgsize through the structure
+ * vfio_iommu_type1_dirty_bitmap_get in the data[] portion. This interface
+ * supports to get bitmap of smallest supported pgsize only and can be modified
+ * in future to get bitmap of specified pgsize. The user must provide a zeroed
+ * memory area for the bitmap memory and specify its size in bitmap.size.
+ * One bit is used to represent one page consecutively starting from iova
+ * offset. The user should provide page size in bitmap.pgsize field. A bit set
+ * in the bitmap indicates that the page at that offset from iova is dirty.
+ * The caller must set argsz including size of structure
+ * vfio_iommu_type1_dirty_bitmap_get.
+ *
+ * Only one of the flags _START, _STOP and _GET may be specified at a time.
+ *
+ */
+struct vfio_iommu_type1_dirty_bitmap {
+	__u32        argsz;
+	__u32        flags;
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_START	(1 << 0)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP	(1 << 1)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP	(1 << 2)
+	__u8         data[];
+};
+
+struct vfio_iommu_type1_dirty_bitmap_get {
+	__u64              iova;	/* IO virtual address */
+	__u64              size;	/* Size of iova range */
+	struct vfio_bitmap bitmap;
+};
+
+#define VFIO_IOMMU_DIRTY_PAGES             _IO(VFIO_TYPE, VFIO_BASE + 17)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
-- 
2.7.0



* [PATCH Kernel v22 5/8] vfio iommu: Implementation of ioctl for dirty pages tracking
From: Kirti Wankhede @ 2020-05-18  5:56 UTC
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

The VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
- Start dirty pages tracking while migration is active.
- Stop dirty pages tracking.
- Get the dirty pages bitmap. It is the user space application's
  responsibility to copy the content of dirty pages from source to
  destination during migration.

To prevent a DoS attack, memory for the bitmap is allocated per vfio_dma
structure. The bitmap size is calculated from the smallest supported page
size (a sizing sketch follows below). Bitmaps are allocated for all
vfio_dmas when dirty logging is enabled.

The bitmap is populated for already pinned pages when it is allocated for
a vfio_dma, using the smallest supported page size. The bitmap is updated
from the pinning functions when tracking is enabled. When the user
application queries the bitmap, check whether the requested page size is
the same as the page size used to populate the bitmap. If it is equal,
copy the bitmap; if not, return an error.
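
As a rough sketch of the per-vfio_dma bitmap sizing described above
(assuming a 4 KB minimum IOMMU page size; values are illustrative only):

#include <stdint.h>

/* One bit per page, rounded up to a multiple of 64 bits, in bytes. */
static inline uint64_t dirty_bitmap_bytes(uint64_t dma_size, uint64_t pgsize)
{
	uint64_t npages = dma_size / pgsize;

	return ((npages + 63) / 64) * 8;
}

/*
 * Example: dirty_bitmap_bytes(1ULL << 30, 4096) == 32768, i.e. a 1 GB
 * vfio_dma needs a 32 KB bitmap.  The DIRTY_BITMAP_PAGES_MAX cap of 2^31
 * pages bounds a single bitmap at 256 MB, which covers 8 TB of 4 KB pages.
 */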

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>

Fixed error reported by build bot by changing pgsize type from uint64_t
to size_t.
Reported-by: kbuild test robot <lkp@intel.com>
---
 drivers/vfio/vfio_iommu_type1.c | 313 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 307 insertions(+), 6 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index de17787ffece..bf740fef196f 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -72,6 +72,7 @@ struct vfio_iommu {
 	uint64_t		pgsize_bitmap;
 	bool			v2;
 	bool			nesting;
+	bool			dirty_page_tracking;
 };
 
 struct vfio_domain {
@@ -92,6 +93,7 @@ struct vfio_dma {
 	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
 	struct task_struct	*task;
 	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
+	unsigned long		*bitmap;
 };
 
 struct vfio_group {
@@ -126,6 +128,19 @@ struct vfio_regions {
 #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
 					(!list_empty(&iommu->domain_list))
 
+#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
+
+/*
+ * Input argument of number of bits to bitmap_set() is unsigned integer, which
+ * further casts to signed integer for unaligned multi-bit operation,
+ * __bitmap_set().
+ * Then maximum bitmap size supported is 2^31 bits divided by 2^3 bits/byte,
+ * that is 2^28 (256 MB) which maps to 2^31 * 2^12 = 2^43 (8TB) on 4K page
+ * system.
+ */
+#define DIRTY_BITMAP_PAGES_MAX	 ((u64)INT_MAX)
+#define DIRTY_BITMAP_SIZE_MAX	 DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX)
+
 static int put_pfn(unsigned long pfn, int prot);
 
 /*
@@ -176,6 +191,74 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
 	rb_erase(&old->node, &iommu->dma_list);
 }
 
+
+static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
+{
+	uint64_t npages = dma->size / pgsize;
+
+	if (npages > DIRTY_BITMAP_PAGES_MAX)
+		return -EINVAL;
+
+	dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
+	if (!dma->bitmap)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void vfio_dma_bitmap_free(struct vfio_dma *dma)
+{
+	kfree(dma->bitmap);
+	dma->bitmap = NULL;
+}
+
+static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
+{
+	struct rb_node *p;
+
+	for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
+		struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn, node);
+
+		bitmap_set(dma->bitmap, (vpfn->iova - dma->iova) / pgsize, 1);
+	}
+}
+
+static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
+{
+	struct rb_node *n = rb_first(&iommu->dma_list);
+
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+		int ret;
+
+		ret = vfio_dma_bitmap_alloc(dma, pgsize);
+		if (ret) {
+			struct rb_node *p = rb_prev(n);
+
+			for (; p; p = rb_prev(p)) {
+				struct vfio_dma *dma = rb_entry(p,
+							struct vfio_dma, node);
+
+				vfio_dma_bitmap_free(dma);
+			}
+			return ret;
+		}
+		vfio_dma_populate_bitmap(dma, pgsize);
+	}
+	return 0;
+}
+
+static void vfio_dma_bitmap_free_all(struct vfio_iommu *iommu)
+{
+	struct rb_node *n = rb_first(&iommu->dma_list);
+
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+
+		vfio_dma_bitmap_free(dma);
+	}
+}
+
 /*
  * Helper Functions for host iova-pfn list
  */
@@ -568,6 +651,17 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 			vfio_unpin_page_external(dma, iova, do_accounting);
 			goto pin_unwind;
 		}
+
+		if (iommu->dirty_page_tracking) {
+			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+
+			/*
+			 * Bitmap populated with the smallest supported page
+			 * size
+			 */
+			bitmap_set(dma->bitmap,
+				   (iova - dma->iova) >> pgshift, 1);
+		}
 	}
 
 	ret = i;
@@ -802,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 	vfio_unmap_unpin(iommu, dma, true);
 	vfio_unlink_dma(iommu, dma);
 	put_task_struct(dma->task);
+	vfio_dma_bitmap_free(dma);
 	kfree(dma);
 	iommu->dma_avail++;
 }
@@ -829,6 +924,99 @@ static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	}
 }
 
+static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
+			      dma_addr_t base_iova, size_t pgsize)
+{
+	unsigned long pgshift = __ffs(pgsize);
+	unsigned long nbits = dma->size >> pgshift;
+	unsigned long bit_offset = (dma->iova - base_iova) >> pgshift;
+	unsigned long copy_offset = bit_offset / BITS_PER_LONG;
+	unsigned long shift = bit_offset % BITS_PER_LONG;
+	unsigned long leftover;
+
+	/* mark all pages dirty if all pages are pinned and mapped. */
+	if (dma->iommu_mapped)
+		bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
+
+	if (shift) {
+		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
+				  nbits + shift);
+
+		if (copy_from_user(&leftover, (u64 *)bitmap + copy_offset,
+				   sizeof(leftover)))
+			return -EFAULT;
+
+		bitmap_or(dma->bitmap, dma->bitmap, &leftover, shift);
+	}
+
+	if (copy_to_user((u64 *)bitmap + copy_offset, dma->bitmap,
+			 DIRTY_BITMAP_BYTES(nbits + shift)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
+				  dma_addr_t iova, size_t size, size_t pgsize)
+{
+	struct vfio_dma *dma;
+	struct rb_node *n;
+	unsigned long pgshift = __ffs(pgsize);
+	int ret;
+
+	/*
+	 * GET_BITMAP request must fully cover vfio_dma mappings.  Multiple
+	 * vfio_dma mappings may be clubbed by specifying large ranges, but
+	 * there must not be any previous mappings bisected by the range.
+	 * An error will be returned if these conditions are not met.
+	 */
+	dma = vfio_find_dma(iommu, iova, 1);
+	if (dma && dma->iova != iova)
+		return -EINVAL;
+
+	dma = vfio_find_dma(iommu, iova + size - 1, 0);
+	if (dma && dma->iova + dma->size != iova + size)
+		return -EINVAL;
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *ldma = rb_entry(n, struct vfio_dma, node);
+
+		if (ldma->iova >= iova)
+			break;
+	}
+
+	dma = n ? rb_entry(n, struct vfio_dma, node) : NULL;
+
+	while (dma && (dma->iova >= iova) &&
+		(dma->iova + dma->size <= iova + size)) {
+
+		ret = update_user_bitmap(bitmap, dma, iova, pgsize);
+		if (ret)
+			return ret;
+
+		/*
+		 * Re-populate bitmap to include all pinned pages which are
+		 * considered as dirty but exclude pages which are unpinned and
+		 * pages which are marked dirty by vfio_dma_rw()
+		 */
+		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
+		vfio_dma_populate_bitmap(dma, pgsize);
+
+		n = rb_next(&dma->node);
+		dma = n ? rb_entry(n, struct vfio_dma, node) : NULL;
+	}
+	return 0;
+}
+
+static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
+{
+	if (!npages || !bitmap_size || (bitmap_size > DIRTY_BITMAP_SIZE_MAX) ||
+	    (bitmap_size < DIRTY_BITMAP_BYTES(npages)))
+		return -EINVAL;
+
+	return 0;
+}
+
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			     struct vfio_iommu_type1_dma_unmap *unmap)
 {
@@ -1046,7 +1234,7 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	unsigned long vaddr = map->vaddr;
 	size_t size = map->size;
 	int ret = 0, prot = 0;
-	uint64_t mask;
+	size_t pgsize;
 	struct vfio_dma *dma;
 
 	/* Verify that none of our __u64 fields overflow */
@@ -1061,11 +1249,11 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 
 	mutex_lock(&iommu->lock);
 
-	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
+	pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
 
-	WARN_ON(mask & PAGE_MASK);
+	WARN_ON((pgsize - 1) & PAGE_MASK);
 
-	if (!prot || !size || (size | iova | vaddr) & mask) {
+	if (!prot || !size || (size | iova | vaddr) & (pgsize - 1)) {
 		ret = -EINVAL;
 		goto out_unlock;
 	}
@@ -1142,6 +1330,12 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	else
 		ret = vfio_pin_map_dma(iommu, dma, size);
 
+	if (!ret && iommu->dirty_page_tracking) {
+		ret = vfio_dma_bitmap_alloc(dma, pgsize);
+		if (ret)
+			vfio_remove_dma(iommu, dma);
+	}
+
 out_unlock:
 	mutex_unlock(&iommu->lock);
 	return ret;
@@ -2288,6 +2482,104 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 		return copy_to_user((void __user *)arg, &unmap, minsz) ?
 			-EFAULT : 0;
+	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
+		struct vfio_iommu_type1_dirty_bitmap dirty;
+		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
+		int ret = 0;
+
+		if (!iommu->v2)
+			return -EACCES;
+
+		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
+				    flags);
+
+		if (copy_from_user(&dirty, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (dirty.argsz < minsz || dirty.flags & ~mask)
+			return -EINVAL;
+
+		/* only one flag should be set at a time */
+		if (__ffs(dirty.flags) != __fls(dirty.flags))
+			return -EINVAL;
+
+		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
+			size_t pgsize;
+
+			mutex_lock(&iommu->lock);
+			pgsize = 1 << __ffs(iommu->pgsize_bitmap);
+			if (!iommu->dirty_page_tracking) {
+				ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
+				if (!ret)
+					iommu->dirty_page_tracking = true;
+			}
+			mutex_unlock(&iommu->lock);
+			return ret;
+		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
+			mutex_lock(&iommu->lock);
+			if (iommu->dirty_page_tracking) {
+				iommu->dirty_page_tracking = false;
+				vfio_dma_bitmap_free_all(iommu);
+			}
+			mutex_unlock(&iommu->lock);
+			return 0;
+		} else if (dirty.flags &
+				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
+			struct vfio_iommu_type1_dirty_bitmap_get range;
+			unsigned long pgshift;
+			size_t data_size = dirty.argsz - minsz;
+			size_t iommu_pgsize;
+
+			if (!data_size || data_size < sizeof(range))
+				return -EINVAL;
+
+			if (copy_from_user(&range, (void __user *)(arg + minsz),
+					   sizeof(range)))
+				return -EFAULT;
+
+			if (range.iova + range.size < range.iova)
+				return -EINVAL;
+			if (!access_ok((void __user *)range.bitmap.data,
+				       range.bitmap.size))
+				return -EINVAL;
+
+			pgshift = __ffs(range.bitmap.pgsize);
+			ret = verify_bitmap_size(range.size >> pgshift,
+						 range.bitmap.size);
+			if (ret)
+				return ret;
+
+			mutex_lock(&iommu->lock);
+
+			iommu_pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
+
+			/* allow only smallest supported pgsize */
+			if (range.bitmap.pgsize != iommu_pgsize) {
+				ret = -EINVAL;
+				goto out_unlock;
+			}
+			if (range.iova & (iommu_pgsize - 1)) {
+				ret = -EINVAL;
+				goto out_unlock;
+			}
+			if (!range.size || range.size & (iommu_pgsize - 1)) {
+				ret = -EINVAL;
+				goto out_unlock;
+			}
+
+			if (iommu->dirty_page_tracking)
+				ret = vfio_iova_dirty_bitmap(range.bitmap.data,
+						iommu, range.iova, range.size,
+						range.bitmap.pgsize);
+			else
+				ret = -EINVAL;
+out_unlock:
+			mutex_unlock(&iommu->lock);
+
+			return ret;
+		}
 	}
 
 	return -ENOTTY;
@@ -2355,10 +2647,19 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
 
 	vaddr = dma->vaddr + offset;
 
-	if (write)
+	if (write) {
 		*copied = copy_to_user((void __user *)vaddr, data,
 					 count) ? 0 : count;
-	else
+		if (*copied && iommu->dirty_page_tracking) {
+			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+			/*
+			 * Bitmap populated with the smallest supported page
+			 * size
+			 */
+			bitmap_set(dma->bitmap, offset >> pgshift,
+				   *copied >> pgshift);
+		}
+	} else
 		*copied = copy_from_user(data, (void __user *)vaddr,
 					   count) ? 0 : count;
 	if (kthread)
-- 
2.7.0



* [PATCH Kernel v22 6/8] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
From: Kirti Wankhede @ 2020-05-18  5:56 UTC
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

DMA mapped pages, including those pinned by mdev vendor drivers, might
get unpinned and unmapped while migration is active and the device is
still running. For example, in the pre-copy phase, while the guest driver
can still access those pages, the host device or vendor driver can dirty
these mapped pages. Such pages should be marked dirty so as to maintain
memory consistency for a user making use of dirty page tracking.

To get the bitmap during unmap, the user should allocate memory for the
bitmap, zero it, set the size of the allocated memory, set the page size
to be considered for the bitmap, and set the flag
VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (a usage sketch follows below).
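
For illustration only (not part of this patch), a minimal user space
sketch of an unmap request that also collects the dirty bitmap; the
container fd, the mapping's iova/size and the smallest supported IOMMU
page size are assumed, and error handling is trimmed:

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int unmap_and_get_dirty(int container_fd, uint64_t iova, uint64_t size,
			       uint64_t pgsize)
{
	struct vfio_iommu_type1_dma_unmap *unmap;
	struct vfio_bitmap *bitmap;
	uint64_t bitmap_bytes = (((size / pgsize) + 63) / 64) * 8;
	void *data = calloc(1, bitmap_bytes);	/* user must zero the bitmap */
	int ret;

	unmap = calloc(1, sizeof(*unmap) + sizeof(*bitmap));
	unmap->argsz = sizeof(*unmap) + sizeof(*bitmap);
	unmap->flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
	unmap->iova = iova;
	unmap->size = size;

	/* struct vfio_bitmap is passed in the data[] portion */
	bitmap = (struct vfio_bitmap *)&unmap->data;
	bitmap->pgsize = pgsize;	/* must be the smallest supported pgsize */
	bitmap->size = bitmap_bytes;
	bitmap->data = data;

	ret = ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, unmap);
	/* on success, bit N set => page at iova + N * pgsize was dirty when
	 * it was unmapped */

	free(unmap);
	free(data);
	return ret;
}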

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 68 +++++++++++++++++++++++++++++++++--------
 include/uapi/linux/vfio.h       | 10 ++++++
 2 files changed, 66 insertions(+), 12 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index bf740fef196f..b9ee78a615a4 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -195,11 +195,15 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
 static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
 {
 	uint64_t npages = dma->size / pgsize;
+	size_t bitmap_size;
 
 	if (npages > DIRTY_BITMAP_PAGES_MAX)
 		return -EINVAL;
 
-	dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
+	/* Allocate extra 64 bits which are used for bitmap manipulation */
+	bitmap_size = DIRTY_BITMAP_BYTES(npages) + sizeof(u64);
+
+	dma->bitmap = kvzalloc(bitmap_size, GFP_KERNEL);
 	if (!dma->bitmap)
 		return -ENOMEM;
 
@@ -1018,23 +1022,25 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
-			     struct vfio_iommu_type1_dma_unmap *unmap)
+			     struct vfio_iommu_type1_dma_unmap *unmap,
+			     struct vfio_bitmap *bitmap)
 {
-	uint64_t mask;
 	struct vfio_dma *dma, *dma_last = NULL;
-	size_t unmapped = 0;
+	size_t unmapped = 0, pgsize;
 	int ret = 0, retries = 0;
+	unsigned long pgshift;
 
 	mutex_lock(&iommu->lock);
 
-	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
+	pgshift = __ffs(iommu->pgsize_bitmap);
+	pgsize = (size_t)1 << pgshift;
 
-	if (unmap->iova & mask) {
+	if (unmap->iova & (pgsize - 1)) {
 		ret = -EINVAL;
 		goto unlock;
 	}
 
-	if (!unmap->size || unmap->size & mask) {
+	if (!unmap->size || unmap->size & (pgsize - 1)) {
 		ret = -EINVAL;
 		goto unlock;
 	}
@@ -1045,9 +1051,15 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		goto unlock;
 	}
 
-	WARN_ON(mask & PAGE_MASK);
-again:
+	/* When dirty tracking is enabled, allow only min supported pgsize */
+	if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
+	    (!iommu->dirty_page_tracking || (bitmap->pgsize != pgsize))) {
+		ret = -EINVAL;
+		goto unlock;
+	}
 
+	WARN_ON((pgsize - 1) & PAGE_MASK);
+again:
 	/*
 	 * vfio-iommu-type1 (v1) - User mappings were coalesced together to
 	 * avoid tracking individual mappings.  This means that the granularity
@@ -1085,6 +1097,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			ret = -EINVAL;
 			goto unlock;
 		}
+
 		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
 		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
 			ret = -EINVAL;
@@ -1128,6 +1141,14 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			mutex_lock(&iommu->lock);
 			goto again;
 		}
+
+		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
+			ret = update_user_bitmap(bitmap->data, dma,
+						 unmap->iova, pgsize);
+			if (ret)
+				break;
+		}
+
 		unmapped += dma->size;
 		vfio_remove_dma(iommu, dma);
 	}
@@ -2466,17 +2487,40 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
 		struct vfio_iommu_type1_dma_unmap unmap;
-		long ret;
+		struct vfio_bitmap bitmap = { 0 };
+		int ret;
 
 		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
 
 		if (copy_from_user(&unmap, (void __user *)arg, minsz))
 			return -EFAULT;
 
-		if (unmap.argsz < minsz || unmap.flags)
+		if (unmap.argsz < minsz ||
+		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
 			return -EINVAL;
 
-		ret = vfio_dma_do_unmap(iommu, &unmap);
+		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
+			unsigned long pgshift;
+
+			if (unmap.argsz < (minsz + sizeof(bitmap)))
+				return -EINVAL;
+
+			if (copy_from_user(&bitmap,
+					   (void __user *)(arg + minsz),
+					   sizeof(bitmap)))
+				return -EFAULT;
+
+			if (!access_ok((void __user *)bitmap.data, bitmap.size))
+				return -EINVAL;
+
+			pgshift = __ffs(bitmap.pgsize);
+			ret = verify_bitmap_size(unmap.size >> pgshift,
+						 bitmap.size);
+			if (ret)
+				return ret;
+		}
+
+		ret = vfio_dma_do_unmap(iommu, &unmap, &bitmap);
 		if (ret)
 			return ret;
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 4850c1fef1f8..a1dd2150971e 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1048,12 +1048,22 @@ struct vfio_bitmap {
  * field.  No guarantee is made to the user that arbitrary unmaps of iova
  * or size different from those used in the original mapping call will
  * succeed.
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get the dirty bitmap
+ * before unmapping IO virtual addresses. When this flag is set, the user must
+ * provide data[] as a struct vfio_bitmap. The user must allocate memory for
+ * the bitmap, zero that memory, and set the size of the allocated memory in
+ * the vfio_bitmap.size field. A bit in the bitmap represents one page of the
+ * user-provided page size in 'pgsize', consecutively starting from the iova
+ * offset. A set bit indicates that the page at that offset from iova is
+ * dirty. The bitmap for the unmapped range is returned in vfio_bitmap.data.
  */
 struct vfio_iommu_type1_dma_unmap {
 	__u32	argsz;
 	__u32	flags;
+#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
 	__u64	iova;				/* IO virtual address */
 	__u64	size;				/* Size of mapping (bytes) */
+	__u8    data[];
 };
 
 #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
-- 
2.7.0



* [PATCH Kernel v22 7/8] vfio iommu: Add migration capability to report supported features
  2020-05-18  5:56 [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices Kirti Wankhede
                   ` (5 preceding siblings ...)
  2020-05-18  5:56 ` [PATCH Kernel v22 6/8] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
@ 2020-05-18  5:56 ` Kirti Wankhede
  2020-05-20 10:42   ` Cornelia Huck
  2020-05-18  5:56 ` [PATCH Kernel v22 8/8] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
  2020-05-19 16:58 ` [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices Alex Williamson
  8 siblings, 1 reply; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-18  5:56 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Added a migration capability to the IOMMU info chain.
A user application should check the IOMMU info chain for the migration
capability before using the dirty page tracking feature provided by the
kernel module. The application must honor the supported page sizes and
the maximum dirty bitmap size returned in this capability structure when
issuing the ioctls that retrieve the dirty bitmap.
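
For illustration only, a minimal userspace sketch of locating this
capability, assuming an open VFIO container fd and the usual
VFIO_IOMMU_GET_INFO capability chain convention (VFIO_IOMMU_INFO_CAPS is
set and hdr->next is an offset from the start of the info buffer), with
error handling omitted:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Returns the migration capability inside *infop, or NULL if absent. */
static struct vfio_iommu_type1_info_cap_migration *
example_get_migration_cap(int container_fd,
			  struct vfio_iommu_type1_info **infop)
{
	struct vfio_iommu_type1_info *info = calloc(1, sizeof(*info));
	struct vfio_info_cap_header *hdr;

	info->argsz = sizeof(*info);
	ioctl(container_fd, VFIO_IOMMU_GET_INFO, info);	/* learn argsz */

	info = realloc(info, info->argsz);
	ioctl(container_fd, VFIO_IOMMU_GET_INFO, info);	/* fetch caps */
	*infop = info;

	if (!(info->flags & VFIO_IOMMU_INFO_CAPS) || !info->cap_offset)
		return NULL;

	/* hdr->next is an offset from the start of the info buffer */
	for (hdr = (void *)((char *)info + info->cap_offset); ;
	     hdr = (void *)((char *)info + hdr->next)) {
		if (hdr->id == VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION)
			return (struct vfio_iommu_type1_info_cap_migration *)hdr;
		if (!hdr->next)
			return NULL;
	}
}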

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 23 ++++++++++++++++++++++-
 include/uapi/linux/vfio.h       | 22 ++++++++++++++++++++++
 2 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index b9ee78a615a4..5c3dc5863893 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2397,6 +2397,22 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
 	return ret;
 }
 
+static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu,
+					   struct vfio_info_cap *caps)
+{
+	struct vfio_iommu_type1_info_cap_migration cap_mig;
+
+	cap_mig.header.id = VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION;
+	cap_mig.header.version = 1;
+
+	cap_mig.flags = 0;
+	/* support minimum pgsize */
+	cap_mig.pgsize_bitmap = (size_t)1 << __ffs(iommu->pgsize_bitmap);
+	cap_mig.max_dirty_bitmap_size = DIRTY_BITMAP_SIZE_MAX;
+
+	return vfio_info_add_capability(caps, &cap_mig.header, sizeof(cap_mig));
+}
+
 static long vfio_iommu_type1_ioctl(void *iommu_data,
 				   unsigned int cmd, unsigned long arg)
 {
@@ -2443,8 +2459,13 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 		info.iova_pgsizes = iommu->pgsize_bitmap;
 
-		ret = vfio_iommu_iova_build_caps(iommu, &caps);
+		ret = vfio_iommu_migration_build_caps(iommu, &caps);
+
+		if (!ret)
+			ret = vfio_iommu_iova_build_caps(iommu, &caps);
+
 		mutex_unlock(&iommu->lock);
+
 		if (ret)
 			return ret;
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index a1dd2150971e..aa8aa9dcf02a 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1013,6 +1013,28 @@ struct vfio_iommu_type1_info_cap_iova_range {
 	struct	vfio_iova_range iova_ranges[];
 };
 
+/*
+ * The migration capability allows the kernel to report the features it
+ * supports for migration.
+ *
+ * The structures below define version 1 of this capability.
+ *
+ * The existence of this capability indicates that the IOMMU kernel driver
+ * supports dirty page tracking.
+ *
+ * pgsize_bitmap: Bitmap of the page sizes supported by the kernel driver
+ * for dirty page tracking.
+ * max_dirty_bitmap_size: Maximum dirty bitmap size in bytes supported by
+ * the kernel driver, to be used by the user application for the ioctls
+ * that retrieve the dirty bitmap.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION  1
+
+struct vfio_iommu_type1_info_cap_migration {
+	struct	vfio_info_cap_header header;
+	__u32	flags;
+	__u64	pgsize_bitmap;
+	__u64	max_dirty_bitmap_size;		/* in bytes */
+};
+
 #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
 
 /**
-- 
2.7.0



* [PATCH Kernel v22 8/8] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-05-18  5:56 [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices Kirti Wankhede
                   ` (6 preceding siblings ...)
  2020-05-18  5:56 ` [PATCH Kernel v22 7/8] vfio iommu: Add migration capability to report supported features Kirti Wankhede
@ 2020-05-18  5:56 ` Kirti Wankhede
  2020-05-19  6:54   ` Kirti Wankhede
  2020-05-19 16:58 ` [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices Alex Williamson
  8 siblings, 1 reply; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-18  5:56 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Added a check so that only singleton IOMMU groups can pin pages.
From the point when the vendor driver pins any pages, the IOMMU group's
dirty page scope is considered limited to pinned pages.

To avoid walking the group lists frequently, added a flag,
pinned_page_dirty_scope, to indicate whether every vfio_group of every
vfio_domain in the domain_list has its dirty page scope limited to
pinned pages. This flag is updated on the first pin-pages request for an
IOMMU group and on attaching/detaching a group.
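
As an illustration of the path that limits the scope, a hypothetical mdev
vendor driver fragment that pins a single guest pfn might look like the
sketch below; the function name, the gfn and the prot flags are
illustrative assumptions, and error handling is trimmed. The first
successful pin marks the backing vfio_group as having pinned-page dirty
scope:

#include <linux/errno.h>
#include <linux/iommu.h>
#include <linux/mdev.h>
#include <linux/vfio.h>

/*
 * Sketch: pin one guest page so the device can DMA to it.  Once this
 * succeeds, dirty page reporting for this group is limited to pages
 * pinned through this interface.
 */
static int example_pin_one_page(struct mdev_device *mdev,
				unsigned long gfn, unsigned long *hpfn)
{
	unsigned long user_pfn = gfn;
	int ret;

	ret = vfio_pin_pages(mdev_dev(mdev), &user_pfn, 1,
			     IOMMU_READ | IOMMU_WRITE, hpfn);
	if (ret < 0)
		return ret;

	return ret == 1 ? 0 : -EFAULT;
}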

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio.c             |  13 +++--
 drivers/vfio/vfio_iommu_type1.c | 103 +++++++++++++++++++++++++++++++++++++---
 include/linux/vfio.h            |   4 +-
 3 files changed, 109 insertions(+), 11 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 765e0e5d83ed..580099afeaff 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -85,6 +85,7 @@ struct vfio_group {
 	atomic_t			opened;
 	wait_queue_head_t		container_q;
 	bool				noiommu;
+	unsigned int			dev_counter;
 	struct kvm			*kvm;
 	struct blocking_notifier_head	notifier;
 };
@@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
 
 	mutex_lock(&group->device_lock);
 	list_add(&device->group_next, &group->device_list);
+	group->dev_counter++;
 	mutex_unlock(&group->device_lock);
 
 	return device;
@@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
 	struct vfio_group *group = device->group;
 
 	list_del(&device->group_next);
+	group->dev_counter--;
 	mutex_unlock(&group->device_lock);
 
 	dev_set_drvdata(device->dev, NULL);
@@ -1945,6 +1948,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	if (!group)
 		return -ENODEV;
 
+	if (group->dev_counter > 1)
+		return -EINVAL;
+
 	ret = vfio_group_add_container_user(group);
 	if (ret)
 		goto err_pin_pages;
@@ -1952,7 +1958,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
-		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
+		ret = driver->ops->pin_pages(container->iommu_data,
+					     group->iommu_group, user_pfn,
 					     npage, prot, phys_pfn);
 	else
 		ret = -ENOTTY;
@@ -2050,8 +2057,8 @@ int vfio_group_pin_pages(struct vfio_group *group,
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
 		ret = driver->ops->pin_pages(container->iommu_data,
-					     user_iova_pfn, npage,
-					     prot, phys_pfn);
+					     group->iommu_group, user_iova_pfn,
+					     npage, prot, phys_pfn);
 	else
 		ret = -ENOTTY;
 
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 5c3dc5863893..a6b9e13ef57c 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -73,6 +73,7 @@ struct vfio_iommu {
 	bool			v2;
 	bool			nesting;
 	bool			dirty_page_tracking;
+	bool			pinned_page_dirty_scope;
 };
 
 struct vfio_domain {
@@ -100,6 +101,7 @@ struct vfio_group {
 	struct iommu_group	*iommu_group;
 	struct list_head	next;
 	bool			mdev_group;	/* An mdev group */
+	bool			pinned_page_dirty_scope;
 };
 
 struct vfio_iova {
@@ -143,6 +145,10 @@ struct vfio_regions {
 
 static int put_pfn(unsigned long pfn, int prot);
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
+					       struct iommu_group *iommu_group);
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -590,11 +596,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
 }
 
 static int vfio_iommu_type1_pin_pages(void *iommu_data,
+				      struct iommu_group *iommu_group,
 				      unsigned long *user_pfn,
 				      int npage, int prot,
 				      unsigned long *phys_pfn)
 {
 	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *group;
 	int i, j, ret;
 	unsigned long remote_vaddr;
 	struct vfio_dma *dma;
@@ -667,8 +675,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 				   (iova - dma->iova) >> pgshift, 1);
 		}
 	}
-
 	ret = i;
+
+	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
+	if (!group->pinned_page_dirty_scope) {
+		group->pinned_page_dirty_scope = true;
+		update_pinned_page_dirty_scope(iommu);
+	}
+
 	goto pin_done;
 
 pin_unwind:
@@ -928,8 +942,9 @@ static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	}
 }
 
-static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
-			      dma_addr_t base_iova, size_t pgsize)
+static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
+			      struct vfio_dma *dma, dma_addr_t base_iova,
+			      size_t pgsize)
 {
 	unsigned long pgshift = __ffs(pgsize);
 	unsigned long nbits = dma->size >> pgshift;
@@ -938,8 +953,11 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
 	unsigned long shift = bit_offset % BITS_PER_LONG;
 	unsigned long leftover;
 
-	/* mark all pages dirty if all pages are pinned and mapped. */
-	if (dma->iommu_mapped)
+	/*
+	 * mark all pages dirty if any IOMMU capable device is not able
+	 * to report dirty pages and all pages are pinned and mapped.
+	 */
+	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
 		bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
 
 	if (shift) {
@@ -994,7 +1012,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 	while (dma && (dma->iova >= iova) &&
 		(dma->iova + dma->size <= iova + size)) {
 
-		ret = update_user_bitmap(bitmap, dma, iova, pgsize);
+		ret = update_user_bitmap(bitmap, iommu, dma, iova, pgsize);
 		if (ret)
 			return ret;
 
@@ -1143,7 +1161,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		}
 
 		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
-			ret = update_user_bitmap(bitmap->data, dma,
+			ret = update_user_bitmap(bitmap->data, iommu, dma,
 						 unmap->iova, pgsize);
 			if (ret)
 				break;
@@ -1495,6 +1513,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
 	return NULL;
 }
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
+					       struct iommu_group *iommu_group)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group = NULL;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		group = find_iommu_group(domain, iommu_group);
+		if (group)
+			return group;
+	}
+
+	if (iommu->external_domain)
+		group = find_iommu_group(iommu->external_domain, iommu_group);
+
+	return group;
+}
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->pinned_page_dirty_scope) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	if (iommu->external_domain) {
+		domain = iommu->external_domain;
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->pinned_page_dirty_scope) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	iommu->pinned_page_dirty_scope = true;
+}
+
 static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
 				  phys_addr_t *base)
 {
@@ -1902,6 +1965,16 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 
 			list_add(&group->next,
 				 &iommu->external_domain->group_list);
+			/*
+			 * Non-iommu backed group cannot dirty memory directly,
+			 * it can only use interfaces that provide dirty
+			 * tracking.
+			 * The iommu scope can only be promoted with the
+			 * addition of a dirty tracking group.
+			 */
+			group->pinned_page_dirty_scope = true;
+			if (!iommu->pinned_page_dirty_scope)
+				update_pinned_page_dirty_scope(iommu);
 			mutex_unlock(&iommu->lock);
 
 			return 0;
@@ -2025,6 +2098,13 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
+
+	/*
+	 * An iommu backed group can dirty memory directly and therefore
+	 * demotes the iommu scope until it declares itself dirty tracking
+	 * capable via the page pinning interface.
+	 */
+	iommu->pinned_page_dirty_scope = false;
 	mutex_unlock(&iommu->lock);
 	vfio_iommu_resv_free(&group_resv_regions);
 
@@ -2177,6 +2257,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
+	bool update_dirty_scope = false;
 	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
@@ -2184,6 +2265,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	if (iommu->external_domain) {
 		group = find_iommu_group(iommu->external_domain, iommu_group);
 		if (group) {
+			update_dirty_scope = !group->pinned_page_dirty_scope;
 			list_del(&group->next);
 			kfree(group);
 
@@ -2213,6 +2295,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			continue;
 
 		vfio_iommu_detach_group(domain, group);
+		update_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
 		kfree(group);
 		/*
@@ -2244,6 +2327,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		vfio_iommu_iova_free(&iova_copy);
 
 detach_group_done:
+	/*
+	 * Removal of a group without dirty tracking may allow the iommu scope
+	 * to be promoted.
+	 */
+	if (update_dirty_scope)
+		update_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 }
 
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 5d92ee15d098..38d3c6a8dc7e 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -76,7 +76,9 @@ struct vfio_iommu_driver_ops {
 					struct iommu_group *group);
 	void		(*detach_group)(void *iommu_data,
 					struct iommu_group *group);
-	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
+	int		(*pin_pages)(void *iommu_data,
+				     struct iommu_group *group,
+				     unsigned long *user_pfn,
 				     int npage, int prot,
 				     unsigned long *phys_pfn);
 	int		(*unpin_pages)(void *iommu_data,
-- 
2.7.0



* Re: [PATCH Kernel v22 5/8] vfio iommu: Implementation of ioctl for dirty pages tracking
  2020-05-18  5:56 ` [PATCH Kernel v22 5/8] vfio iommu: Implementation of ioctl " Kirti Wankhede
@ 2020-05-18 21:53   ` Alex Williamson
  2020-05-19  7:11     ` Kirti Wankhede
  0 siblings, 1 reply; 40+ messages in thread
From: Alex Williamson @ 2020-05-18 21:53 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Mon, 18 May 2020 11:26:34 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> The VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> - Start dirty pages tracking while migration is active.
> - Stop dirty pages tracking.
> - Get the dirty pages bitmap. It is the user space application's
>   responsibility to copy the content of dirty pages from source to
>   destination during migration.
> 
> To prevent a DoS attack, memory for the bitmap is allocated per vfio_dma
> structure. The bitmap size is calculated considering the smallest
> supported page size. Bitmaps are allocated for all vfio_dmas when dirty
> logging is enabled.
> 
> The bitmap is populated for already pinned pages when it is allocated
> for a vfio_dma, using the smallest supported page size. The bitmap is
> updated from the pinning functions while tracking is enabled. When the
> user application queries the bitmap, check whether the requested page
> size is the same as the page size used to populate the bitmap: if equal,
> copy the bitmap; if not, return an error.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> 
> Fixed error reported by build bot by changing pgsize type from uint64_t
> to size_t.
> Reported-by: kbuild test robot <lkp@intel.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 313 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 307 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index de17787ffece..bf740fef196f 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -72,6 +72,7 @@ struct vfio_iommu {
>  	uint64_t		pgsize_bitmap;
>  	bool			v2;
>  	bool			nesting;
> +	bool			dirty_page_tracking;
>  };
>  
>  struct vfio_domain {
> @@ -92,6 +93,7 @@ struct vfio_dma {
>  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>  	struct task_struct	*task;
>  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> +	unsigned long		*bitmap;
>  };
>  
>  struct vfio_group {
> @@ -126,6 +128,19 @@ struct vfio_regions {
>  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
>  					(!list_empty(&iommu->domain_list))
>  
> +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> +
> +/*
> + * Input argument of number of bits to bitmap_set() is unsigned integer, which
> + * further casts to signed integer for unaligned multi-bit operation,
> + * __bitmap_set().
> + * Then maximum bitmap size supported is 2^31 bits divided by 2^3 bits/byte,
> + * that is 2^28 (256 MB) which maps to 2^31 * 2^12 = 2^43 (8TB) on 4K page
> + * system.
> + */
> +#define DIRTY_BITMAP_PAGES_MAX	 ((u64)INT_MAX)
> +#define DIRTY_BITMAP_SIZE_MAX	 DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX)
> +
>  static int put_pfn(unsigned long pfn, int prot);
>  
>  /*
> @@ -176,6 +191,74 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>  	rb_erase(&old->node, &iommu->dma_list);
>  }
>  
> +
> +static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
> +{
> +	uint64_t npages = dma->size / pgsize;
> +
> +	if (npages > DIRTY_BITMAP_PAGES_MAX)
> +		return -EINVAL;
> +
> +	dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);

Curious that the extra 8-bytes are added in the next patch, but they're
just as necessary here.

We also have the explanation above about why we have the signed int
size limitation, but we sort of ignore that when adding the bytes here.
That limitation is derived from __bitmap_set(), whereas we only need
these extra bits for bitmap_shift_left(), where I can't spot a signed
int limitation.  Do you come to the same conclusion?  Maybe worth a
comment why we think we can exceed DIRTY_BITMAP_PAGES_MAX for that
extra padding.
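
One way to read the padding requirement off update_user_bitmap() further
below, assuming BITS_PER_LONG == 64 (sketch of the arithmetic only):

	shift      = bit_offset % BITS_PER_LONG        <= 63
	bits used  = nbits + shift                     <= nbits + 63
	bytes used = DIRTY_BITMAP_BYTES(nbits + 63)
	            <= DIRTY_BITMAP_BYTES(nbits) + sizeof(u64)

so a single extra u64 covers the worst-case shift.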

> +	if (!dma->bitmap)
> +		return -ENOMEM;
> +
> +	return 0;
> +}
> +
> +static void vfio_dma_bitmap_free(struct vfio_dma *dma)
> +{
> +	kfree(dma->bitmap);
> +	dma->bitmap = NULL;
> +}
> +
> +static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> +{
> +	struct rb_node *p;
> +
> +	for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> +		struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn, node);
> +
> +		bitmap_set(dma->bitmap, (vpfn->iova - dma->iova) / pgsize, 1);
> +	}
> +}
> +
> +static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +
> +	for (; n; n = rb_next(n)) {

Nit, the previous function above sets the initial value in the for()
statement, it looks like it would fit in 80 columns here too.  We have
examples either way in the code, so not a must fix.

> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +		int ret;
> +
> +		ret = vfio_dma_bitmap_alloc(dma, pgsize);
> +		if (ret) {
> +			struct rb_node *p = rb_prev(n);
> +
> +			for (; p; p = rb_prev(p)) {

Same.

> +				struct vfio_dma *dma = rb_entry(n,
> +							struct vfio_dma, node);
> +
> +				vfio_dma_bitmap_free(dma);
> +			}
> +			return ret;
> +		}
> +		vfio_dma_populate_bitmap(dma, pgsize);
> +	}
> +	return 0;
> +}
> +
> +static void vfio_dma_bitmap_free_all(struct vfio_iommu *iommu)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +
> +	for (; n; n = rb_next(n)) {

And another.

> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> +		vfio_dma_bitmap_free(dma);
> +	}
> +}
> +
>  /*
>   * Helper Functions for host iova-pfn list
>   */
> @@ -568,6 +651,17 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  			vfio_unpin_page_external(dma, iova, do_accounting);
>  			goto pin_unwind;
>  		}
> +
> +		if (iommu->dirty_page_tracking) {
> +			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> +
> +			/*
> +			 * Bitmap populated with the smallest supported page
> +			 * size
> +			 */
> +			bitmap_set(dma->bitmap,
> +				   (iova - dma->iova) >> pgshift, 1);
> +		}
>  	}
>  
>  	ret = i;
> @@ -802,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>  	vfio_unmap_unpin(iommu, dma, true);
>  	vfio_unlink_dma(iommu, dma);
>  	put_task_struct(dma->task);
> +	vfio_dma_bitmap_free(dma);
>  	kfree(dma);
>  	iommu->dma_avail++;
>  }
> @@ -829,6 +924,99 @@ static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>  	}
>  }
>  
> +static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
> +			      dma_addr_t base_iova, size_t pgsize)
> +{
> +	unsigned long pgshift = __ffs(pgsize);
> +	unsigned long nbits = dma->size >> pgshift;
> +	unsigned long bit_offset = (dma->iova - base_iova) >> pgshift;
> +	unsigned long copy_offset = bit_offset / BITS_PER_LONG;
> +	unsigned long shift = bit_offset % BITS_PER_LONG;
> +	unsigned long leftover;
> +
> +	/* mark all pages dirty if all pages are pinned and mapped. */
> +	if (dma->iommu_mapped)
> +		bitmap_set(dma->bitmap, 0, dma->size >> pgshift);

We already calculated 'dma->size >> pgshift' as nbits above, we should
use nbits here.  I imagine the compiler will optimize this, so take it
as a nit.

> +
> +	if (shift) {
> +		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
> +				  nbits + shift);
> +
> +		if (copy_from_user(&leftover, (u64 *)bitmap + copy_offset,
> +				   sizeof(leftover)))
> +			return -EFAULT;
> +
> +		bitmap_or(dma->bitmap, dma->bitmap, &leftover, shift);
> +	}
> +
> +	if (copy_to_user((u64 *)bitmap + copy_offset, dma->bitmap,
> +			 DIRTY_BITMAP_BYTES(nbits + shift)))
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +
> +static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> +				  dma_addr_t iova, size_t size, size_t pgsize)
> +{
> +	struct vfio_dma *dma;
> +	struct rb_node *n;
> +	unsigned long pgshift = __ffs(pgsize);
> +	int ret;
> +
> +	/*
> +	 * GET_BITMAP request must fully cover vfio_dma mappings.  Multiple
> +	 * vfio_dma mappings may be clubbed by specifying large ranges, but
> +	 * there must not be any previous mappings bisected by the range.
> +	 * An error will be returned if these conditions are not met.
> +	 */
> +	dma = vfio_find_dma(iommu, iova, 1);
> +	if (dma && dma->iova != iova)
> +		return -EINVAL;
> +
> +	dma = vfio_find_dma(iommu, iova + size - 1, 0);
> +	if (dma && dma->iova + dma->size != iova + size)
> +		return -EINVAL;
> +
> +	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> +		struct vfio_dma *ldma = rb_entry(n, struct vfio_dma, node);
> +
> +		if (ldma->iova >= iova)
> +			break;
> +	}
> +
> +	dma = n ? rb_entry(n, struct vfio_dma, node) : NULL;
> +
> +	while (dma && (dma->iova >= iova) &&

'dma->iova >= iova' is necessarily true per the above loop, right?
We'd have NULL if we never reach an iova within range.

> +		(dma->iova + dma->size <= iova + size)) {

I think 'dma->iova < iova + size' is sufficient here, we've already
tested that there are no dmas overlapping the ends, they're all either
fully contained or fully outside.

> +

The double loop here is a little unnecessary, we could combine them
into:

for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
	struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);

	if (dma->iova < iova)
		continue;

	if (dma->iova > iova + size)
		break;

	ret = update_user_bitmap(bitmap, dma, iova, pgsize);
	if (ret)
		return ret;

	/*
	 * Re-populate bitmap to include all pinned pages which are
	 * considered as dirty but exclude pages which are unpinned and
	 * pages which are marked dirty by vfio_dma_rw()
	 */
	bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
	vfio_dma_populate_bitmap(dma, pgsize);
}

I think what you have works, but it's a little more complicated than it
needs to be.  Thanks,

Alex

> +		ret = update_user_bitmap(bitmap, dma, iova, pgsize);
> +		if (ret)
> +			return ret;
> +
> +		/*
> +		 * Re-populate bitmap to include all pinned pages which are
> +		 * considered as dirty but exclude pages which are unpinned and
> +		 * pages which are marked dirty by vfio_dma_rw()
> +		 */
> +		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
> +		vfio_dma_populate_bitmap(dma, pgsize);
> +
> +		n = rb_next(&dma->node);
> +		dma = n ? rb_entry(n, struct vfio_dma, node) : NULL;
> +	}
> +	return 0;
> +}
> +
> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> +{
> +	if (!npages || !bitmap_size || (bitmap_size > DIRTY_BITMAP_SIZE_MAX) ||
> +	    (bitmap_size < DIRTY_BITMAP_BYTES(npages)))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
>  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  			     struct vfio_iommu_type1_dma_unmap *unmap)
>  {
> @@ -1046,7 +1234,7 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
>  	unsigned long vaddr = map->vaddr;
>  	size_t size = map->size;
>  	int ret = 0, prot = 0;
> -	uint64_t mask;
> +	size_t pgsize;
>  	struct vfio_dma *dma;
>  
>  	/* Verify that none of our __u64 fields overflow */
> @@ -1061,11 +1249,11 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
>  
>  	mutex_lock(&iommu->lock);
>  
> -	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
> +	pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
>  
> -	WARN_ON(mask & PAGE_MASK);
> +	WARN_ON((pgsize - 1) & PAGE_MASK);
>  
> -	if (!prot || !size || (size | iova | vaddr) & mask) {
> +	if (!prot || !size || (size | iova | vaddr) & (pgsize - 1)) {
>  		ret = -EINVAL;
>  		goto out_unlock;
>  	}
> @@ -1142,6 +1330,12 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
>  	else
>  		ret = vfio_pin_map_dma(iommu, dma, size);
>  
> +	if (!ret && iommu->dirty_page_tracking) {
> +		ret = vfio_dma_bitmap_alloc(dma, pgsize);
> +		if (ret)
> +			vfio_remove_dma(iommu, dma);
> +	}
> +
>  out_unlock:
>  	mutex_unlock(&iommu->lock);
>  	return ret;
> @@ -2288,6 +2482,104 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>  
>  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>  			-EFAULT : 0;
> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> +		struct vfio_iommu_type1_dirty_bitmap dirty;
> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> +		int ret = 0;
> +
> +		if (!iommu->v2)
> +			return -EACCES;
> +
> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> +				    flags);
> +
> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> +			return -EINVAL;
> +
> +		/* only one flag should be set at a time */
> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> +			return -EINVAL;
> +
> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> +			size_t pgsize;
> +
> +			mutex_lock(&iommu->lock);
> +			pgsize = 1 << __ffs(iommu->pgsize_bitmap);
> +			if (!iommu->dirty_page_tracking) {
> +				ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
> +				if (!ret)
> +					iommu->dirty_page_tracking = true;
> +			}
> +			mutex_unlock(&iommu->lock);
> +			return ret;
> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> +			mutex_lock(&iommu->lock);
> +			if (iommu->dirty_page_tracking) {
> +				iommu->dirty_page_tracking = false;
> +				vfio_dma_bitmap_free_all(iommu);
> +			}
> +			mutex_unlock(&iommu->lock);
> +			return 0;
> +		} else if (dirty.flags &
> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> +			struct vfio_iommu_type1_dirty_bitmap_get range;
> +			unsigned long pgshift;
> +			size_t data_size = dirty.argsz - minsz;
> +			size_t iommu_pgsize;
> +
> +			if (!data_size || data_size < sizeof(range))
> +				return -EINVAL;
> +
> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> +					   sizeof(range)))
> +				return -EFAULT;
> +
> +			if (range.iova + range.size < range.iova)
> +				return -EINVAL;
> +			if (!access_ok((void __user *)range.bitmap.data,
> +				       range.bitmap.size))
> +				return -EINVAL;
> +
> +			pgshift = __ffs(range.bitmap.pgsize);
> +			ret = verify_bitmap_size(range.size >> pgshift,
> +						 range.bitmap.size);
> +			if (ret)
> +				return ret;
> +
> +			mutex_lock(&iommu->lock);
> +
> +			iommu_pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
> +
> +			/* allow only smallest supported pgsize */
> +			if (range.bitmap.pgsize != iommu_pgsize) {
> +				ret = -EINVAL;
> +				goto out_unlock;
> +			}
> +			if (range.iova & (iommu_pgsize - 1)) {
> +				ret = -EINVAL;
> +				goto out_unlock;
> +			}
> +			if (!range.size || range.size & (iommu_pgsize - 1)) {
> +				ret = -EINVAL;
> +				goto out_unlock;
> +			}
> +
> +			if (iommu->dirty_page_tracking)
> +				ret = vfio_iova_dirty_bitmap(range.bitmap.data,
> +						iommu, range.iova, range.size,
> +						range.bitmap.pgsize);
> +			else
> +				ret = -EINVAL;
> +out_unlock:
> +			mutex_unlock(&iommu->lock);
> +
> +			return ret;
> +		}
>  	}
>  
>  	return -ENOTTY;
> @@ -2355,10 +2647,19 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
>  
>  	vaddr = dma->vaddr + offset;
>  
> -	if (write)
> +	if (write) {
>  		*copied = copy_to_user((void __user *)vaddr, data,
>  					 count) ? 0 : count;
> -	else
> +		if (*copied && iommu->dirty_page_tracking) {
> +			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> +			/*
> +			 * Bitmap populated with the smallest supported page
> +			 * size
> +			 */
> +			bitmap_set(dma->bitmap, offset >> pgshift,
> +				   *copied >> pgshift);
> +		}
> +	} else
>  		*copied = copy_from_user(data, (void __user *)vaddr,
>  					   count) ? 0 : count;
>  	if (kthread)



* [PATCH Kernel v22 5/8] vfio iommu: Implementation of ioctl for dirty pages tracking
  2020-05-19  7:11     ` Kirti Wankhede
@ 2020-05-19  6:52       ` Kirti Wankhede
  0 siblings, 0 replies; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-19  6:52 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

The VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
- Start dirty pages tracking while migration is active.
- Stop dirty pages tracking.
- Get the dirty pages bitmap. It is the user space application's
  responsibility to copy the content of dirty pages from source to
  destination during migration.

To prevent a DoS attack, memory for the bitmap is allocated per vfio_dma
structure. The bitmap size is calculated considering the smallest
supported page size. Bitmaps are allocated for all vfio_dmas when dirty
logging is enabled.

The bitmap is populated for already pinned pages when it is allocated
for a vfio_dma, using the smallest supported page size. The bitmap is
updated from the pinning functions while tracking is enabled. When the
user application queries the bitmap, check whether the requested page
size is the same as the page size used to populate the bitmap: if equal,
copy the bitmap; if not, return an error.
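
For illustration only, a minimal userspace sketch driving the three
operations is given below. It assumes the vfio_iommu_type1_dirty_bitmap,
vfio_iommu_type1_dirty_bitmap_get and vfio_bitmap layouts introduced
earlier in this series, an already opened container fd, and a pgsize
equal to the minimum page size reported by the migration capability; the
function name and error handling are illustrative only:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int example_dirty_bitmap(int container_fd, __u64 iova, __u64 size,
				__u64 pgsize)
{
	struct vfio_iommu_type1_dirty_bitmap track = {
		.argsz = sizeof(track),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
	};
	struct vfio_iommu_type1_dirty_bitmap *dbitmap;
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	__u64 npages = size / pgsize;

	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &track);

	dbitmap = calloc(1, sizeof(*dbitmap) + sizeof(*range));
	dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
	dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;

	range = (struct vfio_iommu_type1_dirty_bitmap_get *)&dbitmap->data;
	range->iova = iova;
	range->size = size;
	range->bitmap.pgsize = pgsize;
	/* one bit per page, rounded up to a multiple of 64 bits, zeroed */
	range->bitmap.size = ((npages + 63) / 64) * sizeof(__u64);
	range->bitmap.data = calloc(1, range->bitmap.size);

	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);

	/* ... resend pages whose bits are set in range->bitmap.data ... */

	track.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP;
	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &track);

	free(range->bitmap.data);
	free(dbitmap);
	return 0;
}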

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>

Fixed error reported by build bot by changing pgsize type from uint64_t
to size_t.
Reported-by: kbuild test robot <lkp@intel.com>
---
 drivers/vfio/vfio_iommu_type1.c | 313 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 307 insertions(+), 6 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index de17787ffece..0a420594483a 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -72,6 +72,7 @@ struct vfio_iommu {
 	uint64_t		pgsize_bitmap;
 	bool			v2;
 	bool			nesting;
+	bool			dirty_page_tracking;
 };
 
 struct vfio_domain {
@@ -92,6 +93,7 @@ struct vfio_dma {
 	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
 	struct task_struct	*task;
 	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
+	unsigned long		*bitmap;
 };
 
 struct vfio_group {
@@ -126,6 +128,19 @@ struct vfio_regions {
 #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
 					(!list_empty(&iommu->domain_list))
 
+#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
+
+/*
+ * Input argument of number of bits to bitmap_set() is unsigned integer, which
+ * further casts to signed integer for unaligned multi-bit operation,
+ * __bitmap_set().
+ * Then maximum bitmap size supported is 2^31 bits divided by 2^3 bits/byte,
+ * that is 2^28 (256 MB) which maps to 2^31 * 2^12 = 2^43 (8TB) on 4K page
+ * system.
+ */
+#define DIRTY_BITMAP_PAGES_MAX	 ((u64)INT_MAX)
+#define DIRTY_BITMAP_SIZE_MAX	 DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX)
+
 static int put_pfn(unsigned long pfn, int prot);
 
 /*
@@ -176,6 +191,80 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
 	rb_erase(&old->node, &iommu->dma_list);
 }
 
+
+static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
+{
+	uint64_t npages = dma->size / pgsize;
+
+	if (npages > DIRTY_BITMAP_PAGES_MAX)
+		return -EINVAL;
+
+	/*
+	 * Allocate extra 64 bits that are used to calculate shift required for
+	 * bitmap_shift_left() to manipulate and club unaligned number of pages
+	 * in adjacent vfio_dma ranges.
+	 */
+	dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages) + sizeof(u64),
+			       GFP_KERNEL);
+	if (!dma->bitmap)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void vfio_dma_bitmap_free(struct vfio_dma *dma)
+{
+	kfree(dma->bitmap);
+	dma->bitmap = NULL;
+}
+
+static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
+{
+	struct rb_node *p;
+
+	for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
+		struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn, node);
+
+		bitmap_set(dma->bitmap, (vpfn->iova - dma->iova) / pgsize, 1);
+	}
+}
+
+static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
+{
+	struct rb_node *n;
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+		int ret;
+
+		ret = vfio_dma_bitmap_alloc(dma, pgsize);
+		if (ret) {
+			struct rb_node *p;
+
+			for (p = rb_prev(n); p; p = rb_prev(p)) {
+				struct vfio_dma *dma = rb_entry(p,
+							struct vfio_dma, node);
+
+				vfio_dma_bitmap_free(dma);
+			}
+			return ret;
+		}
+		vfio_dma_populate_bitmap(dma, pgsize);
+	}
+	return 0;
+}
+
+static void vfio_dma_bitmap_free_all(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+
+		vfio_dma_bitmap_free(dma);
+	}
+}
+
 /*
  * Helper Functions for host iova-pfn list
  */
@@ -568,6 +657,17 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 			vfio_unpin_page_external(dma, iova, do_accounting);
 			goto pin_unwind;
 		}
+
+		if (iommu->dirty_page_tracking) {
+			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+
+			/*
+			 * Bitmap populated with the smallest supported page
+			 * size
+			 */
+			bitmap_set(dma->bitmap,
+				   (iova - dma->iova) >> pgshift, 1);
+		}
 	}
 
 	ret = i;
@@ -802,6 +902,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 	vfio_unmap_unpin(iommu, dma, true);
 	vfio_unlink_dma(iommu, dma);
 	put_task_struct(dma->task);
+	vfio_dma_bitmap_free(dma);
 	kfree(dma);
 	iommu->dma_avail++;
 }
@@ -829,6 +930,93 @@ static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	}
 }
 
+static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
+			      dma_addr_t base_iova, size_t pgsize)
+{
+	unsigned long pgshift = __ffs(pgsize);
+	unsigned long nbits = dma->size >> pgshift;
+	unsigned long bit_offset = (dma->iova - base_iova) >> pgshift;
+	unsigned long copy_offset = bit_offset / BITS_PER_LONG;
+	unsigned long shift = bit_offset % BITS_PER_LONG;
+	unsigned long leftover;
+
+	/* mark all pages dirty if all pages are pinned and mapped. */
+	if (dma->iommu_mapped)
+		bitmap_set(dma->bitmap, 0, nbits);
+
+	if (shift) {
+		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
+				  nbits + shift);
+
+		if (copy_from_user(&leftover, (u64 *)bitmap + copy_offset,
+				   sizeof(leftover)))
+			return -EFAULT;
+
+		bitmap_or(dma->bitmap, dma->bitmap, &leftover, shift);
+	}
+
+	if (copy_to_user((u64 *)bitmap + copy_offset, dma->bitmap,
+			 DIRTY_BITMAP_BYTES(nbits + shift)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
+				  dma_addr_t iova, size_t size, size_t pgsize)
+{
+	struct vfio_dma *dma;
+	struct rb_node *n;
+	unsigned long pgshift = __ffs(pgsize);
+	int ret;
+
+	/*
+	 * GET_BITMAP request must fully cover vfio_dma mappings.  Multiple
+	 * vfio_dma mappings may be clubbed by specifying large ranges, but
+	 * there must not be any previous mappings bisected by the range.
+	 * An error will be returned if these conditions are not met.
+	 */
+	dma = vfio_find_dma(iommu, iova, 1);
+	if (dma && dma->iova != iova)
+		return -EINVAL;
+
+	dma = vfio_find_dma(iommu, iova + size - 1, 0);
+	if (dma && dma->iova + dma->size != iova + size)
+		return -EINVAL;
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+
+		if (dma->iova < iova)
+			continue;
+
+		if (dma->iova > iova + size)
+			break;
+
+		ret = update_user_bitmap(bitmap, dma, iova, pgsize);
+		if (ret)
+			return ret;
+
+		/*
+		 * Re-populate bitmap to include all pinned pages which are
+		 * considered as dirty but exclude pages which are unpinned and
+		 * pages which are marked dirty by vfio_dma_rw()
+		 */
+		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
+		vfio_dma_populate_bitmap(dma, pgsize);
+	}
+	return 0;
+}
+
+static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
+{
+	if (!npages || !bitmap_size || (bitmap_size > DIRTY_BITMAP_SIZE_MAX) ||
+	    (bitmap_size < DIRTY_BITMAP_BYTES(npages)))
+		return -EINVAL;
+
+	return 0;
+}
+
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			     struct vfio_iommu_type1_dma_unmap *unmap)
 {
@@ -1046,7 +1234,7 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	unsigned long vaddr = map->vaddr;
 	size_t size = map->size;
 	int ret = 0, prot = 0;
-	uint64_t mask;
+	size_t pgsize;
 	struct vfio_dma *dma;
 
 	/* Verify that none of our __u64 fields overflow */
@@ -1061,11 +1249,11 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 
 	mutex_lock(&iommu->lock);
 
-	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
+	pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
 
-	WARN_ON(mask & PAGE_MASK);
+	WARN_ON((pgsize - 1) & PAGE_MASK);
 
-	if (!prot || !size || (size | iova | vaddr) & mask) {
+	if (!prot || !size || (size | iova | vaddr) & (pgsize - 1)) {
 		ret = -EINVAL;
 		goto out_unlock;
 	}
@@ -1142,6 +1330,12 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	else
 		ret = vfio_pin_map_dma(iommu, dma, size);
 
+	if (!ret && iommu->dirty_page_tracking) {
+		ret = vfio_dma_bitmap_alloc(dma, pgsize);
+		if (ret)
+			vfio_remove_dma(iommu, dma);
+	}
+
 out_unlock:
 	mutex_unlock(&iommu->lock);
 	return ret;
@@ -2288,6 +2482,104 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 		return copy_to_user((void __user *)arg, &unmap, minsz) ?
 			-EFAULT : 0;
+	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
+		struct vfio_iommu_type1_dirty_bitmap dirty;
+		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
+		int ret = 0;
+
+		if (!iommu->v2)
+			return -EACCES;
+
+		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
+				    flags);
+
+		if (copy_from_user(&dirty, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (dirty.argsz < minsz || dirty.flags & ~mask)
+			return -EINVAL;
+
+		/* only one flag should be set at a time */
+		if (__ffs(dirty.flags) != __fls(dirty.flags))
+			return -EINVAL;
+
+		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
+			size_t pgsize;
+
+			mutex_lock(&iommu->lock);
+			pgsize = 1 << __ffs(iommu->pgsize_bitmap);
+			if (!iommu->dirty_page_tracking) {
+				ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
+				if (!ret)
+					iommu->dirty_page_tracking = true;
+			}
+			mutex_unlock(&iommu->lock);
+			return ret;
+		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
+			mutex_lock(&iommu->lock);
+			if (iommu->dirty_page_tracking) {
+				iommu->dirty_page_tracking = false;
+				vfio_dma_bitmap_free_all(iommu);
+			}
+			mutex_unlock(&iommu->lock);
+			return 0;
+		} else if (dirty.flags &
+				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
+			struct vfio_iommu_type1_dirty_bitmap_get range;
+			unsigned long pgshift;
+			size_t data_size = dirty.argsz - minsz;
+			size_t iommu_pgsize;
+
+			if (!data_size || data_size < sizeof(range))
+				return -EINVAL;
+
+			if (copy_from_user(&range, (void __user *)(arg + minsz),
+					   sizeof(range)))
+				return -EFAULT;
+
+			if (range.iova + range.size < range.iova)
+				return -EINVAL;
+			if (!access_ok((void __user *)range.bitmap.data,
+				       range.bitmap.size))
+				return -EINVAL;
+
+			pgshift = __ffs(range.bitmap.pgsize);
+			ret = verify_bitmap_size(range.size >> pgshift,
+						 range.bitmap.size);
+			if (ret)
+				return ret;
+
+			mutex_lock(&iommu->lock);
+
+			iommu_pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
+
+			/* allow only smallest supported pgsize */
+			if (range.bitmap.pgsize != iommu_pgsize) {
+				ret = -EINVAL;
+				goto out_unlock;
+			}
+			if (range.iova & (iommu_pgsize - 1)) {
+				ret = -EINVAL;
+				goto out_unlock;
+			}
+			if (!range.size || range.size & (iommu_pgsize - 1)) {
+				ret = -EINVAL;
+				goto out_unlock;
+			}
+
+			if (iommu->dirty_page_tracking)
+				ret = vfio_iova_dirty_bitmap(range.bitmap.data,
+						iommu, range.iova, range.size,
+						range.bitmap.pgsize);
+			else
+				ret = -EINVAL;
+out_unlock:
+			mutex_unlock(&iommu->lock);
+
+			return ret;
+		}
 	}
 
 	return -ENOTTY;
@@ -2355,10 +2647,19 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
 
 	vaddr = dma->vaddr + offset;
 
-	if (write)
+	if (write) {
 		*copied = copy_to_user((void __user *)vaddr, data,
 					 count) ? 0 : count;
-	else
+		if (*copied && iommu->dirty_page_tracking) {
+			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+			/*
+			 * Bitmap populated with the smallest supported page
+			 * size
+			 */
+			bitmap_set(dma->bitmap, offset >> pgshift,
+				   *copied >> pgshift);
+		}
+	} else
 		*copied = copy_from_user(data, (void __user *)vaddr,
 					   count) ? 0 : count;
 	if (kthread)
-- 
2.7.0



* [PATCH Kernel v22 6/8] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-05-18  5:56 ` [PATCH Kernel v22 6/8] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
@ 2020-05-19  6:54   ` Kirti Wankhede
  2020-05-20 10:27     ` Cornelia Huck
  0 siblings, 1 reply; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-19  6:54 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

DMA mapped pages, including those pinned by mdev vendor drivers, might
get unpinned and unmapped while migration is active and the device is
still running. For example, during the pre-copy phase, while the guest
driver can still access those pages, the host device or vendor driver
can dirty these mapped pages. Such pages should be marked dirty so as to
maintain memory consistency for a user making use of dirty page
tracking.

To get the bitmap during unmap, the user should allocate memory for the
bitmap, zero it, set the size of the allocated memory, set the page size
to be considered for the bitmap, and set the flag
VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, as in the sketch below.
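
For illustration only, a minimal userspace sketch of an unmap that also
collects the dirty bitmap, assuming the struct layouts from this series,
an open container fd, dirty tracking already started, and pgsize equal
to the minimum supported page size; the function name and error handling
are illustrative only:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int example_unmap_get_dirty(int container_fd, __u64 iova, __u64 size,
				   __u64 pgsize)
{
	struct vfio_iommu_type1_dma_unmap *unmap;
	struct vfio_bitmap *bitmap;
	__u64 npages = size / pgsize;
	int ret;

	unmap = calloc(1, sizeof(*unmap) + sizeof(*bitmap));
	unmap->argsz = sizeof(*unmap) + sizeof(*bitmap);
	unmap->flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
	unmap->iova = iova;
	unmap->size = size;

	bitmap = (struct vfio_bitmap *)&unmap->data;
	bitmap->pgsize = pgsize;
	/* one bit per page, rounded up to a multiple of 64 bits, zeroed */
	bitmap->size = ((npages + 63) / 64) * sizeof(__u64);
	bitmap->data = calloc(1, bitmap->size);

	ret = ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, unmap);

	/* on success, bitmap->data holds dirty bits for the unmapped range */
	free(bitmap->data);
	free(unmap);
	return ret;
}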

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 62 +++++++++++++++++++++++++++++++++--------
 include/uapi/linux/vfio.h       | 10 +++++++
 2 files changed, 61 insertions(+), 11 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 0a420594483a..963ae4348b3c 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1018,23 +1018,25 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
-			     struct vfio_iommu_type1_dma_unmap *unmap)
+			     struct vfio_iommu_type1_dma_unmap *unmap,
+			     struct vfio_bitmap *bitmap)
 {
-	uint64_t mask;
 	struct vfio_dma *dma, *dma_last = NULL;
-	size_t unmapped = 0;
+	size_t unmapped = 0, pgsize;
 	int ret = 0, retries = 0;
+	unsigned long pgshift;
 
 	mutex_lock(&iommu->lock);
 
-	mask = ((uint64_t)1 << __ffs(iommu->pgsize_bitmap)) - 1;
+	pgshift = __ffs(iommu->pgsize_bitmap);
+	pgsize = (size_t)1 << pgshift;
 
-	if (unmap->iova & mask) {
+	if (unmap->iova & (pgsize - 1)) {
 		ret = -EINVAL;
 		goto unlock;
 	}
 
-	if (!unmap->size || unmap->size & mask) {
+	if (!unmap->size || unmap->size & (pgsize - 1)) {
 		ret = -EINVAL;
 		goto unlock;
 	}
@@ -1045,9 +1047,15 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		goto unlock;
 	}
 
-	WARN_ON(mask & PAGE_MASK);
-again:
+	/* When dirty tracking is enabled, allow only min supported pgsize */
+	if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
+	    (!iommu->dirty_page_tracking || (bitmap->pgsize != pgsize))) {
+		ret = -EINVAL;
+		goto unlock;
+	}
 
+	WARN_ON((pgsize - 1) & PAGE_MASK);
+again:
 	/*
 	 * vfio-iommu-type1 (v1) - User mappings were coalesced together to
 	 * avoid tracking individual mappings.  This means that the granularity
@@ -1085,6 +1093,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			ret = -EINVAL;
 			goto unlock;
 		}
+
 		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
 		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
 			ret = -EINVAL;
@@ -1128,6 +1137,14 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			mutex_lock(&iommu->lock);
 			goto again;
 		}
+
+		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
+			ret = update_user_bitmap(bitmap->data, dma,
+						 unmap->iova, pgsize);
+			if (ret)
+				break;
+		}
+
 		unmapped += dma->size;
 		vfio_remove_dma(iommu, dma);
 	}
@@ -2466,17 +2483,40 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
 		struct vfio_iommu_type1_dma_unmap unmap;
-		long ret;
+		struct vfio_bitmap bitmap = { 0 };
+		int ret;
 
 		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
 
 		if (copy_from_user(&unmap, (void __user *)arg, minsz))
 			return -EFAULT;
 
-		if (unmap.argsz < minsz || unmap.flags)
+		if (unmap.argsz < minsz ||
+		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
 			return -EINVAL;
 
-		ret = vfio_dma_do_unmap(iommu, &unmap);
+		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
+			unsigned long pgshift;
+
+			if (unmap.argsz < (minsz + sizeof(bitmap)))
+				return -EINVAL;
+
+			if (copy_from_user(&bitmap,
+					   (void __user *)(arg + minsz),
+					   sizeof(bitmap)))
+				return -EFAULT;
+
+			if (!access_ok((void __user *)bitmap.data, bitmap.size))
+				return -EINVAL;
+
+			pgshift = __ffs(bitmap.pgsize);
+			ret = verify_bitmap_size(unmap.size >> pgshift,
+						 bitmap.size);
+			if (ret)
+				return ret;
+		}
+
+		ret = vfio_dma_do_unmap(iommu, &unmap, &bitmap);
 		if (ret)
 			return ret;
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 4850c1fef1f8..a1dd2150971e 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1048,12 +1048,22 @@ struct vfio_bitmap {
  * field.  No guarantee is made to the user that arbitrary unmaps of iova
  * or size different from those used in the original mapping call will
  * succeed.
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get the dirty bitmap
+ * before unmapping IO virtual addresses. When this flag is set, the user must
+ * provide data[] as a struct vfio_bitmap. The user must allocate memory for
+ * the bitmap, zero that memory, and set the size of the allocated memory in
+ * the vfio_bitmap.size field. A bit in the bitmap represents one page of the
+ * user-provided page size in 'pgsize', consecutively starting from the iova
+ * offset. A set bit indicates that the page at that offset from iova is
+ * dirty. The bitmap for the unmapped range is returned in vfio_bitmap.data.
  */
 struct vfio_iommu_type1_dma_unmap {
 	__u32	argsz;
 	__u32	flags;
+#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
 	__u64	iova;				/* IO virtual address */
 	__u64	size;				/* Size of mapping (bytes) */
+	__u8    data[];
 };
 
 #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
-- 
2.7.0



* [PATCH Kernel v22 8/8] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-05-18  5:56 ` [PATCH Kernel v22 8/8] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
@ 2020-05-19  6:54   ` Kirti Wankhede
  0 siblings, 0 replies; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-19  6:54 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Added a check so that only singleton IOMMU groups can pin pages.
From the point when the vendor driver pins any pages, the IOMMU group's
dirty page scope is considered limited to pinned pages.

To avoid walking the group lists frequently, added a flag,
pinned_page_dirty_scope, to indicate whether every vfio_group of every
vfio_domain in the domain_list has its dirty page scope limited to
pinned pages. This flag is updated on the first pin-pages request for an
IOMMU group and on attaching/detaching a group.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio.c             |  13 +++--
 drivers/vfio/vfio_iommu_type1.c | 103 +++++++++++++++++++++++++++++++++++++---
 include/linux/vfio.h            |   4 +-
 3 files changed, 109 insertions(+), 11 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 765e0e5d83ed..580099afeaff 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -85,6 +85,7 @@ struct vfio_group {
 	atomic_t			opened;
 	wait_queue_head_t		container_q;
 	bool				noiommu;
+	unsigned int			dev_counter;
 	struct kvm			*kvm;
 	struct blocking_notifier_head	notifier;
 };
@@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
 
 	mutex_lock(&group->device_lock);
 	list_add(&device->group_next, &group->device_list);
+	group->dev_counter++;
 	mutex_unlock(&group->device_lock);
 
 	return device;
@@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
 	struct vfio_group *group = device->group;
 
 	list_del(&device->group_next);
+	group->dev_counter--;
 	mutex_unlock(&group->device_lock);
 
 	dev_set_drvdata(device->dev, NULL);
@@ -1945,6 +1948,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	if (!group)
 		return -ENODEV;
 
+	if (group->dev_counter > 1)
+		return -EINVAL;
+
 	ret = vfio_group_add_container_user(group);
 	if (ret)
 		goto err_pin_pages;
@@ -1952,7 +1958,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
-		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
+		ret = driver->ops->pin_pages(container->iommu_data,
+					     group->iommu_group, user_pfn,
 					     npage, prot, phys_pfn);
 	else
 		ret = -ENOTTY;
@@ -2050,8 +2057,8 @@ int vfio_group_pin_pages(struct vfio_group *group,
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
 		ret = driver->ops->pin_pages(container->iommu_data,
-					     user_iova_pfn, npage,
-					     prot, phys_pfn);
+					     group->iommu_group, user_iova_pfn,
+					     npage, prot, phys_pfn);
 	else
 		ret = -ENOTTY;
 
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index d74b76919cbb..f5b79a71e9f7 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -73,6 +73,7 @@ struct vfio_iommu {
 	bool			v2;
 	bool			nesting;
 	bool			dirty_page_tracking;
+	bool			pinned_page_dirty_scope;
 };
 
 struct vfio_domain {
@@ -100,6 +101,7 @@ struct vfio_group {
 	struct iommu_group	*iommu_group;
 	struct list_head	next;
 	bool			mdev_group;	/* An mdev group */
+	bool			pinned_page_dirty_scope;
 };
 
 struct vfio_iova {
@@ -143,6 +145,10 @@ struct vfio_regions {
 
 static int put_pfn(unsigned long pfn, int prot);
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
+					       struct iommu_group *iommu_group);
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -592,11 +598,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
 }
 
 static int vfio_iommu_type1_pin_pages(void *iommu_data,
+				      struct iommu_group *iommu_group,
 				      unsigned long *user_pfn,
 				      int npage, int prot,
 				      unsigned long *phys_pfn)
 {
 	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *group;
 	int i, j, ret;
 	unsigned long remote_vaddr;
 	struct vfio_dma *dma;
@@ -669,8 +677,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 				   (iova - dma->iova) >> pgshift, 1);
 		}
 	}
-
 	ret = i;
+
+	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
+	if (!group->pinned_page_dirty_scope) {
+		group->pinned_page_dirty_scope = true;
+		update_pinned_page_dirty_scope(iommu);
+	}
+
 	goto pin_done;
 
 pin_unwind:
@@ -930,8 +944,9 @@ static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	}
 }
 
-static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
-			      dma_addr_t base_iova, size_t pgsize)
+static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
+			      struct vfio_dma *dma, dma_addr_t base_iova,
+			      size_t pgsize)
 {
 	unsigned long pgshift = __ffs(pgsize);
 	unsigned long nbits = dma->size >> pgshift;
@@ -940,8 +955,11 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
 	unsigned long shift = bit_offset % BITS_PER_LONG;
 	unsigned long leftover;
 
-	/* mark all pages dirty if all pages are pinned and mapped. */
-	if (dma->iommu_mapped)
+	/*
+	 * mark all pages dirty if any IOMMU capable device is not able
+	 * to report dirty pages and all pages are pinned and mapped.
+	 */
+	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
 		bitmap_set(dma->bitmap, 0, nbits);
 
 	if (shift) {
@@ -993,7 +1011,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 		if (dma->iova > iova + size)
 			break;
 
-		ret = update_user_bitmap(bitmap, dma, iova, pgsize);
+		ret = update_user_bitmap(bitmap, iommu, dma, iova, pgsize);
 		if (ret)
 			return ret;
 
@@ -1139,7 +1157,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		}
 
 		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
-			ret = update_user_bitmap(bitmap->data, dma,
+			ret = update_user_bitmap(bitmap->data, iommu, dma,
 						 unmap->iova, pgsize);
 			if (ret)
 				break;
@@ -1491,6 +1509,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
 	return NULL;
 }
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
+					       struct iommu_group *iommu_group)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group = NULL;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		group = find_iommu_group(domain, iommu_group);
+		if (group)
+			return group;
+	}
+
+	if (iommu->external_domain)
+		group = find_iommu_group(iommu->external_domain, iommu_group);
+
+	return group;
+}
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->pinned_page_dirty_scope) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	if (iommu->external_domain) {
+		domain = iommu->external_domain;
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->pinned_page_dirty_scope) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	iommu->pinned_page_dirty_scope = true;
+}
+
 static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
 				  phys_addr_t *base)
 {
@@ -1898,6 +1961,16 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 
 			list_add(&group->next,
 				 &iommu->external_domain->group_list);
+			/*
+			 * Non-iommu backed group cannot dirty memory directly,
+			 * it can only use interfaces that provide dirty
+			 * tracking.
+			 * The iommu scope can only be promoted with the
+			 * addition of a dirty tracking group.
+			 */
+			group->pinned_page_dirty_scope = true;
+			if (!iommu->pinned_page_dirty_scope)
+				update_pinned_page_dirty_scope(iommu);
 			mutex_unlock(&iommu->lock);
 
 			return 0;
@@ -2021,6 +2094,13 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
+
+	/*
+	 * An iommu backed group can dirty memory directly and therefore
+	 * demotes the iommu scope until it declares itself dirty tracking
+	 * capable via the page pinning interface.
+	 */
+	iommu->pinned_page_dirty_scope = false;
 	mutex_unlock(&iommu->lock);
 	vfio_iommu_resv_free(&group_resv_regions);
 
@@ -2173,6 +2253,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
+	bool update_dirty_scope = false;
 	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
@@ -2180,6 +2261,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	if (iommu->external_domain) {
 		group = find_iommu_group(iommu->external_domain, iommu_group);
 		if (group) {
+			update_dirty_scope = !group->pinned_page_dirty_scope;
 			list_del(&group->next);
 			kfree(group);
 
@@ -2209,6 +2291,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			continue;
 
 		vfio_iommu_detach_group(domain, group);
+		update_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
 		kfree(group);
 		/*
@@ -2240,6 +2323,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		vfio_iommu_iova_free(&iova_copy);
 
 detach_group_done:
+	/*
+	 * Removal of a group without dirty tracking may allow the iommu scope
+	 * to be promoted.
+	 */
+	if (update_dirty_scope)
+		update_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 }
 
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 5d92ee15d098..38d3c6a8dc7e 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -76,7 +76,9 @@ struct vfio_iommu_driver_ops {
 					struct iommu_group *group);
 	void		(*detach_group)(void *iommu_data,
 					struct iommu_group *group);
-	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
+	int		(*pin_pages)(void *iommu_data,
+				     struct iommu_group *group,
+				     unsigned long *user_pfn,
 				     int npage, int prot,
 				     unsigned long *phys_pfn);
 	int		(*unpin_pages)(void *iommu_data,
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 5/8] vfio iommu: Implementation of ioctl for dirty pages tracking
  2020-05-18 21:53   ` Alex Williamson
@ 2020-05-19  7:11     ` Kirti Wankhede
  2020-05-19  6:52       ` Kirti Wankhede
  0 siblings, 1 reply; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-19  7:11 UTC (permalink / raw)
  To: Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm



On 5/19/2020 3:23 AM, Alex Williamson wrote:
> On Mon, 18 May 2020 11:26:34 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
>> - Start dirty pages tracking while migration is active
>> - Stop dirty pages tracking.
>> - Get dirty pages bitmap. Its user space application's responsibility to
>>    copy content of dirty pages from source to destination during migration.
>>
>> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
>> structure. Bitmap size is calculated considering smallest supported page
>> size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
>>
>> Bitmap is populated for already pinned pages when bitmap is allocated for
>> a vfio_dma with the smallest supported page size. Update bitmap from
>> pinning functions when tracking is enabled. When user application queries
>> bitmap, check if requested page size is same as page size used to
>> populated bitmap. If it is equal, copy bitmap, but if not equal, return
>> error.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>>
>> Fixed error reported by build bot by changing pgsize type from uint64_t
>> to size_t.
>> Reported-by: kbuild test robot <lkp@intel.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 313 +++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 307 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index de17787ffece..bf740fef196f 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -72,6 +72,7 @@ struct vfio_iommu {
>>   	uint64_t		pgsize_bitmap;
>>   	bool			v2;
>>   	bool			nesting;
>> +	bool			dirty_page_tracking;
>>   };
>>   
>>   struct vfio_domain {
>> @@ -92,6 +93,7 @@ struct vfio_dma {
>>   	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>>   	struct task_struct	*task;
>>   	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
>> +	unsigned long		*bitmap;
>>   };
>>   
>>   struct vfio_group {
>> @@ -126,6 +128,19 @@ struct vfio_regions {
>>   #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
>>   					(!list_empty(&iommu->domain_list))
>>   
>> +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
>> +
>> +/*
>> + * Input argument of number of bits to bitmap_set() is unsigned integer, which
>> + * further casts to signed integer for unaligned multi-bit operation,
>> + * __bitmap_set().
>> + * Then maximum bitmap size supported is 2^31 bits divided by 2^3 bits/byte,
>> + * that is 2^28 (256 MB) which maps to 2^31 * 2^12 = 2^43 (8TB) on 4K page
>> + * system.
>> + */
>> +#define DIRTY_BITMAP_PAGES_MAX	 ((u64)INT_MAX)
>> +#define DIRTY_BITMAP_SIZE_MAX	 DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX)
>> +
>>   static int put_pfn(unsigned long pfn, int prot);
>>   
>>   /*
>> @@ -176,6 +191,74 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>>   	rb_erase(&old->node, &iommu->dma_list);
>>   }
>>   
>> +
>> +static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
>> +{
>> +	uint64_t npages = dma->size / pgsize;
>> +
>> +	if (npages > DIRTY_BITMAP_PAGES_MAX)
>> +		return -EINVAL;
>> +
>> +	dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> 
> Curious that the extra 8-bytes are added in the next patch, but they're
> just as necessary here.
>

Yes, moving it into this patch.
While respinning, I had to update patches 6/8 and 8/8 as well, so I'm
updating three patches.

> We also have the explanation above about why we have the signed int
> size limitation, but we sort of ignore that when adding the bytes here.
> That limitation is derived from __bitmap_set(), whereas we only need
> these extra bits for bitmap_shift_left(), where I can't spot a signed
> int limitation.  Do you come to the same conclusion?  

That's right.

> Maybe worth a
> comment why we think we can exceed DIRTY_BITMAP_PAGES_MAX for that
> extra padding.
> 

ok.

>> +	if (!dma->bitmap)
>> +		return -ENOMEM;
>> +
>> +	return 0;
>> +}
>> +
>> +static void vfio_dma_bitmap_free(struct vfio_dma *dma)
>> +{
>> +	kfree(dma->bitmap);
>> +	dma->bitmap = NULL;
>> +}
>> +
>> +static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
>> +{
>> +	struct rb_node *p;
>> +
>> +	for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
>> +		struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn, node);
>> +
>> +		bitmap_set(dma->bitmap, (vpfn->iova - dma->iova) / pgsize, 1);
>> +	}
>> +}
>> +
>> +static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
>> +{
>> +	struct rb_node *n = rb_first(&iommu->dma_list);
>> +
>> +	for (; n; n = rb_next(n)) {
> 
> Nit, the previous function above sets the initial value in the for()
> statement, it looks like it would fit in 80 columns here too.  We have
> examples either way in the code, so not a must fix.
> 
>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> +		int ret;
>> +
>> +		ret = vfio_dma_bitmap_alloc(dma, pgsize);
>> +		if (ret) {
>> +			struct rb_node *p = rb_prev(n);
>> +
>> +			for (; p; p = rb_prev(p)) {
> 
> Same.
> 
>> +				struct vfio_dma *dma = rb_entry(n,
>> +							struct vfio_dma, node);
>> +
>> +				vfio_dma_bitmap_free(dma);
>> +			}
>> +			return ret;
>> +		}
>> +		vfio_dma_populate_bitmap(dma, pgsize);
>> +	}
>> +	return 0;
>> +}
>> +
>> +static void vfio_dma_bitmap_free_all(struct vfio_iommu *iommu)
>> +{
>> +	struct rb_node *n = rb_first(&iommu->dma_list);
>> +
>> +	for (; n; n = rb_next(n)) {
> 
> And another.
> 
>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> +
>> +		vfio_dma_bitmap_free(dma);
>> +	}
>> +}
>> +
>>   /*
>>    * Helper Functions for host iova-pfn list
>>    */
>> @@ -568,6 +651,17 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>>   			vfio_unpin_page_external(dma, iova, do_accounting);
>>   			goto pin_unwind;
>>   		}
>> +
>> +		if (iommu->dirty_page_tracking) {
>> +			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
>> +
>> +			/*
>> +			 * Bitmap populated with the smallest supported page
>> +			 * size
>> +			 */
>> +			bitmap_set(dma->bitmap,
>> +				   (iova - dma->iova) >> pgshift, 1);
>> +		}
>>   	}
>>   
>>   	ret = i;
>> @@ -802,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>>   	vfio_unmap_unpin(iommu, dma, true);
>>   	vfio_unlink_dma(iommu, dma);
>>   	put_task_struct(dma->task);
>> +	vfio_dma_bitmap_free(dma);
>>   	kfree(dma);
>>   	iommu->dma_avail++;
>>   }
>> @@ -829,6 +924,99 @@ static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>>   	}
>>   }
>>   
>> +static int update_user_bitmap(u64 __user *bitmap, struct vfio_dma *dma,
>> +			      dma_addr_t base_iova, size_t pgsize)
>> +{
>> +	unsigned long pgshift = __ffs(pgsize);
>> +	unsigned long nbits = dma->size >> pgshift;
>> +	unsigned long bit_offset = (dma->iova - base_iova) >> pgshift;
>> +	unsigned long copy_offset = bit_offset / BITS_PER_LONG;
>> +	unsigned long shift = bit_offset % BITS_PER_LONG;
>> +	unsigned long leftover;
>> +
>> +	/* mark all pages dirty if all pages are pinned and mapped. */
>> +	if (dma->iommu_mapped)
>> +		bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
> 
> We already calculated 'dma->size >> pgshift' as nbits above, we should
> use nbits here.  I imagine the compiler will optimize this, so take it
> as a nit.
> 
>> +
>> +	if (shift) {
>> +		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
>> +				  nbits + shift);
>> +
>> +		if (copy_from_user(&leftover, (u64 *)bitmap + copy_offset,
>> +				   sizeof(leftover)))
>> +			return -EFAULT;
>> +
>> +		bitmap_or(dma->bitmap, dma->bitmap, &leftover, shift);
>> +	}
>> +
>> +	if (copy_to_user((u64 *)bitmap + copy_offset, dma->bitmap,
>> +			 DIRTY_BITMAP_BYTES(nbits + shift)))
>> +		return -EFAULT;
>> +
>> +	return 0;
>> +}
>> +
>> +static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>> +				  dma_addr_t iova, size_t size, size_t pgsize)
>> +{
>> +	struct vfio_dma *dma;
>> +	struct rb_node *n;
>> +	unsigned long pgshift = __ffs(pgsize);
>> +	int ret;
>> +
>> +	/*
>> +	 * GET_BITMAP request must fully cover vfio_dma mappings.  Multiple
>> +	 * vfio_dma mappings may be clubbed by specifying large ranges, but
>> +	 * there must not be any previous mappings bisected by the range.
>> +	 * An error will be returned if these conditions are not met.
>> +	 */
>> +	dma = vfio_find_dma(iommu, iova, 1);
>> +	if (dma && dma->iova != iova)
>> +		return -EINVAL;
>> +
>> +	dma = vfio_find_dma(iommu, iova + size - 1, 0);
>> +	if (dma && dma->iova + dma->size != iova + size)
>> +		return -EINVAL;
>> +
>> +	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
>> +		struct vfio_dma *ldma = rb_entry(n, struct vfio_dma, node);
>> +
>> +		if (ldma->iova >= iova)
>> +			break;
>> +	}
>> +
>> +	dma = n ? rb_entry(n, struct vfio_dma, node) : NULL;
>> +
>> +	while (dma && (dma->iova >= iova) &&
> 
> 'dma->iova >= iova' is necessarily true per the above loop, right?
> We'd have NULL if we never reach an iova within range.
> 
>> +		(dma->iova + dma->size <= iova + size)) {
> 
> I think 'dma->iova < iova + size' is sufficient here, we've already
> tested that there are no dmas overlapping the ends, they're all either
> fully contained or fully outside.
> 
>> +
> 
> The double loop here is a little unnecessary, we could combine them
> into:
> 
> for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> 	struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> 
> 	if (dma->iova < iova)
> 		continue;
> 
> 	if (dma->iova > iova + size)
> 		break;
> 
> 	ret = update_user_bitmap(bitmap, dma, iova, pgsize);
> 	if (ret)
> 		return ret;
> 
> 	/*
> 	 * Re-populate bitmap to include all pinned pages which are
> 	 * considered as dirty but exclude pages which are unpinned and
> 	 * pages which are marked dirty by vfio_dma_rw()
> 	 */
> 	bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
> 	vfio_dma_populate_bitmap(dma, pgsize);
> }
> 
> I think what you have works, but it's a little more complicated than it
> needs to be.  Thanks,
> 

OK, changing it.

Thanks,
Kirti

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-18  5:56 [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices Kirti Wankhede
                   ` (7 preceding siblings ...)
  2020-05-18  5:56 ` [PATCH Kernel v22 8/8] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
@ 2020-05-19 16:58 ` Alex Williamson
  2020-05-20  2:55   ` Yan Zhao
  2020-05-25  6:59   ` Yan Zhao
  8 siblings, 2 replies; 40+ messages in thread
From: Alex Williamson @ 2020-05-19 16:58 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

Hi folks,

My impression is that we're getting pretty close to a workable
implementation here with v22 plus respins of patches 5, 6, and 8.  We
also have a matching QEMU series and a proposal for a new i40e
consumer, as well as I assume GVT-g updates happening internally at
Intel.  I expect all of the latter needs further review and discussion,
but we should be at the point where we can validate these proposed
kernel interfaces.  Therefore I'd like to make a call for reviews so
that we can get this wrapped up for the v5.8 merge window.  I know
Connie has some outstanding documentation comments and I'd like to make
sure everyone has an opportunity to check that their comments have been
addressed and we don't discover any new blocking issues.  Please send
your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
interface and implementation.  Thanks!

Alex

On Mon, 18 May 2020 11:26:29 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Hi,
> 
> This patch set adds:
> * IOCTL VFIO_IOMMU_DIRTY_PAGES to get dirty pages bitmap with
>   respect to IOMMU container rather than per device. All pages pinned by
>   vendor driver through vfio_pin_pages external API has to be marked as
>   dirty during  migration. When IOMMU capable device is present in the
>   container and all pages are pinned and mapped, then all pages are marked
>   dirty.
>   When there are CPU writes, CPU dirty page tracking can identify dirtied
>   pages, but any page pinned by vendor driver can also be written by
>   device. As of now there is no device which has hardware support for
>   dirty page tracking. So all pages which are pinned should be considered
>   as dirty.
>   This ioctl is also used to start/stop dirty pages tracking for pinned and
>   unpinned pages while migration is active.
> 
> * Updated IOCTL VFIO_IOMMU_UNMAP_DMA to get dirty pages bitmap before
>   unmapping IO virtual address range.
>   With vIOMMU, during pre-copy phase of migration, while CPUs are still
>   running, IO virtual address unmap can happen while device still keeping
>   reference of guest pfns. Those pages should be reported as dirty before
>   unmap, so that VFIO user space application can copy content of those
>   pages from source to destination.
> 
> * Patch 8 detect if IOMMU capable device driver is smart to report pages
>   to be marked dirty by pinning pages using vfio_pin_pages() API.
> 
> 
> Yet TODO:
> Since there is no device which has hardware support for system memmory
> dirty bitmap tracking, right now there is no other API from vendor driver
> to VFIO IOMMU module to report dirty pages. In future, when such hardware
> support will be implemented, an API will be required such that vendor
> driver could report dirty pages to VFIO module during migration phases.
> 
> Adding revision history from previous QEMU patch set to understand KABI
> changes done till now
> 
> v21 -> v22
> - Fixed issue raised by Alex :
> https://lore.kernel.org/kvm/20200515163307.72951dd2@w520.home/
> 
> v20 -> v21
> - Added checkin for GET_BITMAP ioctl for vfio_dma boundaries.
> - Updated unmap ioctl function - as suggested by Alex.
> - Updated comments in DIRTY_TRACKING ioctl definition - as suggested by
>   Cornelia.
> 
> v19 -> v20
> - Fixed ioctl to get dirty bitmap to get bitmap of multiple vfio_dmas
> - Fixed unmap ioctl to get dirty bitmap of multiple vfio_dmas.
> - Removed flag definition from migration capability.
> 
> v18 -> v19
> - Updated migration capability with supported page sizes bitmap for dirty
>   page tracking and  maximum bitmap size supported by kernel module.
> - Added patch to calculate and cache pgsize_bitmap when iommu->domain_list
>   is updated.
> - Removed extra buffers added in previous version for bitmap manipulation
>   and optimised the code.
> 
> v17 -> v18
> - Add migration capability to the capability chain for VFIO_IOMMU_GET_INFO
>   ioctl
> - Updated UMAP_DMA ioctl to return bitmap of multiple vfio_dma
> 
> v16 -> v17
> - Fixed errors reported by kbuild test robot <lkp@intel.com> on i386
> 
> v15 -> v16
> - Minor edits and nit picks (Auger Eric)
> - On copying bitmap to user, re-populated bitmap only for pinned pages,
>   excluding unmapped pages and CPU dirtied pages.
> - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
>   https://lkml.org/lkml/2020/3/12/1255
> 
> v14 -> v15
> - Minor edits and nit picks.
> - In the verification of user allocated bitmap memory, added check of
>    maximum size.
> - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
>   https://lkml.org/lkml/2020/3/12/1255
> 
> v13 -> v14
> - Added struct vfio_bitmap to kabi. updated structure
>   vfio_iommu_type1_dirty_bitmap_get and vfio_iommu_type1_dma_unmap.
> - All small changes suggested by Alex.
> - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
>   https://lkml.org/lkml/2020/3/12/1255
> 
> v12 -> v13
> - Changed bitmap allocation in vfio_iommu_type1 to per vfio_dma
> - Changed VFIO_IOMMU_DIRTY_PAGES ioctl behaviour to be per vfio_dma range.
> - Changed vfio_iommu_type1_dirty_bitmap structure to have separate data
>   field.
> 
> v11 -> v12
> - Changed bitmap allocation in vfio_iommu_type1.
> - Remove atomicity of ref_count.
> - Updated comments for migration device state structure about error
>   reporting.
> - Nit picks from v11 reviews
> 
> v10 -> v11
> - Fix pin pages API to free vpfn if it is marked as unpinned tracking page.
> - Added proposal to detect if IOMMU capable device calls external pin pages
>   API to mark pages dirty.
> - Nit picks from v10 reviews
> 
> v9 -> v10:
> - Updated existing VFIO_IOMMU_UNMAP_DMA ioctl to get dirty pages bitmap
>   during unmap while migration is active
> - Added flag in VFIO_IOMMU_GET_INFO to indicate driver support dirty page
>   tracking.
> - If iommu_mapped, mark all pages dirty.
> - Added unpinned pages tracking while migration is active.
> - Updated comments for migration device state structure with bit
>   combination table and state transition details.
> 
> v8 -> v9:
> - Split patch set in 2 sets, Kernel and QEMU.
> - Dirty pages bitmap is queried from IOMMU container rather than from
>   vendor driver for per device. Added 2 ioctls to achieve this.
> 
> v7 -> v8:
> - Updated comments for KABI
> - Added BAR address validation check during PCI device's config space load
>   as suggested by Dr. David Alan Gilbert.
> - Changed vfio_migration_set_state() to set or clear device state flags.
> - Some nit fixes.
> 
> v6 -> v7:
> - Fix build failures.
> 
> v5 -> v6:
> - Fix build failure.
> 
> v4 -> v5:
> - Added decriptive comment about the sequence of access of members of
>   structure vfio_device_migration_info to be followed based on Alex's
>   suggestion
> - Updated get dirty pages sequence.
> - As per Cornelia Huck's suggestion, added callbacks to VFIODeviceOps to
>   get_object, save_config and load_config.
> - Fixed multiple nit picks.
> - Tested live migration with multiple vfio device assigned to a VM.
> 
> v3 -> v4:
> - Added one more bit for _RESUMING flag to be set explicitly.
> - data_offset field is read-only for user space application.
> - data_size is read for every iteration before reading data from migration,
>   that is removed assumption that data will be till end of migration
>   region.
> - If vendor driver supports mappable sparsed region, map those region
>   during setup state of save/load, similarly unmap those from cleanup
>   routines.
> - Handles race condition that causes data corruption in migration region
>   during save device state by adding mutex and serialiaing save_buffer and
>   get_dirty_pages routines.
> - Skip called get_dirty_pages routine for mapped MMIO region of device.
> - Added trace events.
> - Split into multiple functional patches.
> 
> v2 -> v3:
> - Removed enum of VFIO device states. Defined VFIO device state with 2
>   bits.
> - Re-structured vfio_device_migration_info to keep it minimal and defined
>   action on read and write access on its members.
> 
> v1 -> v2:
> - Defined MIGRATION region type and sub-type which should be used with
>   region type capability.
> - Re-structured vfio_device_migration_info. This structure will be placed
>   at 0th offset of migration region.
> - Replaced ioctl with read/write for trapped part of migration region.
> - Added both type of access support, trapped or mmapped, for data section
>   of the region.
> - Moved PCI device functions to pci file.
> - Added iteration to get dirty page bitmap until bitmap for all requested
>   pages are copied.
> 
> Thanks,
> Kirti
> 
> 
> 
> 
> Kirti Wankhede (8):
>   vfio: UAPI for migration interface for device state
>   vfio iommu: Remove atomicity of ref_count of pinned pages
>   vfio iommu: Cache pgsize_bitmap in struct vfio_iommu
>   vfio iommu: Add ioctl definition for dirty pages tracking
>   vfio iommu: Implementation of ioctl for dirty pages tracking
>   vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
>   vfio iommu: Add migration capability to report supported features
>   vfio: Selective dirty page tracking if IOMMU backed device pins pages
> 
>  drivers/vfio/vfio.c             |  13 +-
>  drivers/vfio/vfio_iommu_type1.c | 576 ++++++++++++++++++++++++++++++++++++----
>  include/linux/vfio.h            |   4 +-
>  include/uapi/linux/vfio.h       | 315 ++++++++++++++++++++++
>  4 files changed, 849 insertions(+), 59 deletions(-)
> 


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-19 16:58 ` [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices Alex Williamson
@ 2020-05-20  2:55   ` Yan Zhao
  2020-05-20 13:40     ` Kirti Wankhede
  2020-05-25  6:59   ` Yan Zhao
  1 sibling, 1 reply; 40+ messages in thread
From: Yan Zhao @ 2020-05-20  2:55 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, qemu-devel, kvm

On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
> Hi folks,
> 
> My impression is that we're getting pretty close to a workable
> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> also have a matching QEMU series and a proposal for a new i40e
> consumer, as well as I assume GVT-g updates happening internally at
> Intel.  I expect all of the latter needs further review and discussion,
> but we should be at the point where we can validate these proposed
> kernel interfaces.  Therefore I'd like to make a call for reviews so
> that we can get this wrapped up for the v5.8 merge window.  I know
> Connie has some outstanding documentation comments and I'd like to make
> sure everyone has an opportunity to check that their comments have been
> addressed and we don't discover any new blocking issues.  Please send
> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> interface and implementation.  Thanks!
> 
Hi Alex and Kirti,
after porting to the QEMU v22 and kernel v22 series, we found that it
cannot even pass a basic live migration test; it fails with an error like

"Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"

Thanks
Yan

> 
> On Mon, 18 May 2020 11:26:29 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
> > Hi,
> > 
> > This patch set adds:
> > * IOCTL VFIO_IOMMU_DIRTY_PAGES to get dirty pages bitmap with
> >   respect to IOMMU container rather than per device. All pages pinned by
> >   vendor driver through vfio_pin_pages external API has to be marked as
> >   dirty during  migration. When IOMMU capable device is present in the
> >   container and all pages are pinned and mapped, then all pages are marked
> >   dirty.
> >   When there are CPU writes, CPU dirty page tracking can identify dirtied
> >   pages, but any page pinned by vendor driver can also be written by
> >   device. As of now there is no device which has hardware support for
> >   dirty page tracking. So all pages which are pinned should be considered
> >   as dirty.
> >   This ioctl is also used to start/stop dirty pages tracking for pinned and
> >   unpinned pages while migration is active.
> > 
> > * Updated IOCTL VFIO_IOMMU_UNMAP_DMA to get dirty pages bitmap before
> >   unmapping IO virtual address range.
> >   With vIOMMU, during pre-copy phase of migration, while CPUs are still
> >   running, IO virtual address unmap can happen while device still keeping
> >   reference of guest pfns. Those pages should be reported as dirty before
> >   unmap, so that VFIO user space application can copy content of those
> >   pages from source to destination.
> > 
> > * Patch 8 detect if IOMMU capable device driver is smart to report pages
> >   to be marked dirty by pinning pages using vfio_pin_pages() API.
> > 
> > 
> > Yet TODO:
> > Since there is no device which has hardware support for system memmory
> > dirty bitmap tracking, right now there is no other API from vendor driver
> > to VFIO IOMMU module to report dirty pages. In future, when such hardware
> > support will be implemented, an API will be required such that vendor
> > driver could report dirty pages to VFIO module during migration phases.
> > 
> > Adding revision history from previous QEMU patch set to understand KABI
> > changes done till now
> > 
> > v21 -> v22
> > - Fixed issue raised by Alex :
> > https://lore.kernel.org/kvm/20200515163307.72951dd2@w520.home/
> > 
> > v20 -> v21
> > - Added checkin for GET_BITMAP ioctl for vfio_dma boundaries.
> > - Updated unmap ioctl function - as suggested by Alex.
> > - Updated comments in DIRTY_TRACKING ioctl definition - as suggested by
> >   Cornelia.
> > 
> > v19 -> v20
> > - Fixed ioctl to get dirty bitmap to get bitmap of multiple vfio_dmas
> > - Fixed unmap ioctl to get dirty bitmap of multiple vfio_dmas.
> > - Removed flag definition from migration capability.
> > 
> > v18 -> v19
> > - Updated migration capability with supported page sizes bitmap for dirty
> >   page tracking and  maximum bitmap size supported by kernel module.
> > - Added patch to calculate and cache pgsize_bitmap when iommu->domain_list
> >   is updated.
> > - Removed extra buffers added in previous version for bitmap manipulation
> >   and optimised the code.
> > 
> > v17 -> v18
> > - Add migration capability to the capability chain for VFIO_IOMMU_GET_INFO
> >   ioctl
> > - Updated UMAP_DMA ioctl to return bitmap of multiple vfio_dma
> > 
> > v16 -> v17
> > - Fixed errors reported by kbuild test robot <lkp@intel.com> on i386
> > 
> > v15 -> v16
> > - Minor edits and nit picks (Auger Eric)
> > - On copying bitmap to user, re-populated bitmap only for pinned pages,
> >   excluding unmapped pages and CPU dirtied pages.
> > - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
> >   https://lkml.org/lkml/2020/3/12/1255
> > 
> > v14 -> v15
> > - Minor edits and nit picks.
> > - In the verification of user allocated bitmap memory, added check of
> >    maximum size.
> > - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
> >   https://lkml.org/lkml/2020/3/12/1255
> > 
> > v13 -> v14
> > - Added struct vfio_bitmap to kabi. updated structure
> >   vfio_iommu_type1_dirty_bitmap_get and vfio_iommu_type1_dma_unmap.
> > - All small changes suggested by Alex.
> > - Patches are on tag: next-20200318 and 1-3 patches from Yan's series
> >   https://lkml.org/lkml/2020/3/12/1255
> > 
> > v12 -> v13
> > - Changed bitmap allocation in vfio_iommu_type1 to per vfio_dma
> > - Changed VFIO_IOMMU_DIRTY_PAGES ioctl behaviour to be per vfio_dma range.
> > - Changed vfio_iommu_type1_dirty_bitmap structure to have separate data
> >   field.
> > 
> > v11 -> v12
> > - Changed bitmap allocation in vfio_iommu_type1.
> > - Remove atomicity of ref_count.
> > - Updated comments for migration device state structure about error
> >   reporting.
> > - Nit picks from v11 reviews
> > 
> > v10 -> v11
> > - Fix pin pages API to free vpfn if it is marked as unpinned tracking page.
> > - Added proposal to detect if IOMMU capable device calls external pin pages
> >   API to mark pages dirty.
> > - Nit picks from v10 reviews
> > 
> > v9 -> v10:
> > - Updated existing VFIO_IOMMU_UNMAP_DMA ioctl to get dirty pages bitmap
> >   during unmap while migration is active
> > - Added flag in VFIO_IOMMU_GET_INFO to indicate driver support dirty page
> >   tracking.
> > - If iommu_mapped, mark all pages dirty.
> > - Added unpinned pages tracking while migration is active.
> > - Updated comments for migration device state structure with bit
> >   combination table and state transition details.
> > 
> > v8 -> v9:
> > - Split patch set in 2 sets, Kernel and QEMU.
> > - Dirty pages bitmap is queried from IOMMU container rather than from
> >   vendor driver for per device. Added 2 ioctls to achieve this.
> > 
> > v7 -> v8:
> > - Updated comments for KABI
> > - Added BAR address validation check during PCI device's config space load
> >   as suggested by Dr. David Alan Gilbert.
> > - Changed vfio_migration_set_state() to set or clear device state flags.
> > - Some nit fixes.
> > 
> > v6 -> v7:
> > - Fix build failures.
> > 
> > v5 -> v6:
> > - Fix build failure.
> > 
> > v4 -> v5:
> > - Added decriptive comment about the sequence of access of members of
> >   structure vfio_device_migration_info to be followed based on Alex's
> >   suggestion
> > - Updated get dirty pages sequence.
> > - As per Cornelia Huck's suggestion, added callbacks to VFIODeviceOps to
> >   get_object, save_config and load_config.
> > - Fixed multiple nit picks.
> > - Tested live migration with multiple vfio device assigned to a VM.
> > 
> > v3 -> v4:
> > - Added one more bit for _RESUMING flag to be set explicitly.
> > - data_offset field is read-only for user space application.
> > - data_size is read for every iteration before reading data from migration,
> >   that is removed assumption that data will be till end of migration
> >   region.
> > - If vendor driver supports mappable sparsed region, map those region
> >   during setup state of save/load, similarly unmap those from cleanup
> >   routines.
> > - Handles race condition that causes data corruption in migration region
> >   during save device state by adding mutex and serialiaing save_buffer and
> >   get_dirty_pages routines.
> > - Skip called get_dirty_pages routine for mapped MMIO region of device.
> > - Added trace events.
> > - Split into multiple functional patches.
> > 
> > v2 -> v3:
> > - Removed enum of VFIO device states. Defined VFIO device state with 2
> >   bits.
> > - Re-structured vfio_device_migration_info to keep it minimal and defined
> >   action on read and write access on its members.
> > 
> > v1 -> v2:
> > - Defined MIGRATION region type and sub-type which should be used with
> >   region type capability.
> > - Re-structured vfio_device_migration_info. This structure will be placed
> >   at 0th offset of migration region.
> > - Replaced ioctl with read/write for trapped part of migration region.
> > - Added both type of access support, trapped or mmapped, for data section
> >   of the region.
> > - Moved PCI device functions to pci file.
> > - Added iteration to get dirty page bitmap until bitmap for all requested
> >   pages are copied.
> > 
> > Thanks,
> > Kirti
> > 
> > 
> > 
> > 
> > Kirti Wankhede (8):
> >   vfio: UAPI for migration interface for device state
> >   vfio iommu: Remove atomicity of ref_count of pinned pages
> >   vfio iommu: Cache pgsize_bitmap in struct vfio_iommu
> >   vfio iommu: Add ioctl definition for dirty pages tracking
> >   vfio iommu: Implementation of ioctl for dirty pages tracking
> >   vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
> >   vfio iommu: Add migration capability to report supported features
> >   vfio: Selective dirty page tracking if IOMMU backed device pins pages
> > 
> >  drivers/vfio/vfio.c             |  13 +-
> >  drivers/vfio/vfio_iommu_type1.c | 576 ++++++++++++++++++++++++++++++++++++----
> >  include/linux/vfio.h            |   4 +-
> >  include/uapi/linux/vfio.h       | 315 ++++++++++++++++++++++
> >  4 files changed, 849 insertions(+), 59 deletions(-)
> > 
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 3/8] vfio iommu: Cache pgsize_bitmap in struct vfio_iommu
  2020-05-18  5:56 ` [PATCH Kernel v22 3/8] vfio iommu: Cache pgsize_bitmap in struct vfio_iommu Kirti Wankhede
@ 2020-05-20 10:08   ` Cornelia Huck
  2020-05-20 14:46     ` Kirti Wankhede
  0 siblings, 1 reply; 40+ messages in thread
From: Cornelia Huck @ 2020-05-20 10:08 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: alex.williamson, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, yan.y.zhao, qemu-devel, kvm

On Mon, 18 May 2020 11:26:32 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Calculate and cache pgsize_bitmap when iommu->domain_list is updated
> and iommu->external_domain is set for mdev device.
> Add iommu->lock protection when cached pgsize_bitmap is accessed.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 88 +++++++++++++++++++++++------------------
>  1 file changed, 49 insertions(+), 39 deletions(-)
> 

(...)

> @@ -805,15 +806,14 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>  	iommu->dma_avail++;
>  }
>  
> -static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> +static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)

Minor nit: I'd have renamed this function to
vfio_update_pgsize_bitmap().

>  {
>  	struct vfio_domain *domain;
> -	unsigned long bitmap = ULONG_MAX;
>  
> -	mutex_lock(&iommu->lock);
> +	iommu->pgsize_bitmap = ULONG_MAX;
> +
>  	list_for_each_entry(domain, &iommu->domain_list, next)
> -		bitmap &= domain->domain->pgsize_bitmap;
> -	mutex_unlock(&iommu->lock);
> +		iommu->pgsize_bitmap &= domain->domain->pgsize_bitmap;
>  
>  	/*
>  	 * In case the IOMMU supports page sizes smaller than PAGE_SIZE

(...)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 6/8] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-05-19  6:54   ` Kirti Wankhede
@ 2020-05-20 10:27     ` Cornelia Huck
  2020-05-20 15:16       ` Kirti Wankhede
  0 siblings, 1 reply; 40+ messages in thread
From: Cornelia Huck @ 2020-05-20 10:27 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: alex.williamson, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, yan.y.zhao, qemu-devel, kvm

On Tue, 19 May 2020 12:24:13 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> DMA mapped pages, including those pinned by mdev vendor drivers, might
> get unpinned and unmapped while migration is active and device is still
> running. For example, in pre-copy phase while guest driver could access
> those pages, host device or vendor driver can dirty these mapped pages.
> Such pages should be marked dirty so as to maintain memory consistency
> for a user making use of dirty page tracking.
> 
> To get bitmap during unmap, user should allocate memory for bitmap, set
> it all zeros, set size of allocated memory, set page size to be
> considered for bitmap and set flag VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 62 +++++++++++++++++++++++++++++++++--------
>  include/uapi/linux/vfio.h       | 10 +++++++
>  2 files changed, 61 insertions(+), 11 deletions(-)

(...)

> @@ -1085,6 +1093,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  			ret = -EINVAL;
>  			goto unlock;
>  		}
> +

Nit: unrelated whitespace change.

>  		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
>  		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
>  			ret = -EINVAL;

(...)

> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 4850c1fef1f8..a1dd2150971e 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -1048,12 +1048,22 @@ struct vfio_bitmap {
>   * field.  No guarantee is made to the user that arbitrary unmaps of iova
>   * or size different from those used in the original mapping call will
>   * succeed.
> + * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap

s/dirty bitmap/the dirty bitmap/

> + * before unmapping IO virtual addresses. When this flag is set, user must

s/user/the user/

> + * provide data[] as structure vfio_bitmap. User must allocate memory to get

"provide a struct vfio_bitmap in data[]" ?


> + * bitmap, zero the bitmap memory and must set size of allocated memory in
> + * vfio_bitmap.size field.

"The user must provide zero-allocated memory via vfio_bitmap.data and
its size in the vfio_bitmap.size field." ?


> A bit in bitmap represents one page of user provided

s/bitmap/the bitmap/

> + * page size in 'pgsize', consecutively starting from iova offset. Bit set

s/Bit set/A set bit/

> + * indicates page at that offset from iova is dirty. Bitmap of pages in the

s/indicates page/indicates that the page/

> + * range of unmapped size is returned in vfio_bitmap.data

"A bitmap of the pages in the range of the unmapped size is returned in
the user-provided vfio_bitmap.data." ?

>   */
>  struct vfio_iommu_type1_dma_unmap {
>  	__u32	argsz;
>  	__u32	flags;
> +#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
>  	__u64	iova;				/* IO virtual address */
>  	__u64	size;				/* Size of mapping (bytes) */
> +	__u8    data[];
>  };
>  
>  #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)

With the nits addressed,
Reviewed-by: Cornelia Huck <cohuck@redhat.com>


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 7/8] vfio iommu: Add migration capability to report supported features
  2020-05-18  5:56 ` [PATCH Kernel v22 7/8] vfio iommu: Add migration capability to report supported features Kirti Wankhede
@ 2020-05-20 10:42   ` Cornelia Huck
  2020-05-20 15:23     ` Kirti Wankhede
  0 siblings, 1 reply; 40+ messages in thread
From: Cornelia Huck @ 2020-05-20 10:42 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: alex.williamson, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, yan.y.zhao, qemu-devel, kvm

On Mon, 18 May 2020 11:26:36 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Added migration capability in IOMMU info chain.
> User application should check IOMMU info chain for migration capability
> to use dirty page tracking feature provided by kernel module.
> User application must check page sizes supported and maximum dirty
> bitmap size returned by this capability structure for ioctls used to get
> dirty bitmap.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 23 ++++++++++++++++++++++-
>  include/uapi/linux/vfio.h       | 22 ++++++++++++++++++++++
>  2 files changed, 44 insertions(+), 1 deletion(-)

(...)

> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index a1dd2150971e..aa8aa9dcf02a 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -1013,6 +1013,28 @@ struct vfio_iommu_type1_info_cap_iova_range {
>  	struct	vfio_iova_range iova_ranges[];
>  };
>  
> +/*
> + * The migration capability allows to report supported features for migration.
> + *
> + * The structures below define version 1 of this capability.
> + *
> + * The existence of this capability indicates IOMMU kernel driver supports

s/indicates/indicates that/

> + * dirty page tracking.
> + *
> + * pgsize_bitmap: Kernel driver returns supported page sizes bitmap for dirty
> + * page tracking.

"bitmap of supported page sizes for dirty page tracking" ?

> + * max_dirty_bitmap_size: Kernel driver returns maximum supported dirty bitmap
> + * size in bytes to be used by user application for ioctls to get dirty bitmap.

"maximum supported dirty bitmap size in bytes that can be used by user
applications when getting the dirty bitmap" ?

> + */
> +#define VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION  1
> +
> +struct vfio_iommu_type1_info_cap_migration {
> +	struct	vfio_info_cap_header header;
> +	__u32	flags;
> +	__u64	pgsize_bitmap;
> +	__u64	max_dirty_bitmap_size;		/* in bytes */
> +};
> +
>  #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
>  
>  /**

Reviewed-by: Cornelia Huck <cohuck@redhat.com>


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-20  2:55   ` Yan Zhao
@ 2020-05-20 13:40     ` Kirti Wankhede
  2020-05-20 16:46       ` Alex Williamson
  0 siblings, 1 reply; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-20 13:40 UTC (permalink / raw)
  To: Yan Zhao, Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	qemu-devel, kvm



On 5/20/2020 8:25 AM, Yan Zhao wrote:
> On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
>> Hi folks,
>>
>> My impression is that we're getting pretty close to a workable
>> implementation here with v22 plus respins of patches 5, 6, and 8.  We
>> also have a matching QEMU series and a proposal for a new i40e
>> consumer, as well as I assume GVT-g updates happening internally at
>> Intel.  I expect all of the latter needs further review and discussion,
>> but we should be at the point where we can validate these proposed
>> kernel interfaces.  Therefore I'd like to make a call for reviews so
>> that we can get this wrapped up for the v5.8 merge window.  I know
>> Connie has some outstanding documentation comments and I'd like to make
>> sure everyone has an opportunity to check that their comments have been
>> addressed and we don't discover any new blocking issues.  Please send
>> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
>> interface and implementation.  Thanks!
>>
> hi Alex and Kirti,
> after porting to qemu v22 and kernel v22, it is found out that
> it can not even pass basic live migration test with error like
> 
> "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
> 

Thanks for testing, Yan.
I think the last-minute change in the patch below caused this failure:

https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankhede@nvidia.com/

 > 	if (dma->iova > iova + size)
 > 		break;

Surprisingly, in my basic testing with 2G of system memory QEMU didn't
abort in g_free(), but I do hit this with larger system memory.
With the above change, that function iterated into the next vfio_dma as
well. The check should be as below:

-               if (dma->iova > iova + size)
+               if (dma->iova > iova + size -1)
                         break;
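
As a quick standalone illustration (a sketch only; the adjacent mapping
below is hypothetical, reusing the iova/size from Yan's log): a vfio_dma
starting exactly at iova + size lies outside the requested range, and
only the corrected comparison stops the walk before that mapping's bits
are copied past the user-supplied bitmap.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* requested range from the failing GET_BITMAP call */
	uint64_t iova = 0xca000, size = 0x3000;
	/* hypothetical vfio_dma starting right at the end of the range */
	uint64_t dma_iova = iova + size;	/* 0xcd000 */

	/* old check: prints 0, the loop keeps going and overruns the bitmap */
	printf("old check stops walk: %d\n", dma_iova > iova + size);
	/* new check: prints 1, the adjacent mapping terminates the walk */
	printf("new check stops walk: %d\n", dma_iova > iova + size - 1);
	return 0;
}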

Another fix is in QEMU.
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html

 > > +        range->bitmap.size = ROUND_UP(pages, 64) / 8;
 >
 > ROUND_UP(npages/8, sizeof(u64))?
 >

If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.

Changing it as below:

-        range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
+        range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * 
BITS_PER_BYTE) /
+                             BITS_PER_BYTE;
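
For what it's worth, a tiny standalone check of the arithmetic (ROUND_UP
is redefined locally here with the usual round-up-to-a-multiple
semantics, so this is a sketch rather than the exact QEMU macro):

#include <stdio.h>
#include <stdint.h>

#define BITS_PER_BYTE	8
#define ROUND_UP(n, d)	((((n) + (d) - 1) / (d)) * (d))

int main(void)
{
	uint64_t pages = 3;	/* e.g. a 0x3000 range with 4K pages */

	uint64_t old_size = ROUND_UP(pages / 8, sizeof(uint64_t));
	uint64_t new_size = ROUND_UP(pages, sizeof(uint64_t) * BITS_PER_BYTE) /
			    BITS_PER_BYTE;

	/* prints "old: 0 bytes, new: 8 bytes" */
	printf("old: %llu bytes, new: %llu bytes\n",
	       (unsigned long long)old_size, (unsigned long long)new_size);
	return 0;
}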

I'm updating the patches with these fixes and Cornelia's suggestions soon.

Due to shortage of time I may not be able to address all the concerns
raised on previous versions of the QEMU series; I'm trying to make the
QEMU-side code available so that others can test it with the latest
kernel changes. Don't worry, I will revisit the comments on the QEMU
patches. Right now the first priority is to test the kernel UAPI and
prepare the kernel patches for 5.8.

Thanks,
Kirti

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 3/8] vfio iommu: Cache pgsize_bitmap in struct vfio_iommu
  2020-05-20 10:08   ` Cornelia Huck
@ 2020-05-20 14:46     ` Kirti Wankhede
  0 siblings, 0 replies; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-20 14:46 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: alex.williamson, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, yan.y.zhao, qemu-devel, kvm



On 5/20/2020 3:38 PM, Cornelia Huck wrote:
> On Mon, 18 May 2020 11:26:32 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> Calculate and cache pgsize_bitmap when iommu->domain_list is updated
>> and iommu->external_domain is set for mdev device.
>> Add iommu->lock protection when cached pgsize_bitmap is accessed.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 88 +++++++++++++++++++++++------------------
>>   1 file changed, 49 insertions(+), 39 deletions(-)
>>
> 
> (...)
> 
>> @@ -805,15 +806,14 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>>   	iommu->dma_avail++;
>>   }
>>   
>> -static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>> +static void vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> 
> Minor nit: I'd have renamed this function to
> vfio_update_pgsize_bitmap().
> 

Done.

>>   {
>>   	struct vfio_domain *domain;
>> -	unsigned long bitmap = ULONG_MAX;
>>   
>> -	mutex_lock(&iommu->lock);
>> +	iommu->pgsize_bitmap = ULONG_MAX;
>> +
>>   	list_for_each_entry(domain, &iommu->domain_list, next)
>> -		bitmap &= domain->domain->pgsize_bitmap;
>> -	mutex_unlock(&iommu->lock);
>> +		iommu->pgsize_bitmap &= domain->domain->pgsize_bitmap;
>>   
>>   	/*
>>   	 * In case the IOMMU supports page sizes smaller than PAGE_SIZE
> 
> (...)
> 
> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
> 

Thanks.

Kirti

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 6/8] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-05-20 10:27     ` Cornelia Huck
@ 2020-05-20 15:16       ` Kirti Wankhede
  0 siblings, 0 replies; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-20 15:16 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: alex.williamson, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, yan.y.zhao, qemu-devel, kvm



On 5/20/2020 3:57 PM, Cornelia Huck wrote:
> On Tue, 19 May 2020 12:24:13 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> DMA mapped pages, including those pinned by mdev vendor drivers, might
>> get unpinned and unmapped while migration is active and device is still
>> running. For example, in pre-copy phase while guest driver could access
>> those pages, host device or vendor driver can dirty these mapped pages.
>> Such pages should be marked dirty so as to maintain memory consistency
>> for a user making use of dirty page tracking.
>>
>> To get bitmap during unmap, user should allocate memory for bitmap, set
>> it all zeros, set size of allocated memory, set page size to be
>> considered for bitmap and set flag VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 62 +++++++++++++++++++++++++++++++++--------
>>   include/uapi/linux/vfio.h       | 10 +++++++
>>   2 files changed, 61 insertions(+), 11 deletions(-)
> 
> (...)
> 
>> @@ -1085,6 +1093,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>   			ret = -EINVAL;
>>   			goto unlock;
>>   		}
>> +
> 
> Nit: unrelated whitespace change.
> 
>>   		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
>>   		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
>>   			ret = -EINVAL;
> 
> (...)
> 
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 4850c1fef1f8..a1dd2150971e 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -1048,12 +1048,22 @@ struct vfio_bitmap {
>>    * field.  No guarantee is made to the user that arbitrary unmaps of iova
>>    * or size different from those used in the original mapping call will
>>    * succeed.
>> + * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap
> 
> s/dirty bitmap/the dirty bitmap/
> 
>> + * before unmapping IO virtual addresses. When this flag is set, user must
> 
> s/user/the user/
> 
>> + * provide data[] as structure vfio_bitmap. User must allocate memory to get
> 
> "provide a struct vfio_bitmap in data[]" ?
> 
> 
>> + * bitmap, zero the bitmap memory and must set size of allocated memory in
>> + * vfio_bitmap.size field.
> 
> "The user must provide zero-allocated memory via vfio_bitmap.data and
> its size in the vfio_bitmap.size field." ?
> 
> 
>> A bit in bitmap represents one page of user provided
> 
> s/bitmap/the bitmap/
> 
>> + * page size in 'pgsize', consecutively starting from iova offset. Bit set
> 
> s/Bit set/A set bit/
> 
>> + * indicates page at that offset from iova is dirty. Bitmap of pages in the
> 
> s/indicates page/indicates that the page/
> 
>> + * range of unmapped size is returned in vfio_bitmap.data
> 
> "A bitmap of the pages in the range of the unmapped size is returned in
> the user-provided vfio_bitmap.data." ?
> 
>>    */
>>   struct vfio_iommu_type1_dma_unmap {
>>   	__u32	argsz;
>>   	__u32	flags;
>> +#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
>>   	__u64	iova;				/* IO virtual address */
>>   	__u64	size;				/* Size of mapping (bytes) */
>> +	__u8    data[];
>>   };
>>   
>>   #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
> 
> With the nits addressed,

Done.

> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
> 

Thanks.

Kirti

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 7/8] vfio iommu: Add migration capability to report supported features
  2020-05-20 10:42   ` Cornelia Huck
@ 2020-05-20 15:23     ` Kirti Wankhede
  0 siblings, 0 replies; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-20 15:23 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: alex.williamson, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, yan.y.zhao, qemu-devel, kvm



On 5/20/2020 4:12 PM, Cornelia Huck wrote:
> On Mon, 18 May 2020 11:26:36 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> Added migration capability in IOMMU info chain.
>> User application should check IOMMU info chain for migration capability
>> to use dirty page tracking feature provided by kernel module.
>> User application must check page sizes supported and maximum dirty
>> bitmap size returned by this capability structure for ioctls used to get
>> dirty bitmap.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 23 ++++++++++++++++++++++-
>>   include/uapi/linux/vfio.h       | 22 ++++++++++++++++++++++
>>   2 files changed, 44 insertions(+), 1 deletion(-)
> 
> (...)
> 
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index a1dd2150971e..aa8aa9dcf02a 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -1013,6 +1013,28 @@ struct vfio_iommu_type1_info_cap_iova_range {
>>   	struct	vfio_iova_range iova_ranges[];
>>   };
>>   
>> +/*
>> + * The migration capability allows to report supported features for migration.
>> + *
>> + * The structures below define version 1 of this capability.
>> + *
>> + * The existence of this capability indicates IOMMU kernel driver supports
> 
> s/indicates/indicates that/
> 
>> + * dirty page tracking.
>> + *
>> + * pgsize_bitmap: Kernel driver returns supported page sizes bitmap for dirty
>> + * page tracking.
> 
> "bitmap of supported page sizes for dirty page tracking" ?
> 
>> + * max_dirty_bitmap_size: Kernel driver returns maximum supported dirty bitmap
>> + * size in bytes to be used by user application for ioctls to get dirty bitmap.
> 
> "maximum supported dirty bitmap size in bytes that can be used by user
> applications when getting the dirty bitmap" ?
> 

Done.

>> + */
>> +#define VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION  1
>> +
>> +struct vfio_iommu_type1_info_cap_migration {
>> +	struct	vfio_info_cap_header header;
>> +	__u32	flags;
>> +	__u64	pgsize_bitmap;
>> +	__u64	max_dirty_bitmap_size;		/* in bytes */
>> +};
>> +
>>   #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
>>   
>>   /**
> 
> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
> 

Thanks.

Kirti
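
As a reference for user space, a rough sketch of walking the
VFIO_IOMMU_GET_INFO capability chain to find this capability (the helper name
is made up; this assumes the standard vfio_info_cap_header id/next chaining
and the vfio_iommu_type1_info_cap_migration layout quoted above):

#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static void query_migration_cap(int container)
{
        struct vfio_iommu_type1_info *info;
        struct vfio_info_cap_header *hdr;
        struct vfio_iommu_type1_info_cap_migration *cap = NULL;
        __u32 argsz = sizeof(*info), off;

        /* the first call reports the argsz needed to hold the capability chain */
        info = calloc(1, argsz);
        info->argsz = argsz;
        ioctl(container, VFIO_IOMMU_GET_INFO, info);

        if (info->argsz > argsz) {
                argsz = info->argsz;
                free(info);
                info = calloc(1, argsz);
                info->argsz = argsz;
                ioctl(container, VFIO_IOMMU_GET_INFO, info);
        }

        if (info->flags & VFIO_IOMMU_INFO_CAPS) {
                for (off = info->cap_offset; off; off = hdr->next) {
                        hdr = (struct vfio_info_cap_header *)((char *)info + off);
                        if (hdr->id == VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION) {
                                cap = (void *)hdr;
                                break;
                        }
                }
        }

        if (cap)
                printf("dirty tracking: pgsizes 0x%llx, max bitmap %llu bytes\n",
                       (unsigned long long)cap->pgsize_bitmap,
                       (unsigned long long)cap->max_dirty_bitmap_size);
        else
                printf("no migration capability: dirty tracking unsupported\n");

        free(info);
}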

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-20 13:40     ` Kirti Wankhede
@ 2020-05-20 16:46       ` Alex Williamson
  2020-05-21  5:08         ` Yan Zhao
  0 siblings, 1 reply; 40+ messages in thread
From: Alex Williamson @ 2020-05-20 16:46 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: Yan Zhao, cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu,
	mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, qemu-devel, kvm

On Wed, 20 May 2020 19:10:07 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 5/20/2020 8:25 AM, Yan Zhao wrote:
> > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> >> Hi folks,
> >>
> >> My impression is that we're getting pretty close to a workable
> >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> >> also have a matching QEMU series and a proposal for a new i40e
> >> consumer, as well as I assume GVT-g updates happening internally at
> >> Intel.  I expect all of the latter needs further review and discussion,
> >> but we should be at the point where we can validate these proposed
> >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> >> that we can get this wrapped up for the v5.8 merge window.  I know
> >> Connie has some outstanding documentation comments and I'd like to make
> >> sure everyone has an opportunity to check that their comments have been
> >> addressed and we don't discover any new blocking issues.  Please send
> >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> >> interface and implementation.  Thanks!
> >>  
> > hi Alex and Kirti,
> > after porting to qemu v22 and kernel v22, it is found out that
> > it can not even pass basic live migration test with error like
> > 
> > "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
> >   
> 
> Thanks for testing Yan.
> I think last moment change in below cause this failure
> 
> https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankhede@nvidia.com/
> 
>  > 	if (dma->iova > iova + size)
>  > 		break;  
> 
> Surprisingly with my basic testing with 2G sys mem QEMU didn't raise 
> abort on g_free, but I do hit this with large sys mem.
> With above change, that function iterated through next vfio_dma as well. 
> Check should be as below:
> 
> -               if (dma->iova > iova + size)
> +               if (dma->iova > iova + size -1)


Or just:

	if (dma->iova >= iova + size)

Thanks,
Alex
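
A tiny standalone check (not the kernel code) of the boundary condition being
discussed: with iova = 0x1000 and size = 0x1000 the queried range is
[0x1000, 0x2000), so a vfio_dma starting exactly at 0x2000 must stop the
walk, which only the ">=" form guarantees:

#include <assert.h>
#include <stdint.h>

static int past_range(uint64_t dma_iova, uint64_t iova, uint64_t size)
{
        return dma_iova >= iova + size;         /* the suggested form */
}

int main(void)
{
        /* a vfio_dma starting right at the end of the range is outside it */
        assert(past_range(0x2000, 0x1000, 0x1000));
        /* ... while the earlier "dma->iova > iova + size" check keeps going */
        assert(!(0x2000 > 0x1000 + 0x1000));
        return 0;
}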


>                          break;
> 
> Another fix is in QEMU.
> https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html
> 
>  > > +        range->bitmap.size = ROUND_UP(pages, 64) / 8;  
>  >
>  > ROUND_UP(npages/8, sizeof(u64))?
>  >  
> 
> If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.
> 
> Changing it as below
> 
> -        range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
> +        range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * 
> BITS_PER_BYTE) /
> +                             BITS_PER_BYTE;
> 
> I'm updating patches with these fixes and Cornelia's suggestion soon.
> 
> Due to short of time I may not be able to address all the concerns 
> raised on previous versions of QEMU, I'm trying make QEMU side code 
> available for testing for others with latest kernel changes. Don't 
> worry, I will revisit comments on QEMU patches. Right now first priority 
> is to test kernel UAPI and prepare kernel patches for 5.8
> 
> Thanks,
> Kirti
> 
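
For reference, a quick standalone check (not the QEMU code) of the two bitmap
sizing formulas quoted above: the original one truncates to zero for fewer
than 8 pages, while rounding the page count up to a multiple of 64 bits first
always yields at least one u64 worth of bitmap:

#include <assert.h>
#include <stdint.h>

#define BITS_PER_BYTE 8
/* generic round-up for this sketch; QEMU's macro assumes a power-of-two */
#define ROUND_UP(x, a) ((((x) + (a) - 1) / (a)) * (a))

static uint64_t bitmap_bytes(uint64_t pages)
{
        return ROUND_UP(pages, sizeof(uint64_t) * BITS_PER_BYTE) / BITS_PER_BYTE;
}

int main(void)
{
        assert(ROUND_UP(3 / 8, sizeof(uint64_t)) == 0);  /* the broken form */
        assert(bitmap_bytes(3) == 8);                    /* the fixed form */
        assert(bitmap_bytes(64) == 8);
        assert(bitmap_bytes(65) == 16);
        return 0;
}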


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-20 16:46       ` Alex Williamson
@ 2020-05-21  5:08         ` Yan Zhao
  2020-05-21  7:09           ` Kirti Wankhede
  0 siblings, 1 reply; 40+ messages in thread
From: Yan Zhao @ 2020-05-21  5:08 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, qemu-devel, kvm

On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:
> On Wed, 20 May 2020 19:10:07 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
> > On 5/20/2020 8:25 AM, Yan Zhao wrote:
> > > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> > >> Hi folks,
> > >>
> > >> My impression is that we're getting pretty close to a workable
> > >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> > >> also have a matching QEMU series and a proposal for a new i40e
> > >> consumer, as well as I assume GVT-g updates happening internally at
> > >> Intel.  I expect all of the latter needs further review and discussion,
> > >> but we should be at the point where we can validate these proposed
> > >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> > >> that we can get this wrapped up for the v5.8 merge window.  I know
> > >> Connie has some outstanding documentation comments and I'd like to make
> > >> sure everyone has an opportunity to check that their comments have been
> > >> addressed and we don't discover any new blocking issues.  Please send
> > >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> > >> interface and implementation.  Thanks!
> > >>  
> > > hi Alex and Kirti,
> > > after porting to qemu v22 and kernel v22, it is found out that
> > > it can not even pass basic live migration test with error like
> > > 
> > > "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
> > >   
> > 
> > Thanks for testing Yan.
> > I think last moment change in below cause this failure
> > 
> > https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankhede@nvidia.com/
> > 
> >  > 	if (dma->iova > iova + size)
> >  > 		break;  
> > 
> > Surprisingly with my basic testing with 2G sys mem QEMU didn't raise 
> > abort on g_free, but I do hit this with large sys mem.
> > With above change, that function iterated through next vfio_dma as well. 
> > Check should be as below:
> > 
> > -               if (dma->iova > iova + size)
> > +               if (dma->iova > iova + size -1)
> 
> 
> Or just:
> 
> 	if (dma->iova >= iova + size)
> 
> Thanks,
> Alex
> 
> 
> >                          break;
> > 
> > Another fix is in QEMU.
> > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html
> > 
> >  > > +        range->bitmap.size = ROUND_UP(pages, 64) / 8;  
> >  >
> >  > ROUND_UP(npages/8, sizeof(u64))?
> >  >  
> > 
> > If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.
> > 
> > Changing it as below
> > 
> > -        range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
> > +        range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * 
> > BITS_PER_BYTE) /
> > +                             BITS_PER_BYTE;
> > 
> > I'm updating patches with these fixes and Cornelia's suggestion soon.
> > 
> > Due to short of time I may not be able to address all the concerns 
> > raised on previous versions of QEMU, I'm trying make QEMU side code 
> > available for testing for others with latest kernel changes. Don't 
> > worry, I will revisit comments on QEMU patches. Right now first priority 
> > is to test kernel UAPI and prepare kernel patches for 5.8
> > 
>
hi Kirti
after updating kernel/qemu to v23, we still hit the two types of errors below
with just a basic migration test.
(the guest VM size is 2G for all reported bugs).


"Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"

or 

"qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
qemu-system-x86_64-lm: error while loading state section id 49(vfio)
qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"


Thanks
Yan

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-21  7:09           ` Kirti Wankhede
@ 2020-05-21  7:04             ` Yan Zhao
  2020-05-21  7:28               ` Kirti Wankhede
  2020-05-21  7:32               ` Kirti Wankhede
  0 siblings, 2 replies; 40+ messages in thread
From: Yan Zhao @ 2020-05-21  7:04 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: Alex Williamson, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, qemu-devel, kvm

On Thu, May 21, 2020 at 12:39:48PM +0530, Kirti Wankhede wrote:
> 
> 
> On 5/21/2020 10:38 AM, Yan Zhao wrote:
> > On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:
> > > On Wed, 20 May 2020 19:10:07 +0530
> > > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> > > 
> > > > On 5/20/2020 8:25 AM, Yan Zhao wrote:
> > > > > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
> > > > > > Hi folks,
> > > > > > 
> > > > > > My impression is that we're getting pretty close to a workable
> > > > > > implementation here with v22 plus respins of patches 5, 6, and 8.  We
> > > > > > also have a matching QEMU series and a proposal for a new i40e
> > > > > > consumer, as well as I assume GVT-g updates happening internally at
> > > > > > Intel.  I expect all of the latter needs further review and discussion,
> > > > > > but we should be at the point where we can validate these proposed
> > > > > > kernel interfaces.  Therefore I'd like to make a call for reviews so
> > > > > > that we can get this wrapped up for the v5.8 merge window.  I know
> > > > > > Connie has some outstanding documentation comments and I'd like to make
> > > > > > sure everyone has an opportunity to check that their comments have been
> > > > > > addressed and we don't discover any new blocking issues.  Please send
> > > > > > your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> > > > > > interface and implementation.  Thanks!
> > > > > hi Alex and Kirti,
> > > > > after porting to qemu v22 and kernel v22, it is found out that
> > > > > it can not even pass basic live migration test with error like
> > > > > 
> > > > > "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
> > > > 
> > > > Thanks for testing Yan.
> > > > I think last moment change in below cause this failure
> > > > 
> > > > https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankhede@nvidia.com/
> > > > 
> > > >   > 	if (dma->iova > iova + size)
> > > >   > 		break;
> > > > 
> > > > Surprisingly with my basic testing with 2G sys mem QEMU didn't raise
> > > > abort on g_free, but I do hit this with large sys mem.
> > > > With above change, that function iterated through next vfio_dma as well.
> > > > Check should be as below:
> > > > 
> > > > -               if (dma->iova > iova + size)
> > > > +               if (dma->iova > iova + size -1)
> > > 
> > > 
> > > Or just:
> > > 
> > > 	if (dma->iova >= iova + size)
> > > 
> > > Thanks,
> > > Alex
> > > 
> > > 
> > > >                           break;
> > > > 
> > > > Another fix is in QEMU.
> > > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html
> > > > 
> > > >   > > +        range->bitmap.size = ROUND_UP(pages, 64) / 8;
> > > >   >
> > > >   > ROUND_UP(npages/8, sizeof(u64))?
> > > >   >
> > > > 
> > > > If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.
> > > > 
> > > > Changing it as below
> > > > 
> > > > -        range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
> > > > +        range->bitmap.size = ROUND_UP(pages, sizeof(__u64) *
> > > > BITS_PER_BYTE) /
> > > > +                             BITS_PER_BYTE;
> > > > 
> > > > I'm updating patches with these fixes and Cornelia's suggestion soon.
> > > > 
> > > > Due to short of time I may not be able to address all the concerns
> > > > raised on previous versions of QEMU, I'm trying make QEMU side code
> > > > available for testing for others with latest kernel changes. Don't
> > > > worry, I will revisit comments on QEMU patches. Right now first priority
> > > > is to test kernel UAPI and prepare kernel patches for 5.8
> > > > 
> > > 
> > hi Kirti
> > by updating kernel/qemu to v23, still met below two types of errors.
> > just basic migration test.
> > (the guest VM size is 2G for all reported bugs).
> > 
> > "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
> > 
> 
> size doesn't look correct here, below check should be failing.
>  range.size & (iommu_pgsize - 1)
> 
> > or
> > 
> > "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> > qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> > qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> > 
> > 
> 
> Above error is from:
>         buf = g_try_malloc0(data_size);
>         if (!buf) {
>             error_report("%s: Error allocating buffer ", __func__);
>             return -ENOMEM;
>         }
> 
> Seems you are running out of memory?
>
no. my host memory is about 60G.
just migrate with command "migrate -d xxx" without speed limit.
FYI.

Yan

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-21  5:08         ` Yan Zhao
@ 2020-05-21  7:09           ` Kirti Wankhede
  2020-05-21  7:04             ` Yan Zhao
  0 siblings, 1 reply; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-21  7:09 UTC (permalink / raw)
  To: Yan Zhao, Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	qemu-devel, kvm



On 5/21/2020 10:38 AM, Yan Zhao wrote:
> On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:
>> On Wed, 20 May 2020 19:10:07 +0530
>> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>>
>>> On 5/20/2020 8:25 AM, Yan Zhao wrote:
>>>> On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
>>>>> Hi folks,
>>>>>
>>>>> My impression is that we're getting pretty close to a workable
>>>>> implementation here with v22 plus respins of patches 5, 6, and 8.  We
>>>>> also have a matching QEMU series and a proposal for a new i40e
>>>>> consumer, as well as I assume GVT-g updates happening internally at
>>>>> Intel.  I expect all of the latter needs further review and discussion,
>>>>> but we should be at the point where we can validate these proposed
>>>>> kernel interfaces.  Therefore I'd like to make a call for reviews so
>>>>> that we can get this wrapped up for the v5.8 merge window.  I know
>>>>> Connie has some outstanding documentation comments and I'd like to make
>>>>> sure everyone has an opportunity to check that their comments have been
>>>>> addressed and we don't discover any new blocking issues.  Please send
>>>>> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
>>>>> interface and implementation.  Thanks!
>>>>>   
>>>> hi Alex and Kirti,
>>>> after porting to qemu v22 and kernel v22, it is found out that
>>>> it can not even pass basic live migration test with error like
>>>>
>>>> "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
>>>>    
>>>
>>> Thanks for testing Yan.
>>> I think last moment change in below cause this failure
>>>
>>> https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankhede@nvidia.com/
>>>
>>>   > 	if (dma->iova > iova + size)
>>>   > 		break;
>>>
>>> Surprisingly with my basic testing with 2G sys mem QEMU didn't raise
>>> abort on g_free, but I do hit this with large sys mem.
>>> With above change, that function iterated through next vfio_dma as well.
>>> Check should be as below:
>>>
>>> -               if (dma->iova > iova + size)
>>> +               if (dma->iova > iova + size -1)
>>
>>
>> Or just:
>>
>> 	if (dma->iova >= iova + size)
>>
>> Thanks,
>> Alex
>>
>>
>>>                           break;
>>>
>>> Another fix is in QEMU.
>>> https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html
>>>
>>>   > > +        range->bitmap.size = ROUND_UP(pages, 64) / 8;
>>>   >
>>>   > ROUND_UP(npages/8, sizeof(u64))?
>>>   >
>>>
>>> If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.
>>>
>>> Changing it as below
>>>
>>> -        range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
>>> +        range->bitmap.size = ROUND_UP(pages, sizeof(__u64) *
>>> BITS_PER_BYTE) /
>>> +                             BITS_PER_BYTE;
>>>
>>> I'm updating patches with these fixes and Cornelia's suggestion soon.
>>>
>>> Due to short of time I may not be able to address all the concerns
>>> raised on previous versions of QEMU, I'm trying make QEMU side code
>>> available for testing for others with latest kernel changes. Don't
>>> worry, I will revisit comments on QEMU patches. Right now first priority
>>> is to test kernel UAPI and prepare kernel patches for 5.8
>>>
>>
> hi Kirti
> by updating kernel/qemu to v23, still met below two types of errors.
> just basic migration test.
> (the guest VM size is 2G for all reported bugs).
> 
> "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
> 

The size doesn't look correct here; the check below should be failing:
  range.size & (iommu_pgsize - 1)

> or
> 
> "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> 
> 

Above error is from:
         buf = g_try_malloc0(data_size);
         if (!buf) {
             error_report("%s: Error allocating buffer ", __func__);
             return -ENOMEM;
         }

Seems you are running out of memory?

Thanks,
Kirti
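
For reference, a tiny check (not the kernel code) of why a size like 0x3fb0
trips that alignment test with a 4KiB IOMMU page size, giving -EINVAL (err 22):

#include <assert.h>
#include <stdint.h>

int main(void)
{
        uint64_t iommu_pgsize = 4096;

        assert(0x3fb0 & (iommu_pgsize - 1));    /* 0x3fb0 & 0xfff == 0xfb0, fails */
        assert(!(0x3000 & (iommu_pgsize - 1))); /* an aligned size passes */
        return 0;
}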

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-21  7:04             ` Yan Zhao
@ 2020-05-21  7:28               ` Kirti Wankhede
  2020-05-21  7:32               ` Kirti Wankhede
  1 sibling, 0 replies; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-21  7:28 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Zhengxiao.zx, kevin.tian, yi.l.liu, cjia, kvm, eskultet,
	ziye.yang, qemu-devel, cohuck, shuangtai.tst, dgilbert,
	zhi.a.wang, mlevitsk, pasic, aik, Alex Williamson, eauger,
	felipe, jonathan.davies, changpeng.liu, Ken.Xue



On 5/21/2020 12:34 PM, Yan Zhao wrote:
> On Thu, May 21, 2020 at 12:39:48PM +0530, Kirti Wankhede wrote:
>>
>>
>> On 5/21/2020 10:38 AM, Yan Zhao wrote:
>>> On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:
>>>> On Wed, 20 May 2020 19:10:07 +0530
>>>> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>>>>
>>>>> On 5/20/2020 8:25 AM, Yan Zhao wrote:
>>>>>> On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
>>>>>>> Hi folks,
>>>>>>>
>>>>>>> My impression is that we're getting pretty close to a workable
>>>>>>> implementation here with v22 plus respins of patches 5, 6, and 8.  We
>>>>>>> also have a matching QEMU series and a proposal for a new i40e
>>>>>>> consumer, as well as I assume GVT-g updates happening internally at
>>>>>>> Intel.  I expect all of the latter needs further review and discussion,
>>>>>>> but we should be at the point where we can validate these proposed
>>>>>>> kernel interfaces.  Therefore I'd like to make a call for reviews so
>>>>>>> that we can get this wrapped up for the v5.8 merge window.  I know
>>>>>>> Connie has some outstanding documentation comments and I'd like to make
>>>>>>> sure everyone has an opportunity to check that their comments have been
>>>>>>> addressed and we don't discover any new blocking issues.  Please send
>>>>>>> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
>>>>>>> interface and implementation.  Thanks!
>>>>>> hi Alex and Kirti,
>>>>>> after porting to qemu v22 and kernel v22, it is found out that
>>>>>> it can not even pass basic live migration test with error like
>>>>>>
>>>>>> "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
>>>>>
>>>>> Thanks for testing Yan.
>>>>> I think last moment change in below cause this failure
>>>>>
>>>>> https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankhede@nvidia.com/
>>>>>
>>>>>    > 	if (dma->iova > iova + size)
>>>>>    > 		break;
>>>>>
>>>>> Surprisingly with my basic testing with 2G sys mem QEMU didn't raise
>>>>> abort on g_free, but I do hit this with large sys mem.
>>>>> With above change, that function iterated through next vfio_dma as well.
>>>>> Check should be as below:
>>>>>
>>>>> -               if (dma->iova > iova + size)
>>>>> +               if (dma->iova > iova + size -1)
>>>>
>>>>
>>>> Or just:
>>>>
>>>> 	if (dma->iova >= iova + size)
>>>>
>>>> Thanks,
>>>> Alex
>>>>
>>>>
>>>>>                            break;
>>>>>
>>>>> Another fix is in QEMU.
>>>>> https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html
>>>>>
>>>>>    > > +        range->bitmap.size = ROUND_UP(pages, 64) / 8;
>>>>>    >
>>>>>    > ROUND_UP(npages/8, sizeof(u64))?
>>>>>    >
>>>>>
>>>>> If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.
>>>>>
>>>>> Changing it as below
>>>>>
>>>>> -        range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
>>>>> +        range->bitmap.size = ROUND_UP(pages, sizeof(__u64) *
>>>>> BITS_PER_BYTE) /
>>>>> +                             BITS_PER_BYTE;
>>>>>
>>>>> I'm updating patches with these fixes and Cornelia's suggestion soon.
>>>>>
>>>>> Due to short of time I may not be able to address all the concerns
>>>>> raised on previous versions of QEMU, I'm trying make QEMU side code
>>>>> available for testing for others with latest kernel changes. Don't
>>>>> worry, I will revisit comments on QEMU patches. Right now first priority
>>>>> is to test kernel UAPI and prepare kernel patches for 5.8
>>>>>
>>>>
>>> hi Kirti
>>> by updating kernel/qemu to v23, still met below two types of errors.
>>> just basic migration test.
>>> (the guest VM size is 2G for all reported bugs).
>>>
>>> "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
>>>
>>
>> size doesn't look correct here, below check should be failing.
>>   range.size & (iommu_pgsize - 1)
>>
>>> or
>>>
>>> "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
>>> qemu-system-x86_64-lm: error while loading state section id 49(vfio)
>>> qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
>>>
>>>
>>
>> Above error is from:
>>          buf = g_try_malloc0(data_size);
>>          if (!buf) {
>>              error_report("%s: Error allocating buffer ", __func__);
>>              return -ENOMEM;
>>          }
>>
>> Seems you are running out of memory?
>>
> no. my host memory is about 60G.
> just migrate with command "migrate -d xxx" without speed limit.
> FYI.
> 

You will probably have to figure out why g_try_malloc0() is failing.
What is data_size when it fails?

Thanks,
Kirti

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-21  7:04             ` Yan Zhao
  2020-05-21  7:28               ` Kirti Wankhede
@ 2020-05-21  7:32               ` Kirti Wankhede
  1 sibling, 0 replies; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-21  7:32 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Zhengxiao.zx, kevin.tian, yi.l.liu, cjia, kvm, eskultet,
	ziye.yang, qemu-devel, cohuck, shuangtai.tst, dgilbert,
	zhi.a.wang, mlevitsk, pasic, aik, Alex Williamson, eauger,
	felipe, jonathan.davies, changpeng.liu, Ken.Xue



On 5/21/2020 12:34 PM, Yan Zhao wrote:
> On Thu, May 21, 2020 at 12:39:48PM +0530, Kirti Wankhede wrote:
>>
>>
>> On 5/21/2020 10:38 AM, Yan Zhao wrote:
>>> On Wed, May 20, 2020 at 10:46:12AM -0600, Alex Williamson wrote:
>>>> On Wed, 20 May 2020 19:10:07 +0530
>>>> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>>>>
>>>>> On 5/20/2020 8:25 AM, Yan Zhao wrote:
>>>>>> On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
>>>>>>> Hi folks,
>>>>>>>
>>>>>>> My impression is that we're getting pretty close to a workable
>>>>>>> implementation here with v22 plus respins of patches 5, 6, and 8.  We
>>>>>>> also have a matching QEMU series and a proposal for a new i40e
>>>>>>> consumer, as well as I assume GVT-g updates happening internally at
>>>>>>> Intel.  I expect all of the latter needs further review and discussion,
>>>>>>> but we should be at the point where we can validate these proposed
>>>>>>> kernel interfaces.  Therefore I'd like to make a call for reviews so
>>>>>>> that we can get this wrapped up for the v5.8 merge window.  I know
>>>>>>> Connie has some outstanding documentation comments and I'd like to make
>>>>>>> sure everyone has an opportunity to check that their comments have been
>>>>>>> addressed and we don't discover any new blocking issues.  Please send
>>>>>>> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
>>>>>>> interface and implementation.  Thanks!
>>>>>> hi Alex and Kirti,
>>>>>> after porting to qemu v22 and kernel v22, it is found out that
>>>>>> it can not even pass basic live migration test with error like
>>>>>>
>>>>>> "Failed to get dirty bitmap for iova: 0xca000 size: 0x3000 err: 22"
>>>>>
>>>>> Thanks for testing Yan.
>>>>> I think last moment change in below cause this failure
>>>>>
>>>>> https://lore.kernel.org/kvm/1589871178-8282-1-git-send-email-kwankhede@nvidia.com/
>>>>>
>>>>>    > 	if (dma->iova > iova + size)
>>>>>    > 		break;
>>>>>
>>>>> Surprisingly with my basic testing with 2G sys mem QEMU didn't raise
>>>>> abort on g_free, but I do hit this with large sys mem.
>>>>> With above change, that function iterated through next vfio_dma as well.
>>>>> Check should be as below:
>>>>>
>>>>> -               if (dma->iova > iova + size)
>>>>> +               if (dma->iova > iova + size -1)
>>>>
>>>>
>>>> Or just:
>>>>
>>>> 	if (dma->iova >= iova + size)
>>>>
>>>> Thanks,
>>>> Alex
>>>>
>>>>
>>>>>                            break;
>>>>>
>>>>> Another fix is in QEMU.
>>>>> https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg04751.html
>>>>>
>>>>>    > > +        range->bitmap.size = ROUND_UP(pages, 64) / 8;
>>>>>    >
>>>>>    > ROUND_UP(npages/8, sizeof(u64))?
>>>>>    >
>>>>>
>>>>> If npages < 8, npages/8 is 0 and ROUND_UP(0, 8) returns 0.
>>>>>
>>>>> Changing it as below
>>>>>
>>>>> -        range->bitmap.size = ROUND_UP(pages / 8, sizeof(uint64_t));
>>>>> +        range->bitmap.size = ROUND_UP(pages, sizeof(__u64) *
>>>>> BITS_PER_BYTE) /
>>>>> +                             BITS_PER_BYTE;
>>>>>
>>>>> I'm updating patches with these fixes and Cornelia's suggestion soon.
>>>>>
>>>>> Due to short of time I may not be able to address all the concerns
>>>>> raised on previous versions of QEMU, I'm trying make QEMU side code
>>>>> available for testing for others with latest kernel changes. Don't
>>>>> worry, I will revisit comments on QEMU patches. Right now first priority
>>>>> is to test kernel UAPI and prepare kernel patches for 5.8
>>>>>
>>>>
>>> hi Kirti
>>> by updating kernel/qemu to v23, still met below two types of errors.
>>> just basic migration test.
>>> (the guest VM size is 2G for all reported bugs).
>>>
>>> "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
>>>
>>
>> size doesn't look correct here, below check should be failing.
>>   range.size & (iommu_pgsize - 1)
>>
>>> or
>>>
>>> "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
>>> qemu-system-x86_64-lm: error while loading state section id 49(vfio)
>>> qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
>>>
>>>
>>
>> Above error is from:
>>          buf = g_try_malloc0(data_size);
>>          if (!buf) {
>>              error_report("%s: Error allocating buffer ", __func__);
>>              return -ENOMEM;
>>          }
>>
>> Seems you are running out of memory?
>>
> no. my host memory is about 60G.
> just migrate with command "migrate -d xxx" without speed limit.
> FYI.
> 

Traces are added in the migration code, so enabling the vfio_* traces on the
source and destination qemu command lines helps to debug and analyze any
migration-related errors.

Thanks,
Kirti




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-19 16:58 ` [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices Alex Williamson
  2020-05-20  2:55   ` Yan Zhao
@ 2020-05-25  6:59   ` Yan Zhao
  2020-05-25 13:20     ` Kirti Wankhede
  1 sibling, 1 reply; 40+ messages in thread
From: Yan Zhao @ 2020-05-25  6:59 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, qemu-devel, kvm

On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
> Hi folks,
> 
> My impression is that we're getting pretty close to a workable
> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> also have a matching QEMU series and a proposal for a new i40e
> consumer, as well as I assume GVT-g updates happening internally at
> Intel.  I expect all of the latter needs further review and discussion,
> but we should be at the point where we can validate these proposed
> kernel interfaces.  Therefore I'd like to make a call for reviews so
> that we can get this wrapped up for the v5.8 merge window.  I know
> Connie has some outstanding documentation comments and I'd like to make
> sure everyone has an opportunity to check that their comments have been
> addressed and we don't discover any new blocking issues.  Please send
> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> interface and implementation.  Thanks!
>
hi Alex
after porting the gvt/i40e vf migration code to kernel/qemu v23, we spotted
two bugs.
1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
   This is a qemu bug: the dirty bitmap query range is not the same
   as the dma map range. It can be fixed in qemu, and I just have a little
   concern about the kernel having this restriction.

2. migration aborts, reporting
"qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
qemu-system-x86_64-lm: error while loading state section id 49(vfio)
qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"

It's still a qemu bug and we can fix it with
"
if (migration->pending_bytes == 0) {
+            qemu_put_be64(f, 0);
+            qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
"
and actually there are some extra concerns about this part, as reported in
[1][2].

[1] data_size should be read ahead of data_offset
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
[2] should not repeatedly update pending_bytes in vfio_save_iterate()
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.

but as those errors are all in qemu, and we have finished basic tests in
both gvt & i40e, we're fine with the kernel part of the interface in general now
(except for my concern [1], which needs an update to kernel patch 1).

so I wonder which way is better in your mind: to give our reviewed-by to
the kernel part now, or to hold it until the next qemu fixes?
and as performance data from gvt was requested in your previous mail, is
that still required before the code is accepted?

BTW, we have also conducted some basic tests when viommu is on, and found out
errors like 
"qemu-system-x86_64-dt: vtd_iova_to_slpte: detected slpte permission error (iova=0x0, level=0x3, slpte=0x0, write=1)
qemu-system-x86_64-dt: vtd_iommu_translate: detected translation failure (dev=00:03:00, iova=0x0)
qemu-system-x86_64-dt: New fault is not recorded due to compression of faults".

Thanks
Yan





^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-25  6:59   ` Yan Zhao
@ 2020-05-25 13:20     ` Kirti Wankhede
  2020-05-26 20:19       ` Alex Williamson
  0 siblings, 1 reply; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-25 13:20 UTC (permalink / raw)
  To: Yan Zhao, Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	qemu-devel, kvm



On 5/25/2020 12:29 PM, Yan Zhao wrote:
> On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:
>> Hi folks,
>>
>> My impression is that we're getting pretty close to a workable
>> implementation here with v22 plus respins of patches 5, 6, and 8.  We
>> also have a matching QEMU series and a proposal for a new i40e
>> consumer, as well as I assume GVT-g updates happening internally at
>> Intel.  I expect all of the latter needs further review and discussion,
>> but we should be at the point where we can validate these proposed
>> kernel interfaces.  Therefore I'd like to make a call for reviews so
>> that we can get this wrapped up for the v5.8 merge window.  I know
>> Connie has some outstanding documentation comments and I'd like to make
>> sure everyone has an opportunity to check that their comments have been
>> addressed and we don't discover any new blocking issues.  Please send
>> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
>> interface and implementation.  Thanks!
>>
> hi Alex
> after porting gvt/i40e vf migration code to kernel/qemu v23, we spoted
> two bugs.
> 1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
>     This is a qemu bug that the dirty bitmap query range is not the same
>     as the dma map range. It can be fixed in qemu. and I just have a little
>     concern for kernel to have this restriction.
> 

I never saw this unaligned size in my testing. In this case, if you can
provide vfio_* event traces, that will be helpful.

> 2. migration abortion, reporting
> "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> 
> It's still a qemu bug and we can fixed it by
> "
> if (migration->pending_bytes == 0) {
> +            qemu_put_be64(f, 0);
> +            qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> "

In which function in QEMU do you have to add this?

> and actually there are some extra concerns about this part, as reported in
> [1][2].
> 
> [1] data_size should be read ahead of data_offset
> https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
> [2] should not repeatedly update pending_bytes in vfio_save_iterate()
> https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.
> 
> but as those errors are all in qemu, and we have finished basic tests in
> both gvt & i40e, we're fine with the kernel part interface in general now.
> (except for my concern [1], which needs to update kernel patch 1)
> 

 >> what if pending_bytes is not 0, but vendor driver just does not want to
 >> send data in this iteration? isn't it right to get data_size first
 >> before getting data_offset?

If the vendor driver doesn't want to send data but still has data in its
staging buffer, it can still report pending_bytes as 0 for this iteration,
since this is a trapped field.

I would defer this to Alex.

> so I wonder which way in your mind is better, to give our reviewed-by to
> the kernel part now, or hold until next qemu fixes?
> and as performance data from gvt is requested from your previous mail, is
> that still required before the code is accepted?
> 
> BTW, we have also conducted some basic tests when viommu is on, and found out
> errors like
> "qemu-system-x86_64-dt: vtd_iova_to_slpte: detected slpte permission error (iova=0x0, level=0x3, slpte=0x0, write=1)
> qemu-system-x86_64-dt: vtd_iommu_translate: detected translation failure (dev=00:03:00, iova=0x0)
> qemu-system-x86_64-dt: New fault is not recorded due to compression of faults".
> 

I saw these errors, I'm looking into it.

Thanks,
Kirti

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-25 13:20     ` Kirti Wankhede
@ 2020-05-26 20:19       ` Alex Williamson
  2020-05-27  6:23         ` Yan Zhao
  0 siblings, 1 reply; 40+ messages in thread
From: Alex Williamson @ 2020-05-26 20:19 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: Yan Zhao, cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu,
	mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, qemu-devel, kvm

On Mon, 25 May 2020 18:50:54 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 5/25/2020 12:29 PM, Yan Zhao wrote:
> > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> >> Hi folks,
> >>
> >> My impression is that we're getting pretty close to a workable
> >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> >> also have a matching QEMU series and a proposal for a new i40e
> >> consumer, as well as I assume GVT-g updates happening internally at
> >> Intel.  I expect all of the latter needs further review and discussion,
> >> but we should be at the point where we can validate these proposed
> >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> >> that we can get this wrapped up for the v5.8 merge window.  I know
> >> Connie has some outstanding documentation comments and I'd like to make
> >> sure everyone has an opportunity to check that their comments have been
> >> addressed and we don't discover any new blocking issues.  Please send
> >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> >> interface and implementation.  Thanks!
> >>  
> > hi Alex
> > after porting gvt/i40e vf migration code to kernel/qemu v23, we spoted
> > two bugs.
> > 1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
> >     This is a qemu bug that the dirty bitmap query range is not the same
> >     as the dma map range. It can be fixed in qemu. and I just have a little
> >     concern for kernel to have this restriction.
> >   
> 
> I never saw this unaligned size in my testing. In this case if you can 
> provide vfio_* event traces, that will helpful.

Yeah, I'm curious why we're hitting such a call path, I think we were
designing this under the assumption we wouldn't see these.  I also
wonder if we really need to enforce the dma mapping range for getting
the dirty bitmap with the current implementation (unmap+dirty obviously
still has the restriction).  We do shift the bitmap in place for
alignment, but I'm not sure why we couldn't shift it back and only
clear the range that was reported.  Kirti, do you see other issues?  I
think a patch to lift that restriction is something we could plan to
include after the initial series is included and before we've committed
to the uapi at the v5.8 release.
 
> > 2. migration abortion, reporting
> > "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> > qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> > qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> > 
> > It's still a qemu bug and we can fixed it by
> > "
> > if (migration->pending_bytes == 0) {
> > +            qemu_put_be64(f, 0);
> > +            qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > "  
> 
> In which function in QEMU do you have to add this?

I think this is relative to QEMU path 09/ where Yan had the questions
below on v16 and again tried to get answers to them on v22:

https://lore.kernel.org/qemu-devel/20200520031323.GB10369@joy-OptiPlex-7040/

Kirti, please address these questions.

> > and actually there are some extra concerns about this part, as reported in
> > [1][2].
> > 
> > [1] data_size should be read ahead of data_offset
> > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
> > [2] should not repeatedly update pending_bytes in vfio_save_iterate()
> > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.
> > 
> > but as those errors are all in qemu, and we have finished basic tests in
> > both gvt & i40e, we're fine with the kernel part interface in general now.
> > (except for my concern [1], which needs to update kernel patch 1)
> >   
> 
>  >> what if pending_bytes is not 0, but vendor driver just does not want  to
>  >> send data in this iteration? isn't it right to get data_size first   
> before
>  >> getting data_offset?  
> 
> If vendor driver doesn't want to send data but still has data in staging 
> buffer, vendor driver still can control to send pending_bytes for this 
> iteration as 0 as this is a trap field.
> 
> I would defer this to Alex.

This is my understanding of the protocol as well, when the device is
running, pending_bytes might drop to zero if no internal state has
changed and may be non-zero on the next iteration due to device
activity.  When the device is not running, pending_bytes reporting zero
indicates the device is done, there is no further state to transmit.
Does that meet your need/expectation?

> > so I wonder which way in your mind is better, to give our reviewed-by to
> > the kernel part now, or hold until next qemu fixes?
> > and as performance data from gvt is requested from your previous mail, is
> > that still required before the code is accepted?

The QEMU series does not need to be perfect, I kind of expect we might
see a few iterations of that beyond the kernel portion being accepted.
We should have the QEMU series to the point that we've resolved any
uapi issues though, which it seems like we're pretty close to having.
Ideally I'd like to get the kernel series into my next branch before
the merge window opens, where it seems like upstream is on schedule to
have that happen this Sunday.  If you feel we're to the point were we
can iron a couple details out during the v5.8 development cycle, then
please provide your reviewed-by.  We haven't fully committed to a uapi
until we've committed to it for a non-rc release.

I think the performance request was largely due to some conversations
with Dave Gilbert wondering if all this actually works AND is practical
for a LIVE migration.  I think we're all curious about things like how
much data does a GPU have to transfer in each phase of migration, and
particularly if the final phase is going to be a barrier to claiming
the VM is actually sufficiently live.  I'm not sure we have many
options if a device simply has a very large working set, but even
anecdotal evidence that the stop-and-copy phase transfers abMB from the
device while idle or xyzMB while active would give us some idea what to
expect.  Kirti, have you done any of those sorts of tests for NVIDIA's
driver?

> > BTW, we have also conducted some basic tests when viommu is on, and found out
> > errors like
> > "qemu-system-x86_64-dt: vtd_iova_to_slpte: detected slpte permission error (iova=0x0, level=0x3, slpte=0x0, write=1)
> > qemu-system-x86_64-dt: vtd_iommu_translate: detected translation failure (dev=00:03:00, iova=0x0)
> > qemu-system-x86_64-dt: New fault is not recorded due to compression of faults".
> >   
> 
> I saw these errors, I'm looking into it.

Let's try to at least determine if this is a uapi issue or just a QEMU
implementation bug for progressing the kernel series.  Thanks,

Alex


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-26 20:19       ` Alex Williamson
@ 2020-05-27  6:23         ` Yan Zhao
  2020-05-27  8:48           ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 40+ messages in thread
From: Yan Zhao @ 2020-05-27  6:23 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, qemu-devel, kvm

On Tue, May 26, 2020 at 02:19:39PM -0600, Alex Williamson wrote:
> On Mon, 25 May 2020 18:50:54 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
> > On 5/25/2020 12:29 PM, Yan Zhao wrote:
> > > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> > >> Hi folks,
> > >>
> > >> My impression is that we're getting pretty close to a workable
> > >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> > >> also have a matching QEMU series and a proposal for a new i40e
> > >> consumer, as well as I assume GVT-g updates happening internally at
> > >> Intel.  I expect all of the latter needs further review and discussion,
> > >> but we should be at the point where we can validate these proposed
> > >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> > >> that we can get this wrapped up for the v5.8 merge window.  I know
> > >> Connie has some outstanding documentation comments and I'd like to make
> > >> sure everyone has an opportunity to check that their comments have been
> > >> addressed and we don't discover any new blocking issues.  Please send
> > >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> > >> interface and implementation.  Thanks!
> > >>  
> > > hi Alex
> > > after porting gvt/i40e vf migration code to kernel/qemu v23, we spoted
> > > two bugs.
> > > 1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
> > >     This is a qemu bug that the dirty bitmap query range is not the same
> > >     as the dma map range. It can be fixed in qemu. and I just have a little
> > >     concern for kernel to have this restriction.
> > >   
> > 
> > I never saw this unaligned size in my testing. In this case if you can 
> > provide vfio_* event traces, that will helpful.
> 
> Yeah, I'm curious why we're hitting such a call path, I think we were
> designing this under the assumption we wouldn't see these.  I also
that's because the algorithm for computing the dirty bitmap query range still
does not exactly match the one used for the dma map range in vfio_dma_map().


> wonder if we really need to enforce the dma mapping range for getting
> the dirty bitmap with the current implementation (unmap+dirty obviously
> still has the restriction).  We do shift the bitmap in place for
> alignment, but I'm not sure why we couldn't shift it back and only
> clear the range that was reported.  Kirti, do you see other issues?  I
> think a patch to lift that restriction is something we could plan to
> include after the initial series is included and before we've committed
> to the uapi at the v5.8 release.
>  
> > > 2. migration abortion, reporting
> > > "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> > > qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> > > qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> > > 
> > > It's still a qemu bug and we can fixed it by
> > > "
> > > if (migration->pending_bytes == 0) {
> > > +            qemu_put_be64(f, 0);
> > > +            qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > "  
> > 
> > In which function in QEMU do you have to add this?
> 
> I think this is relative to QEMU path 09/ where Yan had the questions
> below on v16 and again tried to get answers to them on v22:
> 
> https://lore.kernel.org/qemu-devel/20200520031323.GB10369@joy-OptiPlex-7040/
> 
> Kirti, please address these questions.
> 
> > > and actually there are some extra concerns about this part, as reported in
> > > [1][2].
> > > 
> > > [1] data_size should be read ahead of data_offset
> > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
> > > [2] should not repeatedly update pending_bytes in vfio_save_iterate()
> > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.
> > > 
> > > but as those errors are all in qemu, and we have finished basic tests in
> > > both gvt & i40e, we're fine with the kernel part interface in general now.
> > > (except for my concern [1], which needs to update kernel patch 1)
> > >   
> > 
> >  >> what if pending_bytes is not 0, but vendor driver just does not want  to
> >  >> send data in this iteration? isn't it right to get data_size first   
> > before
> >  >> getting data_offset?  
> > 
> > If vendor driver doesn't want to send data but still has data in staging 
> > buffer, vendor driver still can control to send pending_bytes for this 
> > iteration as 0 as this is a trap field.
> > 
> > I would defer this to Alex.
> 
> This is my understanding of the protocol as well, when the device is
> running, pending_bytes might drop to zero if no internal state has
> changed and may be non-zero on the next iteration due to device
> activity.  When the device is not running, pending_bytes reporting zero
> indicates the device is done, there is no further state to transmit.
> Does that meet your need/expectation?
>
(1) on one side, as in vfio_save_pending(),
vfio_save_pending()
{
    ...
    ret = vfio_update_pending(vbasedev);
    ...
    *res_precopy_only += migration->pending_bytes;
    ...
}
the pending_bytes value tells the migration thread how much data is still
held on the device side.
the device data includes
device internal data + running device dirty data + device state.

so pending_bytes should include device state as well, right?
if so, pending_bytes should never reach 0 if there's any device
state left to be sent after the device is stopped.

(2) on the other side,
after we update pending_bytes in vfio_save_pending() and
enter vfio_save_iterate(), if we repeatedly update
pending_bytes in vfio_save_iterate(), it would enter a scenario
like

initially pending_bytes=500M.
vfio_save_iterate() -->
  round 1: transmitted 500M.
  round 2: update pending bytes, pending_bytes=50M (50M dirty data).
  round 3: update pending bytes, pending_bytes=50M.
  ...
  round N: update pending bytes, pending_bytes=50M.

If there are two vfio devices, vfio_save_iterate() for the second device
may never get a chance to be called, because there are always pending_bytes
produced by the first device, even if the amount is small.

> > > so I wonder which way in your mind is better, to give our reviewed-by to
> > > the kernel part now, or hold until next qemu fixes?
> > > and as performance data from gvt is requested from your previous mail, is
> > > that still required before the code is accepted?
> 
> The QEMU series does not need to be perfect, I kind of expect we might
> see a few iterations of that beyond the kernel portion being accepted.
> We should have the QEMU series to the point that we've resolved any
> uapi issues though, which it seems like we're pretty close to having.
> Ideally I'd like to get the kernel series into my next branch before
> the merge window opens, where it seems like upstream is on schedule to
> have that happen this Sunday.  If you feel we're to the point were we
> can iron a couple details out during the v5.8 development cycle, then
> please provide your reviewed-by.  We haven't fully committed to a uapi
> until we've committed to it for a non-rc release.
> 
got it.

> I think the performance request was largely due to some conversations
> with Dave Gilbert wondering if all this actually works AND is practical
> for a LIVE migration.  I think we're all curious about things like how
> much data does a GPU have to transfer in each phase of migration, and
> particularly if the final phase is going to be a barrier to claiming
> the VM is actually sufficiently live.  I'm not sure we have many
> options if a device simply has a very large working set, but even
> anecdotal evidence that the stop-and-copy phase transfers abMB from the
> device while idle or xyzMB while active would give us some idea what to
for intel vGPU, the data is
single-round dirty query:
data to be transferred at stop-and-copy phase: 90MB+ ~ 900MB+, including
- device state: 9MB
- system dirty memory: 80MB+ ~ 900MB+ (depending on workload type)

multi-round dirty query :
-each iteration data: 60MB ~ 400MB
-data to be transferred at stop-and-copy phase: 70MB ~ 400MB



BTW, for viommu, the downtime data is as below, under the same network
conditions and guest memory size, with no running dirty data/memory produced
by the device.
(1) viommu off
single-round dirty query: downtime ~100ms 
(2) viommu on
single-round dirty query: downtime 58s 

Thanks
Yan
> expect.  Kirti, have you done any of those sorts of tests for NVIDIA's
> driver?
> 
> > > BTW, we have also conducted some basic tests when viommu is on, and found out
> > > errors like
> > > "qemu-system-x86_64-dt: vtd_iova_to_slpte: detected slpte permission error (iova=0x0, level=0x3, slpte=0x0, write=1)
> > > qemu-system-x86_64-dt: vtd_iommu_translate: detected translation failure (dev=00:03:00, iova=0x0)
> > > qemu-system-x86_64-dt: New fault is not recorded due to compression of faults".
> > >   
> > 
> > I saw these errors, I'm looking into it.
> 
> Let's try to at least determine if this is a uapi issue or just a QEMU
> implementation bug for progressing the kernel series.  Thanks,
> 
> Alex
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-27  6:23         ` Yan Zhao
@ 2020-05-27  8:48           ` Dr. David Alan Gilbert
  2020-05-28  8:01             ` Yan Zhao
  2020-05-28 22:59             ` Alex Williamson
  0 siblings, 2 replies; 40+ messages in thread
From: Dr. David Alan Gilbert @ 2020-05-27  8:48 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Alex Williamson, Kirti Wankhede, cjia, kevin.tian, ziye.yang,
	changpeng.liu, yi.l.liu, mlevitsk, eskultet, cohuck,
	jonathan.davies, eauger, aik, pasic, felipe, Zhengxiao.zx,
	shuangtai.tst, Ken.Xue, zhi.a.wang, qemu-devel, kvm

* Yan Zhao (yan.y.zhao@intel.com) wrote:
> On Tue, May 26, 2020 at 02:19:39PM -0600, Alex Williamson wrote:
> > On Mon, 25 May 2020 18:50:54 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> > 
> > > On 5/25/2020 12:29 PM, Yan Zhao wrote:
> > > > On Tue, May 19, 2020 at 10:58:04AM -0600, Alex Williamson wrote:  
> > > >> Hi folks,
> > > >>
> > > >> My impression is that we're getting pretty close to a workable
> > > >> implementation here with v22 plus respins of patches 5, 6, and 8.  We
> > > >> also have a matching QEMU series and a proposal for a new i40e
> > > >> consumer, as well as I assume GVT-g updates happening internally at
> > > >> Intel.  I expect all of the latter needs further review and discussion,
> > > >> but we should be at the point where we can validate these proposed
> > > >> kernel interfaces.  Therefore I'd like to make a call for reviews so
> > > >> that we can get this wrapped up for the v5.8 merge window.  I know
> > > >> Connie has some outstanding documentation comments and I'd like to make
> > > >> sure everyone has an opportunity to check that their comments have been
> > > >> addressed and we don't discover any new blocking issues.  Please send
> > > >> your Acked-by/Reviewed-by/Tested-by tags if you're satisfied with this
> > > >> interface and implementation.  Thanks!
> > > >>  
> > > > hi Alex
> > > > after porting gvt/i40e vf migration code to kernel/qemu v23, we spotted
> > > > two bugs.
> > > > 1. "Failed to get dirty bitmap for iova: 0xfe011000 size: 0x3fb0 err: 22"
> > > >     This is a qemu bug: the dirty bitmap query range is not the same
> > > >     as the dma map range. It can be fixed in qemu, and I just have a slight
> > > >     concern about the kernel having this restriction.
> > > >   
> > > 
> > > I never saw this unaligned size in my testing. In this case, if you can 
> > > provide vfio_* event traces, that will be helpful.
> > 
> > Yeah, I'm curious why we're hitting such a call path, I think we were
> > designing this under the assumption we wouldn't see these.  I also
> that's because the algorithm for computing the dirty bitmap query range still
> does not exactly match the one used for the dma map range in vfio_dma_map().
> 
> 
> > wonder if we really need to enforce the dma mapping range for getting
> > the dirty bitmap with the current implementation (unmap+dirty obviously
> > still has the restriction).  We do shift the bitmap in place for
> > alignment, but I'm not sure why we couldn't shift it back and only
> > clear the range that was reported.  Kirti, do you see other issues?  I
> > think a patch to lift that restriction is something we could plan to
> > include after the initial series is included and before we've committed
> > to the uapi at the v5.8 release.
> >  
> > > > 2. migration aborts, reporting
> > > > "qemu-system-x86_64-lm: vfio_load_state: Error allocating buffer
> > > > qemu-system-x86_64-lm: error while loading state section id 49(vfio)
> > > > qemu-system-x86_64-lm: load of migration failed: Cannot allocate memory"
> > > > 
> > > > It's still a qemu bug and we can fix it by
> > > > "
> > > > if (migration->pending_bytes == 0) {
> > > > +            qemu_put_be64(f, 0);
> > > > +            qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > > "  
> > > 
> > > In which function in QEMU do you have to add this?
> > 
> > I think this is relative to QEMU path 09/ where Yan had the questions
> > below on v16 and again tried to get answers to them on v22:
> > 
> > https://lore.kernel.org/qemu-devel/20200520031323.GB10369@joy-OptiPlex-7040/
> > 
> > Kirti, please address these questions.
> > 
> > > > and actually there are some extra concerns about this part, as reported in
> > > > [1][2].
> > > > 
> > > > [1] data_size should be read ahead of data_offset
> > > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02795.html.
> > > > [2] should not repeatedly update pending_bytes in vfio_save_iterate()
> > > > https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg02796.html.
> > > > 
> > > > but as those errors are all in qemu, and we have finished basic tests in
> > > > both gvt & i40e, we're fine with the kernel part interface in general now.
> > > > (except for my concern [1], which needs to update kernel patch 1)
> > > >   
> > > 
> > >  >> what if pending_bytes is not 0, but vendor driver just does not want  to
> > >  >> send data in this iteration? isn't it right to get data_size first   
> > > before
> > >  >> getting data_offset?  
> > > 
> > > If the vendor driver doesn't want to send data but still has data in the 
> > > staging buffer, it can still choose to report pending_bytes as 0 for this 
> > > iteration, since this is a trap field.
> > > 
> > > I would defer this to Alex.
> > 
> > This is my understanding of the protocol as well, when the device is
> > running, pending_bytes might drop to zero if no internal state has
> > changed and may be non-zero on the next iteration due to device
> > activity.  When the device is not running, pending_bytes reporting zero
> > indicates the device is done, there is no further state to transmit.
> > Does that meet your need/expectation?
> >
> (1) on one side, as in vfio_save_pending(),
> vfio_save_pending()
> {
>     ...
>     ret = vfio_update_pending(vbasedev);
>     ...
>     *res_precopy_only += migration->pending_bytes;
>     ...
> }
> the pending_bytes tells the migration thread how much data is still held on
> the device side.
> the device data includes
> device internal data + running device dirty data + device state.
> 
> so the pending_bytes should include device state as well, right?
> if so, the pending_bytes should never reach 0 if there's any device
> state to be sent after device is stopped.

I hadn't expected pending_bytes to include a fixed offset for device
state (if you mean a few registers etc.) - I'd expect pending to drop,
possibly to zero; the heuristic for when to switch from iteration to
stop is based on the total pending across all iterated devices, so it
has to be allowed to drop, otherwise you'll never transition to stop.
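
As a rough illustration of that heuristic (a minimal sketch with hypothetical
names only, not QEMU's actual code):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Keep iterating while the total pending estimate (RAM plus every
     * iterated device) exceeds what we think can be sent within the
     * allowed downtime; only then stop the guest for the final copy.
     */
    static bool ready_for_stop_copy(uint64_t ram_pending,
                                    const uint64_t *vfio_pending, int ndevs,
                                    uint64_t downtime_threshold)
    {
        uint64_t total = ram_pending;
        int i;

        for (i = 0; i < ndevs; i++) {
            /* each device's estimate must be able to drop to 0 */
            total += vfio_pending[i];
        }

        return total <= downtime_threshold;
    }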

> (2) on the other side,
> alongside updating the pending_bytes in vfio_save_pending(), we
> enter vfio_save_iterate(); if we repeatedly update
> pending_bytes in vfio_save_iterate(), it runs into a scenario
> like
> 
> initially pending_bytes=500M.
> vfio_save_iterate() -->
>   round 1: transmitted 500M.
>   round 2: update pending bytes, pending_bytes=50M (50M dirty data).
>   round 3: update pending bytes, pending_bytes=50M.
>   ...
>   round N: update pending bytes, pending_bytes=50M.
> 
> If there are two vfio devices, vfio_save_iterate() for the second device
> may never get a chance to be called, because there's always pending_bytes
> produced by the first device, even if the size is small.

And between RAM and the vfio devices?

> > > > so I wonder which way in your mind is better, to give our reviewed-by to
> > > > the kernel part now, or hold until next qemu fixes?
> > > > and as performance data from gvt is requested from your previous mail, is
> > > > that still required before the code is accepted?
> > 
> > The QEMU series does not need to be perfect, I kind of expect we might
> > see a few iterations of that beyond the kernel portion being accepted.
> > We should have the QEMU series to the point that we've resolved any
> > uapi issues though, which it seems like we're pretty close to having.
> > Ideally I'd like to get the kernel series into my next branch before
> > the merge window opens, where it seems like upstream is on schedule to
> > have that happen this Sunday.  If you feel we're to the point where we
> > can iron a couple details out during the v5.8 development cycle, then
> > please provide your reviewed-by.  We haven't fully committed to a uapi
> > until we've committed to it for a non-rc release.
> > 
> got it.
> 
> > I think the performance request was largely due to some conversations
> > with Dave Gilbert wondering if all this actually works AND is practical
> > for a LIVE migration.  I think we're all curious about things like how
> > much data does a GPU have to transfer in each phase of migration, and
> > particularly if the final phase is going to be a barrier to claiming
> > the VM is actually sufficiently live.  I'm not sure we have many
> > options if a device simply has a very large working set, but even
> > anecdotal evidence that the stop-and-copy phase transfers abMB from the
> > device while idle or xyzMB while active would give us some idea what to
> for intel vGPU, the data is
> single-round dirty query:
> data to be transferred at stop-and-copy phase: 90MB+ ~ 900MB+, including
> - device state: 9MB
> - system dirty memory: 80MB+ ~ 900MB+ (depending on workload type)
> 
> multi-round dirty query :
> -each iteration data: 60MB ~ 400MB
> -data to be transferred at stop-and-copy phase: 70MB ~ 400MB
> 
> 
> 
> BTW, for viommu, the downtime data is as below. under the same network
> condition and guest memory size, and no running dirty data/memory produced
> by device.
> (1) viommu off
> single-round dirty query: downtime ~100ms 

Fine.

> (2) viommu on
> single-round dirty query: downtime 58s 

Youch.

Dave

> 
> Thanks
> Yan
> > expect.  Kirti, have you done any of those sorts of tests for NVIDIA's
> > driver?
> > 
> > > > BTW, we have also conducted some basic tests when viommu is on, and found out
> > > > errors like
> > > > "qemu-system-x86_64-dt: vtd_iova_to_slpte: detected slpte permission error (iova=0x0, level=0x3, slpte=0x0, write=1)
> > > > qemu-system-x86_64-dt: vtd_iommu_translate: detected translation failure (dev=00:03:00, iova=0x0)
> > > > qemu-system-x86_64-dt: New fault is not recorded due to compression of faults".
> > > >   
> > > 
> > > I saw these errors, I'm looking into it.
> > 
> > Let's try to at least determine if this is a uapi issue or just a QEMU
> > implementation bug for progressing the kernel series.  Thanks,
> > 
> > Alex
> > 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-27  8:48           ` Dr. David Alan Gilbert
@ 2020-05-28  8:01             ` Yan Zhao
  2020-05-28 22:53               ` Alex Williamson
  2020-05-28 22:59             ` Alex Williamson
  1 sibling, 1 reply; 40+ messages in thread
From: Yan Zhao @ 2020-05-28  8:01 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Alex Williamson, Kirti Wankhede, cjia, kevin.tian, ziye.yang,
	changpeng.liu, yi.l.liu, mlevitsk, eskultet, cohuck,
	jonathan.davies, eauger, aik, pasic, felipe, Zhengxiao.zx,
	shuangtai.tst, Ken.Xue, zhi.a.wang, qemu-devel, kvm

> > > This is my understanding of the protocol as well, when the device is
> > > running, pending_bytes might drop to zero if no internal state has
> > > changed and may be non-zero on the next iteration due to device
> > > activity.  When the device is not running, pending_bytes reporting zero
> > > indicates the device is done, there is no further state to transmit.
> > > Does that meet your need/expectation?
> > >
> > (1) on one side, as in vfio_save_pending(),
> > vfio_save_pending()
> > {
> >     ...
> >     ret = vfio_update_pending(vbasedev);
> >     ...
> >     *res_precopy_only += migration->pending_bytes;
> >     ...
> > }
> > the pending_bytes tells migration thread how much data is still hold in
> > device side.
> > the device data includes
> > device internal data + running device dirty data + device state.
> > 
> > so the pending_bytes should include device state as well, right?
> > if so, the pending_bytes should never reach 0 if there's any device
> > state to be sent after device is stopped.
> 
> I hadn't expected the pending-bytes to include a fixed offset for device
> state (If you mean a few registers etc) - I'd expect pending to drop
> possibly to zero;  the heuristic as to when to switch from iteration to
> stop, is based on the total pending across all iterated devices; so it's
> got to be allowed to drop otherwise you'll never transition to stop.
> 
ok. got it.

> > (2) on the other side,
> > along side we updated the pending_bytes in vfio_save_pending() and
> > enter into the vfio_save_iterate(), if we repeatedly update
> > pending_bytes in vfio_save_iterate(), it would enter into a scenario
> > like
> > 
> > initially pending_bytes=500M.
> > vfio_save_iterate() -->
> >   round 1: transmitted 500M.
> >   round 2: update pending bytes, pending_bytes=50M (50M dirty data).
> >   round 3: update pending bytes, pending_bytes=50M.
> >   ...
> >   round N: update pending bytes, pending_bytes=50M.
> > 
> > If there're two vfio devices, the vfio_save_iterate() for the second device
> > may never get chance to be called because there's always pending_bytes
> > produced by the first device, even the size if small.
> 
> And between RAM and the vfio devices?

yes, is that right?

Thanks
Yan

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-28  8:01             ` Yan Zhao
@ 2020-05-28 22:53               ` Alex Williamson
  2020-05-29 11:12                 ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 40+ messages in thread
From: Alex Williamson @ 2020-05-28 22:53 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Dr. David Alan Gilbert, Kirti Wankhede, cjia, kevin.tian,
	ziye.yang, changpeng.liu, yi.l.liu, mlevitsk, eskultet, cohuck,
	jonathan.davies, eauger, aik, pasic, felipe, Zhengxiao.zx,
	shuangtai.tst, Ken.Xue, zhi.a.wang, qemu-devel, kvm

On Thu, 28 May 2020 04:01:02 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> > > > This is my understanding of the protocol as well, when the device is
> > > > running, pending_bytes might drop to zero if no internal state has
> > > > changed and may be non-zero on the next iteration due to device
> > > > activity.  When the device is not running, pending_bytes reporting zero
> > > > indicates the device is done, there is no further state to transmit.
> > > > Does that meet your need/expectation?
> > > >  
> > > (1) on one side, as in vfio_save_pending(),
> > > vfio_save_pending()
> > > {
> > >     ...
> > >     ret = vfio_update_pending(vbasedev);
> > >     ...
> > >     *res_precopy_only += migration->pending_bytes;
> > >     ...
> > > }
> > > the pending_bytes tells migration thread how much data is still hold in
> > > device side.
> > > the device data includes
> > > device internal data + running device dirty data + device state.
> > > 
> > > so the pending_bytes should include device state as well, right?
> > > if so, the pending_bytes should never reach 0 if there's any device
> > > state to be sent after device is stopped.  
> > 
> > I hadn't expected the pending-bytes to include a fixed offset for device
> > state (If you mean a few registers etc) - I'd expect pending to drop
> > possibly to zero;  the heuristic as to when to switch from iteration to
> > stop, is based on the total pending across all iterated devices; so it's
> > got to be allowed to drop otherwise you'll never transition to stop.
> >   
> ok. got it.

Yeah, as I understand it, a device is not required to participate in
reporting data available while (_SAVING | _RUNNING), there will always
be an iteration while the device is !_RUNNING.  Therefore if you have
fixed device state that you're always going to send, it should only be
sent once when called during !_RUNNING.  The iterative phase should be
used where you have a good chance to avoid re-sending data at the
stop-and-copy phase.  Thanks,

Alex
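
For illustration, a rough sketch of how a vendor driver might derive the
pending_bytes it reports, following the behaviour described above.  Only the
VFIO_DEVICE_STATE_* bits come from the proposed uapi; struct my_vdev,
dev_dirty_bytes() and dev_device_state_bytes() are hypothetical driver
internals, not code from any real driver:

    static u64 my_vdev_pending_bytes(struct my_vdev *vdev)
    {
        u32 state = vdev->device_state;  /* mirrors the migration region field */

        if (!(state & VFIO_DEVICE_STATE_SAVING))
            return 0;

        if (state & VFIO_DEVICE_STATE_RUNNING) {
            /*
             * Pre-copy (_SAVING | _RUNNING): report only data worth sending
             * in this iteration.  This may legitimately be 0 and become
             * non-zero again later due to device activity.
             */
            return dev_dirty_bytes(vdev);
        }

        /*
         * Stop-and-copy (!_RUNNING): remaining dirty data plus the fixed
         * device state, which is transferred exactly once here.
         */
        return dev_dirty_bytes(vdev) + dev_device_state_bytes(vdev);
    }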


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-27  8:48           ` Dr. David Alan Gilbert
  2020-05-28  8:01             ` Yan Zhao
@ 2020-05-28 22:59             ` Alex Williamson
  2020-05-29  4:15               ` Yan Zhao
  2020-05-29 17:57               ` Kirti Wankhede
  1 sibling, 2 replies; 40+ messages in thread
From: Alex Williamson @ 2020-05-28 22:59 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Yan Zhao, Kirti Wankhede, cjia, kevin.tian, ziye.yang,
	changpeng.liu, yi.l.liu, mlevitsk, eskultet, cohuck,
	jonathan.davies, eauger, aik, pasic, felipe, Zhengxiao.zx,
	shuangtai.tst, Ken.Xue, zhi.a.wang, qemu-devel, kvm

On Wed, 27 May 2020 09:48:22 +0100
"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Yan Zhao (yan.y.zhao@intel.com) wrote:
> > BTW, for viommu, the downtime data is as below. under the same network
> > condition and guest memory size, and no running dirty data/memory produced
> > by device.
> > (1) viommu off
> > single-round dirty query: downtime ~100ms   
> 
> Fine.
> 
> > (2) viommu on
> > single-round dirty query: downtime 58s   
> 
> Youch.

Double Youch!  But we believe this is because we're getting the dirty
bitmap one IOMMU leaf page at a time, right?  We've enabled the kernel
to get a dirty bitmap across multiple mappings, but QEMU isn't yet
taking advantage of it.  Do I have this correct?  Thanks,

Alex
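
For reference, a userspace sketch of a single wide query, assuming the uapi
proposed in this series (VFIO_IOMMU_DIRTY_PAGES with the GET_BITMAP flag); the
iova/size here may span many small mappings, though they still have to line up
with vfio_dma boundaries, and error handling is trimmed:

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* bitmap must hold one bit per pgsize page of the queried range */
    static int get_dirty_bitmap(int container_fd, uint64_t iova, uint64_t size,
                                uint64_t pgsize, __u64 *bitmap)
    {
        struct vfio_iommu_type1_dirty_bitmap *db;
        struct vfio_iommu_type1_dirty_bitmap_get *range;
        int ret;

        db = calloc(1, sizeof(*db) + sizeof(*range));
        if (!db)
            return -1;

        db->argsz = sizeof(*db) + sizeof(*range);
        db->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;

        range = (struct vfio_iommu_type1_dirty_bitmap_get *)(void *)db->data;
        range->iova = iova;
        range->size = size;                              /* whole range, one call */
        range->bitmap.pgsize = pgsize;                   /* e.g. 4096 */
        range->bitmap.size = ((size / pgsize + 63) / 64) * 8;  /* bytes */
        range->bitmap.data = bitmap;

        ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, db);
        free(db);
        return ret;
    }

A per-mapping loop ends up issuing one such call per 4K page when viommu is
on, which is presumably where most of the 58s goes.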


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-28 22:59             ` Alex Williamson
@ 2020-05-29  4:15               ` Yan Zhao
  2020-05-29 17:57               ` Kirti Wankhede
  1 sibling, 0 replies; 40+ messages in thread
From: Yan Zhao @ 2020-05-29  4:15 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Dr. David Alan Gilbert, Kirti Wankhede, cjia, kevin.tian,
	ziye.yang, changpeng.liu, yi.l.liu, mlevitsk, eskultet, cohuck,
	jonathan.davies, eauger, aik, pasic, felipe, Zhengxiao.zx,
	shuangtai.tst, Ken.Xue, zhi.a.wang, qemu-devel, kvm

On Thu, May 28, 2020 at 04:59:06PM -0600, Alex Williamson wrote:
> On Wed, 27 May 2020 09:48:22 +0100
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Yan Zhao (yan.y.zhao@intel.com) wrote:
> > > BTW, for viommu, the downtime data is as below. under the same network
> > > condition and guest memory size, and no running dirty data/memory produced
> > > by device.
> > > (1) viommu off
> > > single-round dirty query: downtime ~100ms   
> > 
> > Fine.
> > 
> > > (2) viommu on
> > > single-round dirty query: downtime 58s   
> > 
> > Youch.
> 
> Double Youch!  But we believe this is because we're getting the dirty
> bitmap one IOMMU leaf page at a time, right?  We've enabled the kernel
> to get a dirty bitmap across multiple mappings, but QEMU isn't yet
> taking advantage of it.  Do I have this correct?  Thanks,
>
Yes, I think so, but I haven't looked into it yet.

Thanks
Yan

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-28 22:53               ` Alex Williamson
@ 2020-05-29 11:12                 ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 40+ messages in thread
From: Dr. David Alan Gilbert @ 2020-05-29 11:12 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Yan Zhao, Kirti Wankhede, cjia, kevin.tian, ziye.yang,
	changpeng.liu, yi.l.liu, mlevitsk, eskultet, cohuck,
	jonathan.davies, eauger, aik, pasic, felipe, Zhengxiao.zx,
	shuangtai.tst, Ken.Xue, zhi.a.wang, qemu-devel, kvm

* Alex Williamson (alex.williamson@redhat.com) wrote:
> On Thu, 28 May 2020 04:01:02 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> > > > > This is my understanding of the protocol as well, when the device is
> > > > > running, pending_bytes might drop to zero if no internal state has
> > > > > changed and may be non-zero on the next iteration due to device
> > > > > activity.  When the device is not running, pending_bytes reporting zero
> > > > > indicates the device is done, there is no further state to transmit.
> > > > > Does that meet your need/expectation?
> > > > >  
> > > > (1) on one side, as in vfio_save_pending(),
> > > > vfio_save_pending()
> > > > {
> > > >     ...
> > > >     ret = vfio_update_pending(vbasedev);
> > > >     ...
> > > >     *res_precopy_only += migration->pending_bytes;
> > > >     ...
> > > > }
> > > > the pending_bytes tells migration thread how much data is still hold in
> > > > device side.
> > > > the device data includes
> > > > device internal data + running device dirty data + device state.
> > > > 
> > > > so the pending_bytes should include device state as well, right?
> > > > if so, the pending_bytes should never reach 0 if there's any device
> > > > state to be sent after device is stopped.  
> > > 
> > > I hadn't expected the pending-bytes to include a fixed offset for device
> > > state (If you mean a few registers etc) - I'd expect pending to drop
> > > possibly to zero;  the heuristic as to when to switch from iteration to
> > > stop, is based on the total pending across all iterated devices; so it's
> > > got to be allowed to drop otherwise you'll never transition to stop.
> > >   
> > ok. got it.
> 
> Yeah, as I understand it, a device is not required to participate in
> reporting data available while (_SAVING | _RUNNING), there will always
> be an iteration while the device is !_RUNNING.  Therefore if you have
> fixed device state that you're always going to send, it should only be
> sent once when called during !_RUNNING.  The iterative phase should be
> used where you have a good chance to avoid re-sending data at the
> stop-and-copy phase.  Thanks,

Right.

Dave

> Alex
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices
  2020-05-28 22:59             ` Alex Williamson
  2020-05-29  4:15               ` Yan Zhao
@ 2020-05-29 17:57               ` Kirti Wankhede
  1 sibling, 0 replies; 40+ messages in thread
From: Kirti Wankhede @ 2020-05-29 17:57 UTC (permalink / raw)
  To: Alex Williamson, Dr. David Alan Gilbert
  Cc: Yan Zhao, cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu,
	mlevitsk, eskultet, cohuck, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	qemu-devel, kvm



On 5/29/2020 4:29 AM, Alex Williamson wrote:
> On Wed, 27 May 2020 09:48:22 +0100
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
>> * Yan Zhao (yan.y.zhao@intel.com) wrote:
>>> BTW, for viommu, the downtime data is as below. under the same network
>>> condition and guest memory size, and no running dirty data/memory produced
>>> by device.
>>> (1) viommu off
>>> single-round dirty query: downtime ~100ms
>>
>> Fine.
>>
>>> (2) viommu on
>>> single-round dirty query: downtime 58s
>>
>> Youch.
> 
> Double Youch!  But we believe this is because we're getting the dirty
> bitmap one IOMMU leaf page at a time, right?  We've enabled the kernel
> to get a dirty bitmap across multiple mappings, but QEMU isn't yet
> taking advantage of it.  Do I have this correct?  Thanks,
> 

That's correct.

Thanks,
Kirti

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2020-05-29 17:58 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-18  5:56 [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices Kirti Wankhede
2020-05-18  5:56 ` [PATCH Kernel v22 1/8] vfio: UAPI for migration interface for device state Kirti Wankhede
2020-05-18  5:56 ` [PATCH Kernel v22 2/8] vfio iommu: Remove atomicity of ref_count of pinned pages Kirti Wankhede
2020-05-18  5:56 ` [PATCH Kernel v22 3/8] vfio iommu: Cache pgsize_bitmap in struct vfio_iommu Kirti Wankhede
2020-05-20 10:08   ` Cornelia Huck
2020-05-20 14:46     ` Kirti Wankhede
2020-05-18  5:56 ` [PATCH Kernel v22 4/8] vfio iommu: Add ioctl definition for dirty pages tracking Kirti Wankhede
2020-05-18  5:56 ` [PATCH Kernel v22 5/8] vfio iommu: Implementation of ioctl " Kirti Wankhede
2020-05-18 21:53   ` Alex Williamson
2020-05-19  7:11     ` Kirti Wankhede
2020-05-19  6:52       ` Kirti Wankhede
2020-05-18  5:56 ` [PATCH Kernel v22 6/8] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
2020-05-19  6:54   ` Kirti Wankhede
2020-05-20 10:27     ` Cornelia Huck
2020-05-20 15:16       ` Kirti Wankhede
2020-05-18  5:56 ` [PATCH Kernel v22 7/8] vfio iommu: Add migration capability to report supported features Kirti Wankhede
2020-05-20 10:42   ` Cornelia Huck
2020-05-20 15:23     ` Kirti Wankhede
2020-05-18  5:56 ` [PATCH Kernel v22 8/8] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
2020-05-19  6:54   ` Kirti Wankhede
2020-05-19 16:58 ` [PATCH Kernel v22 0/8] Add UAPIs to support migration for VFIO devices Alex Williamson
2020-05-20  2:55   ` Yan Zhao
2020-05-20 13:40     ` Kirti Wankhede
2020-05-20 16:46       ` Alex Williamson
2020-05-21  5:08         ` Yan Zhao
2020-05-21  7:09           ` Kirti Wankhede
2020-05-21  7:04             ` Yan Zhao
2020-05-21  7:28               ` Kirti Wankhede
2020-05-21  7:32               ` Kirti Wankhede
2020-05-25  6:59   ` Yan Zhao
2020-05-25 13:20     ` Kirti Wankhede
2020-05-26 20:19       ` Alex Williamson
2020-05-27  6:23         ` Yan Zhao
2020-05-27  8:48           ` Dr. David Alan Gilbert
2020-05-28  8:01             ` Yan Zhao
2020-05-28 22:53               ` Alex Williamson
2020-05-29 11:12                 ` Dr. David Alan Gilbert
2020-05-28 22:59             ` Alex Williamson
2020-05-29  4:15               ` Yan Zhao
2020-05-29 17:57               ` Kirti Wankhede
