kvm.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v14 Kernel 0/7] KABIs to support migration for VFIO devices
@ 2020-03-18 19:41 Kirti Wankhede
  2020-03-18 19:41 ` [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
                   ` (6 more replies)
  0 siblings, 7 replies; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-18 19:41 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Hi,

This patch set adds:
* New IOCTL VFIO_IOMMU_DIRTY_PAGES to get dirty pages bitmap with
  respect to IOMMU container rather than per device. All pages pinned by
  vendor driver through vfio_pin_pages external API has to be marked as
  dirty during  migration. When IOMMU capable device is present in the
  container and all pages are pinned and mapped, then all pages are marked
  dirty.
  When there are CPU writes, CPU dirty page tracking can identify dirtied
  pages, but any page pinned by vendor driver can also be written by
  device. As of now there is no device which has hardware support for
  dirty page tracking. So all pages which are pinned should be considered
  as dirty.
  This ioctl is also used to start/stop dirty pages tracking for pinned and
  unpinned pages while migration is active.

* Updated IOCTL VFIO_IOMMU_UNMAP_DMA to get dirty pages bitmap before
  unmapping IO virtual address range.
  With vIOMMU, during pre-copy phase of migration, while CPUs are still
  running, IO virtual address unmap can happen while device still keeping
  reference of guest pfns. Those pages should be reported as dirty before
  unmap, so that VFIO user space application can copy content of those
  pages from source to destination.

* Patch 7 detect if IOMMU capable device driver is smart to report pages
  to be marked dirty by pinning pages using vfio_pin_pages() API.


Yet TODO:
Since there is no device which has hardware support for system memmory
dirty bitmap tracking, right now there is no other API from vendor driver
to VFIO IOMMU module to report dirty pages. In future, when such hardware
support will be implemented, an API will be required such that vendor
driver could report dirty pages to VFIO module during migration phases.

Adding revision history from previous QEMU patch set to understand KABI
changes done till now

v13 -> v14
- Added struct vfio_bitmap to kabi. updated structure
  vfio_iommu_type1_dirty_bitmap_get and vfio_iommu_type1_dma_unmap.
- All small changes suggested by Alex.
- Patches are on tag: next-20200318 and 1-3 patches from Yan's series
  https://lkml.org/lkml/2020/3/12/1255

v12 -> v13
- Changed bitmap allocation in vfio_iommu_type1 to per vfio_dma
- Changed VFIO_IOMMU_DIRTY_PAGES ioctl behaviour to be per vfio_dma range.
- Changed vfio_iommu_type1_dirty_bitmap structure to have separate data
  field.

v11 -> v12
- Changed bitmap allocation in vfio_iommu_type1.
- Remove atomicity of ref_count.
- Updated comments for migration device state structure about error
  reporting.
- Nit picks from v11 reviews

v10 -> v11
- Fix pin pages API to free vpfn if it is marked as unpinned tracking page.
- Added proposal to detect if IOMMU capable device calls external pin pages
  API to mark pages dirty.
- Nit picks from v10 reviews

v9 -> v10:
- Updated existing VFIO_IOMMU_UNMAP_DMA ioctl to get dirty pages bitmap
  during unmap while migration is active
- Added flag in VFIO_IOMMU_GET_INFO to indicate driver support dirty page
  tracking.
- If iommu_mapped, mark all pages dirty.
- Added unpinned pages tracking while migration is active.
- Updated comments for migration device state structure with bit
  combination table and state transition details.

v8 -> v9:
- Split patch set in 2 sets, Kernel and QEMU.
- Dirty pages bitmap is queried from IOMMU container rather than from
  vendor driver for per device. Added 2 ioctls to achieve this.

v7 -> v8:
- Updated comments for KABI
- Added BAR address validation check during PCI device's config space load
  as suggested by Dr. David Alan Gilbert.
- Changed vfio_migration_set_state() to set or clear device state flags.
- Some nit fixes.

v6 -> v7:
- Fix build failures.

v5 -> v6:
- Fix build failure.

v4 -> v5:
- Added decriptive comment about the sequence of access of members of
  structure vfio_device_migration_info to be followed based on Alex's
  suggestion
- Updated get dirty pages sequence.
- As per Cornelia Huck's suggestion, added callbacks to VFIODeviceOps to
  get_object, save_config and load_config.
- Fixed multiple nit picks.
- Tested live migration with multiple vfio device assigned to a VM.

v3 -> v4:
- Added one more bit for _RESUMING flag to be set explicitly.
- data_offset field is read-only for user space application.
- data_size is read for every iteration before reading data from migration,
  that is removed assumption that data will be till end of migration
  region.
- If vendor driver supports mappable sparsed region, map those region
  during setup state of save/load, similarly unmap those from cleanup
  routines.
- Handles race condition that causes data corruption in migration region
  during save device state by adding mutex and serialiaing save_buffer and
  get_dirty_pages routines.
- Skip called get_dirty_pages routine for mapped MMIO region of device.
- Added trace events.
- Split into multiple functional patches.

v2 -> v3:
- Removed enum of VFIO device states. Defined VFIO device state with 2
  bits.
- Re-structured vfio_device_migration_info to keep it minimal and defined
  action on read and write access on its members.

v1 -> v2:
- Defined MIGRATION region type and sub-type which should be used with
  region type capability.
- Re-structured vfio_device_migration_info. This structure will be placed
  at 0th offset of migration region.
- Replaced ioctl with read/write for trapped part of migration region.
- Added both type of access support, trapped or mmapped, for data section
  of the region.
- Moved PCI device functions to pci file.
- Added iteration to get dirty page bitmap until bitmap for all requested
  pages are copied.

Thanks,
Kirti


Kirti Wankhede (7):
  vfio: KABI for migration interface for device state
  vfio iommu: Remove atomicity of ref_count of pinned pages
  vfio iommu: Add ioctl definition for dirty pages tracking.
  vfio iommu: Implementation of ioctl for dirty pages tracking.
  vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  vfio iommu: Adds flag to indicate dirty pages tracking capability
    support
  vfio: Selective dirty page tracking if IOMMU backed device pins pages

 drivers/vfio/vfio.c             |  13 +-
 drivers/vfio/vfio_iommu_type1.c | 345 ++++++++++++++++++++++++++++++++++++++--
 include/linux/vfio.h            |   4 +-
 include/uapi/linux/vfio.h       | 298 +++++++++++++++++++++++++++++++++-
 4 files changed, 642 insertions(+), 18 deletions(-)

-- 
2.7.0


^ permalink raw reply	[flat|nested] 47+ messages in thread

* [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-18 19:41 [PATCH v14 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
@ 2020-03-18 19:41 ` Kirti Wankhede
  2020-03-19  1:17   ` Yan Zhao
  2020-03-23 11:45   ` Auger Eric
  2020-03-18 19:41 ` [PATCH v14 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages Kirti Wankhede
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-18 19:41 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

- Defined MIGRATION region type and sub-type.

- Defined vfio_device_migration_info structure which will be placed at the
  0th offset of migration region to get/set VFIO device related
  information. Defined members of structure and usage on read/write access.

- Defined device states and state transition details.

- Defined sequence to be followed while saving and resuming VFIO device.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 227 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 9e843a147ead..d0021467af53 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
 #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
 #define VFIO_REGION_TYPE_GFX                    (1)
 #define VFIO_REGION_TYPE_CCW			(2)
+#define VFIO_REGION_TYPE_MIGRATION              (3)
 
 /* sub-types for VFIO_REGION_TYPE_PCI_* */
 
@@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
 /* sub-types for VFIO_REGION_TYPE_CCW */
 #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
 
+/* sub-types for VFIO_REGION_TYPE_MIGRATION */
+#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
+
+/*
+ * The structure vfio_device_migration_info is placed at the 0th offset of
+ * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
+ * migration information. Field accesses from this structure are only supported
+ * at their native width and alignment. Otherwise, the result is undefined and
+ * vendor drivers should return an error.
+ *
+ * device_state: (read/write)
+ *      - The user application writes to this field to inform the vendor driver
+ *        about the device state to be transitioned to.
+ *      - The vendor driver should take the necessary actions to change the
+ *        device state. After successful transition to a given state, the
+ *        vendor driver should return success on write(device_state, state)
+ *        system call. If the device state transition fails, the vendor driver
+ *        should return an appropriate -errno for the fault condition.
+ *      - On the user application side, if the device state transition fails,
+ *	  that is, if write(device_state, state) returns an error, read
+ *	  device_state again to determine the current state of the device from
+ *	  the vendor driver.
+ *      - The vendor driver should return previous state of the device unless
+ *        the vendor driver has encountered an internal error, in which case
+ *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
+ *      - The user application must use the device reset ioctl to recover the
+ *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
+ *        indicated to be in a valid device state by reading device_state, the
+ *        user application may attempt to transition the device to any valid
+ *        state reachable from the current state or terminate itself.
+ *
+ *      device_state consists of 3 bits:
+ *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
+ *        it indicates the _STOP state. When the device state is changed to
+ *        _STOP, driver should stop the device before write() returns.
+ *      - If bit 1 is set, it indicates the _SAVING state, which means that the
+ *        driver should start gathering device state information that will be
+ *        provided to the VFIO user application to save the device's state.
+ *      - If bit 2 is set, it indicates the _RESUMING state, which means that
+ *        the driver should prepare to resume the device. Data provided through
+ *        the migration region should be used to resume the device.
+ *      Bits 3 - 31 are reserved for future use. To preserve them, the user
+ *      application should perform a read-modify-write operation on this
+ *      field when modifying the specified bits.
+ *
+ *  +------- _RESUMING
+ *  |+------ _SAVING
+ *  ||+----- _RUNNING
+ *  |||
+ *  000b => Device Stopped, not saving or resuming
+ *  001b => Device running, which is the default state
+ *  010b => Stop the device & save the device state, stop-and-copy state
+ *  011b => Device running and save the device state, pre-copy state
+ *  100b => Device stopped and the device state is resuming
+ *  101b => Invalid state
+ *  110b => Error state
+ *  111b => Invalid state
+ *
+ * State transitions:
+ *
+ *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
+ *                (100b)     (001b)     (011b)        (010b)       (000b)
+ * 0. Running or default state
+ *                             |
+ *
+ * 1. Normal Shutdown (optional)
+ *                             |------------------------------------->|
+ *
+ * 2. Save the state or suspend
+ *                             |------------------------->|---------->|
+ *
+ * 3. Save the state during live migration
+ *                             |----------->|------------>|---------->|
+ *
+ * 4. Resuming
+ *                  |<---------|
+ *
+ * 5. Resumed
+ *                  |--------->|
+ *
+ * 0. Default state of VFIO device is _RUNNNG when the user application starts.
+ * 1. During normal shutdown of the user application, the user application may
+ *    optionally change the VFIO device state from _RUNNING to _STOP. This
+ *    transition is optional. The vendor driver must support this transition but
+ *    must not require it.
+ * 2. When the user application saves state or suspends the application, the
+ *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
+ *    On state transition from _RUNNING to stop-and-copy, driver must stop the
+ *    device, save the device state and send it to the application through the
+ *    migration region. The sequence to be followed for such transition is given
+ *    below.
+ * 3. In live migration of user application, the state transitions from _RUNNING
+ *    to pre-copy, to stop-and-copy, and to _STOP.
+ *    On state transition from _RUNNING to pre-copy, the driver should start
+ *    gathering the device state while the application is still running and send
+ *    the device state data to application through the migration region.
+ *    On state transition from pre-copy to stop-and-copy, the driver must stop
+ *    the device, save the device state and send it to the user application
+ *    through the migration region.
+ *    Vendor drivers must support the pre-copy state even for implementations
+ *    where no data is provided to the user before the stop-and-copy state. The
+ *    user must not be required to consume all migration data before the device
+ *    transitions to a new state, including the stop-and-copy state.
+ *    The sequence to be followed for above two transitions is given below.
+ * 4. To start the resuming phase, the device state should be transitioned from
+ *    the _RUNNING to the _RESUMING state.
+ *    In the _RESUMING state, the driver should use the device state data
+ *    received through the migration region to resume the device.
+ * 5. After providing saved device data to the driver, the application should
+ *    change the state from _RESUMING to _RUNNING.
+ *
+ * reserved:
+ *      Reads on this field return zero and writes are ignored.
+ *
+ * pending_bytes: (read only)
+ *      The number of pending bytes still to be migrated from the vendor driver.
+ *
+ * data_offset: (read only)
+ *      The user application should read data_offset in the migration region
+ *      from where the user application should read the device data during the
+ *      _SAVING state or write the device data during the _RESUMING state. See
+ *      below for details of sequence to be followed.
+ *
+ * data_size: (read/write)
+ *      The user application should read data_size to get the size in bytes of
+ *      the data copied in the migration region during the _SAVING state and
+ *      write the size in bytes of the data copied in the migration region
+ *      during the _RESUMING state.
+ *
+ * The format of the migration region is as follows:
+ *  ------------------------------------------------------------------
+ * |vfio_device_migration_info|    data section                      |
+ * |                          |     ///////////////////////////////  |
+ * ------------------------------------------------------------------
+ *   ^                              ^
+ *  offset 0-trapped part        data_offset
+ *
+ * The structure vfio_device_migration_info is always followed by the data
+ * section in the region, so data_offset will always be nonzero. The offset
+ * from where the data is copied is decided by the kernel driver. The data
+ * section can be trapped, mapped, or partitioned, depending on how the kernel
+ * driver defines the data section. The data section partition can be defined
+ * as mapped by the sparse mmap capability. If mmapped, data_offset should be
+ * page aligned, whereas initial section which contains the
+ * vfio_device_migration_info structure, might not end at the offset, which is
+ * page aligned. The user is not required to access through mmap regardless
+ * of the capabilities of the region mmap.
+ * The vendor driver should determine whether and how to partition the data
+ * section. The vendor driver should return data_offset accordingly.
+ *
+ * The sequence to be followed for the _SAVING|_RUNNING device state or
+ * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
+ * follows:
+ * a. Read pending_bytes, indicating the start of a new iteration to get device
+ *    data. Repeated read on pending_bytes at this stage should have no side
+ *    effects.
+ *    If pending_bytes == 0, the user application should not iterate to get data
+ *    for that device.
+ *    If pending_bytes > 0, perform the following steps.
+ * b. Read data_offset, indicating that the vendor driver should make data
+ *    available through the data section. The vendor driver should return this
+ *    read operation only after data is available from (region + data_offset)
+ *    to (region + data_offset + data_size).
+ * c. Read data_size, which is the amount of data in bytes available through
+ *    the migration region.
+ *    Read on data_offset and data_size should return the offset and size of
+ *    the current buffer if the user application reads data_offset and
+ *    data_size more than once here.
+ * d. Read data_size bytes of data from (region + data_offset) from the
+ *    migration region.
+ * e. Process the data.
+ * f. Read pending_bytes, which indicates that the data from the previous
+ *    iteration has been read. If pending_bytes > 0, go to step b.
+ *
+ * If an error occurs during the above sequence, the vendor driver can return
+ * an error code for next read() or write() operation, which will terminate the
+ * loop. The user application should then take the next necessary action, for
+ * example, failing migration or terminating the user application.
+ *
+ * The user application can transition from the _SAVING|_RUNNING
+ * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
+ * number of pending bytes. The user application should iterate in _SAVING
+ * (stop-and-copy) until pending_bytes is 0.
+ *
+ * The sequence to be followed while _RESUMING device state is as follows:
+ * While data for this device is available, repeat the following steps:
+ * a. Read data_offset from where the user application should write data.
+ * b. Write migration data starting at the migration region + data_offset for
+ *    the length determined by data_size from the migration source.
+ * c. Write data_size, which indicates to the vendor driver that data is
+ *    written in the migration region. Vendor driver should apply the
+ *    user-provided migration region data to the device resume state.
+ *
+ * For the user application, data is opaque. The user application should write
+ * data in the same order as the data is received and the data should be of
+ * same transaction size at the source.
+ */
+
+struct vfio_device_migration_info {
+	__u32 device_state;         /* VFIO device state */
+#define VFIO_DEVICE_STATE_STOP      (0)
+#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
+#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
+#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
+#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
+				     VFIO_DEVICE_STATE_SAVING |  \
+				     VFIO_DEVICE_STATE_RESUMING)
+
+#define VFIO_DEVICE_STATE_VALID(state) \
+	(state & VFIO_DEVICE_STATE_RESUMING ? \
+	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
+
+#define VFIO_DEVICE_STATE_IS_ERROR(state) \
+	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
+					      VFIO_DEVICE_STATE_RESUMING))
+
+#define VFIO_DEVICE_STATE_SET_ERROR(state) \
+	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
+					     VFIO_DEVICE_STATE_RESUMING)
+
+	__u32 reserved;
+	__u64 pending_bytes;
+	__u64 data_offset;
+	__u64 data_size;
+} __attribute__((packed));
+
 /*
  * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
  * which allows direct access to non-MSIX registers which happened to be within
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v14 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages
  2020-03-18 19:41 [PATCH v14 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
  2020-03-18 19:41 ` [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
@ 2020-03-18 19:41 ` Kirti Wankhede
  2020-03-23 11:59   ` Auger Eric
  2020-03-18 19:41 ` [PATCH v14 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking Kirti Wankhede
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-18 19:41 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

vfio_pfn.ref_count is always updated by holding iommu->lock, using atomic
variable is overkill.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 9fdfae1cb17a..70aeab921d0f 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -112,7 +112,7 @@ struct vfio_pfn {
 	struct rb_node		node;
 	dma_addr_t		iova;		/* Device address */
 	unsigned long		pfn;		/* Host pfn */
-	atomic_t		ref_count;
+	unsigned int		ref_count;
 };
 
 struct vfio_regions {
@@ -233,7 +233,7 @@ static int vfio_add_to_pfn_list(struct vfio_dma *dma, dma_addr_t iova,
 
 	vpfn->iova = iova;
 	vpfn->pfn = pfn;
-	atomic_set(&vpfn->ref_count, 1);
+	vpfn->ref_count = 1;
 	vfio_link_pfn(dma, vpfn);
 	return 0;
 }
@@ -251,7 +251,7 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
 	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
 
 	if (vpfn)
-		atomic_inc(&vpfn->ref_count);
+		vpfn->ref_count++;
 	return vpfn;
 }
 
@@ -259,7 +259,8 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
 {
 	int ret = 0;
 
-	if (atomic_dec_and_test(&vpfn->ref_count)) {
+	vpfn->ref_count--;
+	if (!vpfn->ref_count) {
 		ret = put_pfn(vpfn->pfn, dma->prot);
 		vfio_remove_from_pfn_list(dma, vpfn);
 	}
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v14 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking.
  2020-03-18 19:41 [PATCH v14 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
  2020-03-18 19:41 ` [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
  2020-03-18 19:41 ` [PATCH v14 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages Kirti Wankhede
@ 2020-03-18 19:41 ` Kirti Wankhede
  2020-03-19  3:44   ` Alex Williamson
  2020-03-18 19:41 ` [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl " Kirti Wankhede
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-18 19:41 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

IOMMU container maintains a list of all pages pinned by vfio_pin_pages API.
All pages pinned by vendor driver through this API should be considered as
dirty during migration. When container consists of IOMMU capable device and
all pages are pinned and mapped, then all pages are marked dirty.
Added support to start/stop pinned and unpinned pages tracking and to get
bitmap of all dirtied pages for requested IO virtual address range.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 include/uapi/linux/vfio.h | 55 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index d0021467af53..043e9eafb255 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -995,6 +995,12 @@ struct vfio_iommu_type1_dma_map {
 
 #define VFIO_IOMMU_MAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 13)
 
+struct vfio_bitmap {
+	__u64        pgsize;	/* page size for bitmap */
+	__u64        size;	/* in bytes */
+	__u64 __user *data;	/* one bit per page */
+};
+
 /**
  * VFIO_IOMMU_UNMAP_DMA - _IOWR(VFIO_TYPE, VFIO_BASE + 14,
  *							struct vfio_dma_unmap)
@@ -1021,6 +1027,55 @@ struct vfio_iommu_type1_dma_unmap {
 #define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
 #define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
 
+/**
+ * VFIO_IOMMU_DIRTY_PAGES - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *                                     struct vfio_iommu_type1_dirty_bitmap)
+ * IOCTL is used for dirty pages tracking. Caller sets argsz, which is size of
+ * struct vfio_iommu_type1_dirty_bitmap. Caller set flag depend on which
+ * operation to perform, details as below:
+ *
+ * When IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_START set, indicates
+ * migration is active and IOMMU module should track pages which are pinned and
+ * could be dirtied by device.
+ * Dirty pages are tracked until tracking is stopped by user application by
+ * setting VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag.
+ *
+ * When IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP set, indicates
+ * IOMMU should stop tracking pinned pages.
+ *
+ * When IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP flag set,
+ * IOCTL returns dirty pages bitmap for IOMMU container during migration for
+ * given IOVA range. User must provide data[] as the structure
+ * vfio_iommu_type1_dirty_bitmap_get through which user provides IOVA range and
+ * pgsize. This interface supports to get bitmap of smallest supported pgsize
+ * only and can be modified in future to get bitmap of specified pgsize.
+ * User must allocate memory for bitmap, zero the bitmap memory and set size
+ * of allocated memory in bitmap_size field. One bit is used to represent one
+ * page consecutively starting from iova offset. User should provide page size
+ * in 'pgsize'. Bit set in bitmap indicates page at that offset from iova is
+ * dirty. Caller must set argsz including size of structure
+ * vfio_iommu_type1_dirty_bitmap_get.
+ *
+ * Only one flag should be set at a time.
+ *
+ */
+struct vfio_iommu_type1_dirty_bitmap {
+	__u32        argsz;
+	__u32        flags;
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_START	(1 << 0)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP	(1 << 1)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP	(1 << 2)
+	__u8         data[];
+};
+
+struct vfio_iommu_type1_dirty_bitmap_get {
+	__u64              iova;	/* IO virtual address */
+	__u64              size;	/* Size of iova range */
+	struct vfio_bitmap bitmap;
+};
+
+#define VFIO_IOMMU_DIRTY_PAGES             _IO(VFIO_TYPE, VFIO_BASE + 17)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-18 19:41 [PATCH v14 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
                   ` (2 preceding siblings ...)
  2020-03-18 19:41 ` [PATCH v14 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking Kirti Wankhede
@ 2020-03-18 19:41 ` Kirti Wankhede
  2020-03-19  3:06   ` Yan Zhao
  2020-03-19  3:45   ` Alex Williamson
  2020-03-18 19:41 ` [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-18 19:41 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
- Start dirty pages tracking while migration is active
- Stop dirty pages tracking.
- Get dirty pages bitmap. Its user space application's responsibility to
  copy content of dirty pages from source to destination during migration.

To prevent DoS attack, memory for bitmap is allocated per vfio_dma
structure. Bitmap size is calculated considering smallest supported page
size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled

Bitmap is populated for already pinned pages when bitmap is allocated for
a vfio_dma with the smallest supported page size. Update bitmap from
pinning functions when tracking is enabled. When user application queries
bitmap, check if requested page size is same as page size used to
populated bitmap. If it is equal, copy bitmap, but if not equal, return
error.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 203 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 70aeab921d0f..d6417fb02174 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -71,6 +71,7 @@ struct vfio_iommu {
 	unsigned int		dma_avail;
 	bool			v2;
 	bool			nesting;
+	bool			dirty_page_tracking;
 };
 
 struct vfio_domain {
@@ -91,6 +92,7 @@ struct vfio_dma {
 	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
 	struct task_struct	*task;
 	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
+	unsigned long		*bitmap;
 };
 
 struct vfio_group {
@@ -125,7 +127,10 @@ struct vfio_regions {
 #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
 					(!list_empty(&iommu->domain_list))
 
+#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
+
 static int put_pfn(unsigned long pfn, int prot);
+static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
 
 /*
  * This code handles mapping and unmapping of user data buffers
@@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
 	rb_erase(&old->node, &iommu->dma_list);
 }
 
+static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
+{
+	struct rb_node *n = rb_first(&iommu->dma_list);
+
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+		struct rb_node *p;
+		unsigned long npages = dma->size / pgsize;
+
+		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
+		if (!dma->bitmap) {
+			struct rb_node *p = rb_prev(n);
+
+			for (; p; p = rb_prev(p)) {
+				struct vfio_dma *dma = rb_entry(n,
+							struct vfio_dma, node);
+
+				kfree(dma->bitmap);
+				dma->bitmap = NULL;
+			}
+			return -ENOMEM;
+		}
+
+		if (RB_EMPTY_ROOT(&dma->pfn_list))
+			continue;
+
+		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
+			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
+							 node);
+
+			bitmap_set(dma->bitmap,
+					(vpfn->iova - dma->iova) / pgsize, 1);
+		}
+	}
+	return 0;
+}
+
+static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
+{
+	struct rb_node *n = rb_first(&iommu->dma_list);
+
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+
+		kfree(dma->bitmap);
+		dma->bitmap = NULL;
+	}
+}
+
 /*
  * Helper Functions for host iova-pfn list
  */
@@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 			vfio_unpin_page_external(dma, iova, do_accounting);
 			goto pin_unwind;
 		}
+
+		if (iommu->dirty_page_tracking) {
+			unsigned long pgshift =
+					 __ffs(vfio_pgsize_bitmap(iommu));
+
+			bitmap_set(dma->bitmap,
+				   (vpfn->iova - dma->iova) >> pgshift, 1);
+		}
 	}
 
 	ret = i;
@@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 	vfio_unmap_unpin(iommu, dma, true);
 	vfio_unlink_dma(iommu, dma);
 	put_task_struct(dma->task);
+	kfree(dma->bitmap);
 	kfree(dma);
 	iommu->dma_avail++;
 }
@@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	return bitmap;
 }
 
+static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
+				  size_t size, uint64_t pgsize,
+				  unsigned char __user *bitmap)
+{
+	struct vfio_dma *dma;
+	unsigned long pgshift = __ffs(pgsize);
+	unsigned int npages, bitmap_size;
+
+	dma = vfio_find_dma(iommu, iova, 1);
+
+	if (!dma)
+		return -EINVAL;
+
+	if (dma->iova != iova || dma->size != size)
+		return -EINVAL;
+
+	npages = dma->size >> pgshift;
+	bitmap_size = DIRTY_BITMAP_BYTES(npages);
+
+	/* mark all pages dirty if all pages are pinned and mapped. */
+	if (dma->iommu_mapped)
+		bitmap_set(dma->bitmap, 0, npages);
+
+	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
+{
+	uint64_t bsize;
+
+	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
+		return -EINVAL;
+
+	bsize = DIRTY_BITMAP_BYTES(npages);
+
+	if (bitmap_size < bsize)
+		return -EINVAL;
+
+	return 0;
+}
+
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			     struct vfio_iommu_type1_dma_unmap *unmap)
 {
@@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 		return copy_to_user((void __user *)arg, &unmap, minsz) ?
 			-EFAULT : 0;
+	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
+		struct vfio_iommu_type1_dirty_bitmap dirty;
+		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
+		int ret = 0;
+
+		if (!iommu->v2)
+			return -EACCES;
+
+		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
+				    flags);
+
+		if (copy_from_user(&dirty, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (dirty.argsz < minsz || dirty.flags & ~mask)
+			return -EINVAL;
+
+		/* only one flag should be set at a time */
+		if (__ffs(dirty.flags) != __fls(dirty.flags))
+			return -EINVAL;
+
+		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
+			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
+
+			mutex_lock(&iommu->lock);
+			if (!iommu->dirty_page_tracking) {
+				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
+				if (!ret)
+					iommu->dirty_page_tracking = true;
+			}
+			mutex_unlock(&iommu->lock);
+			return ret;
+		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
+			mutex_lock(&iommu->lock);
+			if (iommu->dirty_page_tracking) {
+				iommu->dirty_page_tracking = false;
+				vfio_dma_bitmap_free(iommu);
+			}
+			mutex_unlock(&iommu->lock);
+			return 0;
+		} else if (dirty.flags &
+				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
+			struct vfio_iommu_type1_dirty_bitmap_get range;
+			unsigned long pgshift;
+			size_t data_size = dirty.argsz - minsz;
+			uint64_t iommu_pgsize =
+					 1 << __ffs(vfio_pgsize_bitmap(iommu));
+
+			if (!data_size || data_size < sizeof(range))
+				return -EINVAL;
+
+			if (copy_from_user(&range, (void __user *)(arg + minsz),
+					   sizeof(range)))
+				return -EFAULT;
+
+			/* allow only min supported pgsize */
+			if (range.bitmap.pgsize != iommu_pgsize)
+				return -EINVAL;
+			if (range.iova & (iommu_pgsize - 1))
+				return -EINVAL;
+			if (!range.size || range.size & (iommu_pgsize - 1))
+				return -EINVAL;
+			if (range.iova + range.size < range.iova)
+				return -EINVAL;
+			if (!access_ok((void __user *)range.bitmap.data,
+				       range.bitmap.size))
+				return -EINVAL;
+
+			pgshift = __ffs(range.bitmap.pgsize);
+			ret = verify_bitmap_size(range.size >> pgshift,
+						 range.bitmap.size);
+			if (ret)
+				return ret;
+
+			mutex_lock(&iommu->lock);
+			if (iommu->dirty_page_tracking)
+				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
+					 range.size, range.bitmap.pgsize,
+				    (unsigned char __user *)range.bitmap.data);
+			else
+				ret = -EINVAL;
+			mutex_unlock(&iommu->lock);
+
+			return ret;
+		}
 	}
 
 	return -ENOTTY;
@@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
 
 	vaddr = dma->vaddr + offset;
 
-	if (write)
+	if (write) {
 		*copied = __copy_to_user((void __user *)vaddr, data,
 					 count) ? 0 : count;
-	else
+		if (*copied && iommu->dirty_page_tracking) {
+			unsigned long pgshift =
+				__ffs(vfio_pgsize_bitmap(iommu));
+
+			bitmap_set(dma->bitmap, offset >> pgshift,
+				   *copied >> pgshift);
+		}
+	} else
 		*copied = __copy_from_user(data, (void __user *)vaddr,
 					   count) ? 0 : count;
 	if (kthread)
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-18 19:41 [PATCH v14 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
                   ` (3 preceding siblings ...)
  2020-03-18 19:41 ` [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl " Kirti Wankhede
@ 2020-03-18 19:41 ` Kirti Wankhede
  2020-03-19  3:45   ` Alex Williamson
  2020-03-20  8:35   ` Yan Zhao
  2020-03-18 19:41 ` [PATCH v14 Kernel 6/7] vfio iommu: Adds flag to indicate dirty pages tracking capability support Kirti Wankhede
  2020-03-18 19:41 ` [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
  6 siblings, 2 replies; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-18 19:41 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

DMA mapped pages, including those pinned by mdev vendor drivers, might
get unpinned and unmapped while migration is active and device is still
running. For example, in pre-copy phase while guest driver could access
those pages, host device or vendor driver can dirty these mapped pages.
Such pages should be marked dirty so as to maintain memory consistency
for a user making use of dirty page tracking.

To get bitmap during unmap, user should set flag
VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
zeroed by user space application. Bitmap size and page size should be set
by user application.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 55 ++++++++++++++++++++++++++++++++++++++---
 include/uapi/linux/vfio.h       | 11 +++++++++
 2 files changed, 62 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index d6417fb02174..aa1ac30f7854 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -939,7 +939,8 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
-			     struct vfio_iommu_type1_dma_unmap *unmap)
+			     struct vfio_iommu_type1_dma_unmap *unmap,
+			     struct vfio_bitmap *bitmap)
 {
 	uint64_t mask;
 	struct vfio_dma *dma, *dma_last = NULL;
@@ -990,6 +991,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 	 * will be returned if these conditions are not met.  The v2 interface
 	 * will only return success and a size of zero if there were no
 	 * mappings within the range.
+	 *
+	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
+	 * must be for single mapping. Multiple mappings with this flag set is
+	 * not supported.
 	 */
 	if (iommu->v2) {
 		dma = vfio_find_dma(iommu, unmap->iova, 1);
@@ -997,6 +1002,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			ret = -EINVAL;
 			goto unlock;
 		}
+
+		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
+		    (dma->iova != unmap->iova || dma->size != unmap->size)) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+
 		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
 		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
 			ret = -EINVAL;
@@ -1014,6 +1026,12 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		if (dma->task->mm != current->mm)
 			break;
 
+		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
+		     iommu->dirty_page_tracking)
+			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
+					bitmap->pgsize,
+					(unsigned char __user *) bitmap->data);
+
 		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
 			struct vfio_iommu_type1_dma_unmap nb_unmap;
 
@@ -2369,17 +2387,46 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
 		struct vfio_iommu_type1_dma_unmap unmap;
-		long ret;
+		struct vfio_bitmap bitmap = { 0 };
+		int ret;
 
 		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
 
 		if (copy_from_user(&unmap, (void __user *)arg, minsz))
 			return -EFAULT;
 
-		if (unmap.argsz < minsz || unmap.flags)
+		if (unmap.argsz < minsz ||
+		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
 			return -EINVAL;
 
-		ret = vfio_dma_do_unmap(iommu, &unmap);
+		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
+			unsigned long pgshift;
+			uint64_t iommu_pgsize =
+					 1 << __ffs(vfio_pgsize_bitmap(iommu));
+
+			if (unmap.argsz < (minsz + sizeof(bitmap)))
+				return -EINVAL;
+
+			if (copy_from_user(&bitmap,
+					   (void __user *)(arg + minsz),
+					   sizeof(bitmap)))
+				return -EFAULT;
+
+			/* allow only min supported pgsize */
+			if (bitmap.pgsize != iommu_pgsize)
+				return -EINVAL;
+			if (!access_ok((void __user *)bitmap.data, bitmap.size))
+				return -EINVAL;
+
+			pgshift = __ffs(bitmap.pgsize);
+			ret = verify_bitmap_size(unmap.size >> pgshift,
+						 bitmap.size);
+			if (ret)
+				return ret;
+
+		}
+
+		ret = vfio_dma_do_unmap(iommu, &unmap, &bitmap);
 		if (ret)
 			return ret;
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 043e9eafb255..a704e5380f04 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1010,12 +1010,23 @@ struct vfio_bitmap {
  * field.  No guarantee is made to the user that arbitrary unmaps of iova
  * or size different from those used in the original mapping call will
  * succeed.
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap
+ * before unmapping IO virtual addresses. When this flag is set, user must
+ * provide data[] as structure vfio_bitmap. User must allocate memory to get
+ * bitmap, clear the bitmap memory by setting zero and must set size of
+ * allocated memory in vfio_bitmap.size field. One bit in bitmap
+ * represents per page, page of user provided page size in 'pgsize',
+ * consecutively starting from iova offset. Bit set indicates page at that
+ * offset from iova is dirty. Bitmap of pages in the range of unmapped size is
+ * returned in vfio_bitmap.data
  */
 struct vfio_iommu_type1_dma_unmap {
 	__u32	argsz;
 	__u32	flags;
+#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
 	__u64	iova;				/* IO virtual address */
 	__u64	size;				/* Size of mapping (bytes) */
+	__u8    data[];
 };
 
 #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v14 Kernel 6/7] vfio iommu: Adds flag to indicate dirty pages tracking capability support
  2020-03-18 19:41 [PATCH v14 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
                   ` (4 preceding siblings ...)
  2020-03-18 19:41 ` [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
@ 2020-03-18 19:41 ` Kirti Wankhede
  2020-03-18 19:41 ` [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
  6 siblings, 0 replies; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-18 19:41 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Flag VFIO_IOMMU_INFO_DIRTY_PGS in VFIO_IOMMU_GET_INFO indicates that driver
support dirty pages tracking.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 3 ++-
 include/uapi/linux/vfio.h       | 5 +++--
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index aa1ac30f7854..912629320719 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2340,7 +2340,8 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 			info.cap_offset = 0; /* output, no-recopy necessary */
 		}
 
-		info.flags = VFIO_IOMMU_INFO_PGSIZES;
+		info.flags = VFIO_IOMMU_INFO_PGSIZES |
+			     VFIO_IOMMU_INFO_DIRTY_PGS;
 
 		info.iova_pgsizes = vfio_pgsize_bitmap(iommu);
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index a704e5380f04..893ae7517735 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -947,8 +947,9 @@ struct vfio_device_ioeventfd {
 struct vfio_iommu_type1_info {
 	__u32	argsz;
 	__u32	flags;
-#define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
-#define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
+#define VFIO_IOMMU_INFO_PGSIZES   (1 << 0) /* supported page sizes info */
+#define VFIO_IOMMU_INFO_CAPS      (1 << 1) /* Info supports caps */
+#define VFIO_IOMMU_INFO_DIRTY_PGS (1 << 2) /* supports dirty page tracking */
 	__u64	iova_pgsizes;	/* Bitmap of supported page sizes */
 	__u32   cap_offset;	/* Offset within info struct of first cap */
 };
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-18 19:41 [PATCH v14 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
                   ` (5 preceding siblings ...)
  2020-03-18 19:41 ` [PATCH v14 Kernel 6/7] vfio iommu: Adds flag to indicate dirty pages tracking capability support Kirti Wankhede
@ 2020-03-18 19:41 ` Kirti Wankhede
  2020-03-19  3:45   ` Alex Williamson
  2020-03-19  6:24   ` Yan Zhao
  6 siblings, 2 replies; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-18 19:41 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Added a check such that only singleton IOMMU groups can pin pages.
From the point when vendor driver pins any pages, consider IOMMU group
dirty page scope to be limited to pinned pages.

To optimize to avoid walking list often, added flag
pinned_page_dirty_scope to indicate if all of the vfio_groups for each
vfio_domain in the domain_list dirty page scope is limited to pinned
pages. This flag is updated on first pinned pages request for that IOMMU
group and on attaching/detaching group.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio.c             | 13 +++++--
 drivers/vfio/vfio_iommu_type1.c | 77 +++++++++++++++++++++++++++++++++++++++--
 include/linux/vfio.h            |  4 ++-
 3 files changed, 87 insertions(+), 7 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 210fcf426643..311b5e4e111e 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -85,6 +85,7 @@ struct vfio_group {
 	atomic_t			opened;
 	wait_queue_head_t		container_q;
 	bool				noiommu;
+	unsigned int			dev_counter;
 	struct kvm			*kvm;
 	struct blocking_notifier_head	notifier;
 };
@@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
 
 	mutex_lock(&group->device_lock);
 	list_add(&device->group_next, &group->device_list);
+	group->dev_counter++;
 	mutex_unlock(&group->device_lock);
 
 	return device;
@@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
 	struct vfio_group *group = device->group;
 
 	list_del(&device->group_next);
+	group->dev_counter--;
 	mutex_unlock(&group->device_lock);
 
 	dev_set_drvdata(device->dev, NULL);
@@ -1933,6 +1936,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	if (!group)
 		return -ENODEV;
 
+	if (group->dev_counter > 1)
+		return -EINVAL;
+
 	ret = vfio_group_add_container_user(group);
 	if (ret)
 		goto err_pin_pages;
@@ -1940,7 +1946,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
-		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
+		ret = driver->ops->pin_pages(container->iommu_data,
+					     group->iommu_group, user_pfn,
 					     npage, prot, phys_pfn);
 	else
 		ret = -ENOTTY;
@@ -2038,8 +2045,8 @@ int vfio_group_pin_pages(struct vfio_group *group,
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
 		ret = driver->ops->pin_pages(container->iommu_data,
-					     user_iova_pfn, npage,
-					     prot, phys_pfn);
+					     group->iommu_group, user_iova_pfn,
+					     npage, prot, phys_pfn);
 	else
 		ret = -ENOTTY;
 
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 912629320719..deec09f4b0f6 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -72,6 +72,7 @@ struct vfio_iommu {
 	bool			v2;
 	bool			nesting;
 	bool			dirty_page_tracking;
+	bool			pinned_page_dirty_scope;
 };
 
 struct vfio_domain {
@@ -99,6 +100,7 @@ struct vfio_group {
 	struct iommu_group	*iommu_group;
 	struct list_head	next;
 	bool			mdev_group;	/* An mdev group */
+	bool			pinned_page_dirty_scope;
 };
 
 struct vfio_iova {
@@ -132,6 +134,10 @@ struct vfio_regions {
 static int put_pfn(unsigned long pfn, int prot);
 static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
+					       struct iommu_group *iommu_group);
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -556,11 +562,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
 }
 
 static int vfio_iommu_type1_pin_pages(void *iommu_data,
+				      struct iommu_group *iommu_group,
 				      unsigned long *user_pfn,
 				      int npage, int prot,
 				      unsigned long *phys_pfn)
 {
 	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *group;
 	int i, j, ret;
 	unsigned long remote_vaddr;
 	struct vfio_dma *dma;
@@ -630,8 +638,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 				   (vpfn->iova - dma->iova) >> pgshift, 1);
 		}
 	}
-
 	ret = i;
+
+	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
+	if (!group->pinned_page_dirty_scope) {
+		group->pinned_page_dirty_scope = true;
+		update_pinned_page_dirty_scope(iommu);
+	}
+
 	goto pin_done;
 
 pin_unwind:
@@ -913,8 +927,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
 	npages = dma->size >> pgshift;
 	bitmap_size = DIRTY_BITMAP_BYTES(npages);
 
-	/* mark all pages dirty if all pages are pinned and mapped. */
-	if (dma->iommu_mapped)
+	/*
+	 * mark all pages dirty if any IOMMU capable device is not able
+	 * to report dirty pages and all pages are pinned and mapped.
+	 */
+	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
 		bitmap_set(dma->bitmap, 0, npages);
 
 	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
@@ -1393,6 +1410,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
 	return NULL;
 }
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
+					       struct iommu_group *iommu_group)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group = NULL;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		group = find_iommu_group(domain, iommu_group);
+		if (group)
+			return group;
+	}
+
+	if (iommu->external_domain)
+		group = find_iommu_group(iommu->external_domain, iommu_group);
+
+	return group;
+}
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->pinned_page_dirty_scope) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	if (iommu->external_domain) {
+		domain = iommu->external_domain;
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->pinned_page_dirty_scope) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	iommu->pinned_page_dirty_scope = true;
+}
+
 static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
 				  phys_addr_t *base)
 {
@@ -1799,6 +1861,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 
 			list_add(&group->next,
 				 &iommu->external_domain->group_list);
+			group->pinned_page_dirty_scope = true;
+			if (!iommu->pinned_page_dirty_scope)
+				update_pinned_page_dirty_scope(iommu);
 			mutex_unlock(&iommu->lock);
 
 			return 0;
@@ -1921,6 +1986,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
+	iommu->pinned_page_dirty_scope = false;
 	mutex_unlock(&iommu->lock);
 	vfio_iommu_resv_free(&group_resv_regions);
 
@@ -2073,6 +2139,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
+	bool update_dirty_scope = false;
 	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
@@ -2080,6 +2147,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	if (iommu->external_domain) {
 		group = find_iommu_group(iommu->external_domain, iommu_group);
 		if (group) {
+			update_dirty_scope = !group->pinned_page_dirty_scope;
 			list_del(&group->next);
 			kfree(group);
 
@@ -2109,6 +2177,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			continue;
 
 		vfio_iommu_detach_group(domain, group);
+		update_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
 		kfree(group);
 		/*
@@ -2139,6 +2208,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		vfio_iommu_iova_free(&iova_copy);
 
 detach_group_done:
+	if (update_dirty_scope)
+		update_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 }
 
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index be2bd358b952..702e1d7b6e8b 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
 					struct iommu_group *group);
 	void		(*detach_group)(void *iommu_data,
 					struct iommu_group *group);
-	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
+	int		(*pin_pages)(void *iommu_data,
+				     struct iommu_group *group,
+				     unsigned long *user_pfn,
 				     int npage, int prot,
 				     unsigned long *phys_pfn);
 	int		(*unpin_pages)(void *iommu_data,
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-18 19:41 ` [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
@ 2020-03-19  1:17   ` Yan Zhao
  2020-03-19  3:49     ` Alex Williamson
  2020-03-23 11:45   ` Auger Eric
  1 sibling, 1 reply; 47+ messages in thread
From: Yan Zhao @ 2020-03-19  1:17 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: alex.williamson, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:
> - Defined MIGRATION region type and sub-type.
> 
> - Defined vfio_device_migration_info structure which will be placed at the
>   0th offset of migration region to get/set VFIO device related
>   information. Defined members of structure and usage on read/write access.
> 
> - Defined device states and state transition details.
> 
> - Defined sequence to be followed while saving and resuming VFIO device.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 227 insertions(+)
> 
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 9e843a147ead..d0021467af53 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
>  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
>  #define VFIO_REGION_TYPE_GFX                    (1)
>  #define VFIO_REGION_TYPE_CCW			(2)
> +#define VFIO_REGION_TYPE_MIGRATION              (3)
>  
>  /* sub-types for VFIO_REGION_TYPE_PCI_* */
>  
> @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
>  /* sub-types for VFIO_REGION_TYPE_CCW */
>  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
>  
> +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> +
> +/*
> + * The structure vfio_device_migration_info is placed at the 0th offset of
> + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> + * migration information. Field accesses from this structure are only supported
> + * at their native width and alignment. Otherwise, the result is undefined and
> + * vendor drivers should return an error.
> + *
> + * device_state: (read/write)
> + *      - The user application writes to this field to inform the vendor driver
> + *        about the device state to be transitioned to.
> + *      - The vendor driver should take the necessary actions to change the
> + *        device state. After successful transition to a given state, the
> + *        vendor driver should return success on write(device_state, state)
> + *        system call. If the device state transition fails, the vendor driver
> + *        should return an appropriate -errno for the fault condition.
> + *      - On the user application side, if the device state transition fails,
> + *	  that is, if write(device_state, state) returns an error, read
> + *	  device_state again to determine the current state of the device from
> + *	  the vendor driver.
> + *      - The vendor driver should return previous state of the device unless
> + *        the vendor driver has encountered an internal error, in which case
> + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> + *      - The user application must use the device reset ioctl to recover the
> + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> + *        indicated to be in a valid device state by reading device_state, the
> + *        user application may attempt to transition the device to any valid
> + *        state reachable from the current state or terminate itself.
> + *
> + *      device_state consists of 3 bits:
> + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> + *        it indicates the _STOP state. When the device state is changed to
> + *        _STOP, driver should stop the device before write() returns.
> + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> + *        driver should start gathering device state information that will be
> + *        provided to the VFIO user application to save the device's state.
> + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> + *        the driver should prepare to resume the device. Data provided through
> + *        the migration region should be used to resume the device.
> + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> + *      application should perform a read-modify-write operation on this
> + *      field when modifying the specified bits.
> + *
> + *  +------- _RESUMING
> + *  |+------ _SAVING
> + *  ||+----- _RUNNING
> + *  |||
> + *  000b => Device Stopped, not saving or resuming
> + *  001b => Device running, which is the default state
> + *  010b => Stop the device & save the device state, stop-and-copy state
> + *  011b => Device running and save the device state, pre-copy state
> + *  100b => Device stopped and the device state is resuming
> + *  101b => Invalid state
> + *  110b => Error state
> + *  111b => Invalid state
> + *
> + * State transitions:
> + *
> + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> + *                (100b)     (001b)     (011b)        (010b)       (000b)
> + * 0. Running or default state
> + *                             |
> + *
> + * 1. Normal Shutdown (optional)
> + *                             |------------------------------------->|
> + *
> + * 2. Save the state or suspend
> + *                             |------------------------->|---------->|
> + *
> + * 3. Save the state during live migration
> + *                             |----------->|------------>|---------->|
> + *
> + * 4. Resuming
> + *                  |<---------|
> + *
> + * 5. Resumed
> + *                  |--------->|
> + *
> + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> + * 1. During normal shutdown of the user application, the user application may
> + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> + *    transition is optional. The vendor driver must support this transition but
> + *    must not require it.
> + * 2. When the user application saves state or suspends the application, the
> + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> + *    device, save the device state and send it to the application through the
> + *    migration region. The sequence to be followed for such transition is given
> + *    below.
> + * 3. In live migration of user application, the state transitions from _RUNNING
> + *    to pre-copy, to stop-and-copy, and to _STOP.
> + *    On state transition from _RUNNING to pre-copy, the driver should start
> + *    gathering the device state while the application is still running and send
> + *    the device state data to application through the migration region.
> + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> + *    the device, save the device state and send it to the user application
> + *    through the migration region.
> + *    Vendor drivers must support the pre-copy state even for implementations
> + *    where no data is provided to the user before the stop-and-copy state. The
> + *    user must not be required to consume all migration data before the device
> + *    transitions to a new state, including the stop-and-copy state.
> + *    The sequence to be followed for above two transitions is given below.
> + * 4. To start the resuming phase, the device state should be transitioned from
> + *    the _RUNNING to the _RESUMING state.
> + *    In the _RESUMING state, the driver should use the device state data
> + *    received through the migration region to resume the device.
> + * 5. After providing saved device data to the driver, the application should
> + *    change the state from _RESUMING to _RUNNING.
> + *
> + * reserved:
> + *      Reads on this field return zero and writes are ignored.
> + *
> + * pending_bytes: (read only)
> + *      The number of pending bytes still to be migrated from the vendor driver.
> + *
> + * data_offset: (read only)
> + *      The user application should read data_offset in the migration region
> + *      from where the user application should read the device data during the
> + *      _SAVING state or write the device data during the _RESUMING state. See
> + *      below for details of sequence to be followed.
> + *
> + * data_size: (read/write)
> + *      The user application should read data_size to get the size in bytes of
> + *      the data copied in the migration region during the _SAVING state and
> + *      write the size in bytes of the data copied in the migration region
> + *      during the _RESUMING state.
> + *
> + * The format of the migration region is as follows:
> + *  ------------------------------------------------------------------
> + * |vfio_device_migration_info|    data section                      |
> + * |                          |     ///////////////////////////////  |
> + * ------------------------------------------------------------------
> + *   ^                              ^
> + *  offset 0-trapped part        data_offset
> + *
> + * The structure vfio_device_migration_info is always followed by the data
> + * section in the region, so data_offset will always be nonzero. The offset
> + * from where the data is copied is decided by the kernel driver. The data
> + * section can be trapped, mapped, or partitioned, depending on how the kernel
> + * driver defines the data section. The data section partition can be defined
> + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> + * page aligned, whereas initial section which contains the
> + * vfio_device_migration_info structure, might not end at the offset, which is
> + * page aligned. The user is not required to access through mmap regardless
> + * of the capabilities of the region mmap.
> + * The vendor driver should determine whether and how to partition the data
> + * section. The vendor driver should return data_offset accordingly.
> + *
> + * The sequence to be followed for the _SAVING|_RUNNING device state or
> + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
> + * follows:
> + * a. Read pending_bytes, indicating the start of a new iteration to get device
> + *    data. Repeated read on pending_bytes at this stage should have no side
> + *    effects.
> + *    If pending_bytes == 0, the user application should not iterate to get data
> + *    for that device.
> + *    If pending_bytes > 0, perform the following steps.
> + * b. Read data_offset, indicating that the vendor driver should make data
> + *    available through the data section. The vendor driver should return this
> + *    read operation only after data is available from (region + data_offset)
> + *    to (region + data_offset + data_size).
> + * c. Read data_size, which is the amount of data in bytes available through
> + *    the migration region.
> + *    Read on data_offset and data_size should return the offset and size of
> + *    the current buffer if the user application reads data_offset and
> + *    data_size more than once here.
If data region is mmaped, merely reading data_offset and data_size
cannot let kernel know what are correct values to return.
Consider to add a read operation which is trapped into kernel to let
kernel exactly know it needs to move to the next offset and update data_size
?

> + * d. Read data_size bytes of data from (region + data_offset) from the
> + *    migration region.
> + * e. Process the data.
> + * f. Read pending_bytes, which indicates that the data from the previous
> + *    iteration has been read. If pending_bytes > 0, go to step b.
> + *
> + * If an error occurs during the above sequence, the vendor driver can return
> + * an error code for next read() or write() operation, which will terminate the
> + * loop. The user application should then take the next necessary action, for
> + * example, failing migration or terminating the user application.
> + *
> + * The user application can transition from the _SAVING|_RUNNING
> + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> + * number of pending bytes. The user application should iterate in _SAVING
> + * (stop-and-copy) until pending_bytes is 0.
> + *
> + * The sequence to be followed while _RESUMING device state is as follows:
> + * While data for this device is available, repeat the following steps:
> + * a. Read data_offset from where the user application should write data.
> + * b. Write migration data starting at the migration region + data_offset for
> + *    the length determined by data_size from the migration source.
> + * c. Write data_size, which indicates to the vendor driver that data is
> + *    written in the migration region. Vendor driver should apply the
> + *    user-provided migration region data to the device resume state.
> + *
> + * For the user application, data is opaque. The user application should write
> + * data in the same order as the data is received and the data should be of
> + * same transaction size at the source.
> + */
> +
> +struct vfio_device_migration_info {
> +	__u32 device_state;         /* VFIO device state */
> +#define VFIO_DEVICE_STATE_STOP      (0)
> +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> +				     VFIO_DEVICE_STATE_SAVING |  \
> +				     VFIO_DEVICE_STATE_RESUMING)
> +
> +#define VFIO_DEVICE_STATE_VALID(state) \
> +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> +
> +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> +					      VFIO_DEVICE_STATE_RESUMING))
> +
> +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> +					     VFIO_DEVICE_STATE_RESUMING)
> +
> +	__u32 reserved;
> +	__u64 pending_bytes;
> +	__u64 data_offset;
> +	__u64 data_size;
> +} __attribute__((packed));
> +
>  /*
>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
>   * which allows direct access to non-MSIX registers which happened to be within
> -- 
> 2.7.0
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-18 19:41 ` [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl " Kirti Wankhede
@ 2020-03-19  3:06   ` Yan Zhao
  2020-03-19  4:01     ` Alex Williamson
  2020-03-19  3:45   ` Alex Williamson
  1 sibling, 1 reply; 47+ messages in thread
From: Yan Zhao @ 2020-03-19  3:06 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: alex.williamson, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, Mar 19, 2020 at 03:41:11AM +0800, Kirti Wankhede wrote:
> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> - Start dirty pages tracking while migration is active
> - Stop dirty pages tracking.
> - Get dirty pages bitmap. Its user space application's responsibility to
>   copy content of dirty pages from source to destination during migration.
> 
> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> structure. Bitmap size is calculated considering smallest supported page
> size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
> 
> Bitmap is populated for already pinned pages when bitmap is allocated for
> a vfio_dma with the smallest supported page size. Update bitmap from
> pinning functions when tracking is enabled. When user application queries
> bitmap, check if requested page size is same as page size used to
> populated bitmap. If it is equal, copy bitmap, but if not equal, return
> error.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 203 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 70aeab921d0f..d6417fb02174 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -71,6 +71,7 @@ struct vfio_iommu {
>  	unsigned int		dma_avail;
>  	bool			v2;
>  	bool			nesting;
> +	bool			dirty_page_tracking;
>  };
>  
>  struct vfio_domain {
> @@ -91,6 +92,7 @@ struct vfio_dma {
>  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>  	struct task_struct	*task;
>  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> +	unsigned long		*bitmap;
>  };
>  
>  struct vfio_group {
> @@ -125,7 +127,10 @@ struct vfio_regions {
>  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
>  					(!list_empty(&iommu->domain_list))
>  
> +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> +
>  static int put_pfn(unsigned long pfn, int prot);
> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>  
>  /*
>   * This code handles mapping and unmapping of user data buffers
> @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>  	rb_erase(&old->node, &iommu->dma_list);
>  }
>  
> +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +
> +	for (; n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +		struct rb_node *p;
> +		unsigned long npages = dma->size / pgsize;
> +
> +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> +		if (!dma->bitmap) {
> +			struct rb_node *p = rb_prev(n);
> +
> +			for (; p; p = rb_prev(p)) {
> +				struct vfio_dma *dma = rb_entry(n,
> +							struct vfio_dma, node);
> +
> +				kfree(dma->bitmap);
> +				dma->bitmap = NULL;
> +			}
> +			return -ENOMEM;
> +		}
> +
> +		if (RB_EMPTY_ROOT(&dma->pfn_list))
> +			continue;
> +
> +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
> +							 node);
> +
> +			bitmap_set(dma->bitmap,
> +					(vpfn->iova - dma->iova) / pgsize, 1);
> +		}
> +	}
> +	return 0;
> +}
> +
> +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +
> +	for (; n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> +		kfree(dma->bitmap);
> +		dma->bitmap = NULL;
> +	}
> +}
> +
>  /*
>   * Helper Functions for host iova-pfn list
>   */
> @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  			vfio_unpin_page_external(dma, iova, do_accounting);
>  			goto pin_unwind;
>  		}
> +
> +		if (iommu->dirty_page_tracking) {
> +			unsigned long pgshift =
> +					 __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			bitmap_set(dma->bitmap,
> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> +		}
>  	}
>  
>  	ret = i;
> @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>  	vfio_unmap_unpin(iommu, dma, true);
>  	vfio_unlink_dma(iommu, dma);
>  	put_task_struct(dma->task);
> +	kfree(dma->bitmap);
>  	kfree(dma);
>  	iommu->dma_avail++;
>  }
> @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>  	return bitmap;
>  }
>  
> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> +				  size_t size, uint64_t pgsize,
> +				  unsigned char __user *bitmap)
> +{
> +	struct vfio_dma *dma;
> +	unsigned long pgshift = __ffs(pgsize);
> +	unsigned int npages, bitmap_size;
> +
> +	dma = vfio_find_dma(iommu, iova, 1);
> +
> +	if (!dma)
> +		return -EINVAL;
> +
> +	if (dma->iova != iova || dma->size != size)
> +		return -EINVAL;
> +
looks this size is passed from user. how can it ensure size always
equals to dma->size ?

shouldn't we iterate dma tree to look for dirty for whole range if a
single dma cannot meet them all?

> +	npages = dma->size >> pgshift;
> +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
> +
> +	/* mark all pages dirty if all pages are pinned and mapped. */
> +	if (dma->iommu_mapped)
> +		bitmap_set(dma->bitmap, 0, npages);
> +
> +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +
> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> +{
> +	uint64_t bsize;
> +
> +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
> +		return -EINVAL;
> +
> +	bsize = DIRTY_BITMAP_BYTES(npages);
> +
> +	if (bitmap_size < bsize)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
>  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  			     struct vfio_iommu_type1_dma_unmap *unmap)
>  {
> @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>  
>  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>  			-EFAULT : 0;
> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> +		struct vfio_iommu_type1_dirty_bitmap dirty;
> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> +		int ret = 0;
> +
> +		if (!iommu->v2)
> +			return -EACCES;
> +
> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> +				    flags);
> +
> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> +			return -EINVAL;
> +
> +		/* only one flag should be set at a time */
> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> +			return -EINVAL;
> +
> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			mutex_lock(&iommu->lock);
> +			if (!iommu->dirty_page_tracking) {
> +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
> +				if (!ret)
> +					iommu->dirty_page_tracking = true;
> +			}
> +			mutex_unlock(&iommu->lock);
> +			return ret;
> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> +			mutex_lock(&iommu->lock);
> +			if (iommu->dirty_page_tracking) {
> +				iommu->dirty_page_tracking = false;
> +				vfio_dma_bitmap_free(iommu);
> +			}
> +			mutex_unlock(&iommu->lock);
> +			return 0;
> +		} else if (dirty.flags &
> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> +			struct vfio_iommu_type1_dirty_bitmap_get range;
> +			unsigned long pgshift;
> +			size_t data_size = dirty.argsz - minsz;
> +			uint64_t iommu_pgsize =
> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			if (!data_size || data_size < sizeof(range))
> +				return -EINVAL;
> +
> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> +					   sizeof(range)))
> +				return -EFAULT;
> +
> +			/* allow only min supported pgsize */
> +			if (range.bitmap.pgsize != iommu_pgsize)
> +				return -EINVAL;
> +			if (range.iova & (iommu_pgsize - 1))
> +				return -EINVAL;
> +			if (!range.size || range.size & (iommu_pgsize - 1))
> +				return -EINVAL;
> +			if (range.iova + range.size < range.iova)
> +				return -EINVAL;
> +			if (!access_ok((void __user *)range.bitmap.data,
> +				       range.bitmap.size))
> +				return -EINVAL;
> +
> +			pgshift = __ffs(range.bitmap.pgsize);
> +			ret = verify_bitmap_size(range.size >> pgshift,
> +						 range.bitmap.size);
> +			if (ret)
> +				return ret;
> +
> +			mutex_lock(&iommu->lock);
> +			if (iommu->dirty_page_tracking)
> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> +					 range.size, range.bitmap.pgsize,
> +				    (unsigned char __user *)range.bitmap.data);
> +			else
> +				ret = -EINVAL;
> +			mutex_unlock(&iommu->lock);
> +
> +			return ret;
> +		}
>  	}
>  
>  	return -ENOTTY;
> @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
>  
>  	vaddr = dma->vaddr + offset;
>  
> -	if (write)
> +	if (write) {
>  		*copied = __copy_to_user((void __user *)vaddr, data,
>  					 count) ? 0 : count;
> -	else
> +		if (*copied && iommu->dirty_page_tracking) {
> +			unsigned long pgshift =
> +				__ffs(vfio_pgsize_bitmap(iommu));
> +
> +			bitmap_set(dma->bitmap, offset >> pgshift,
> +				   *copied >> pgshift);
> +		}
> +	} else
>  		*copied = __copy_from_user(data, (void __user *)vaddr,
>  					   count) ? 0 : count;
>  	if (kthread)
> -- 
> 2.7.0
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking.
  2020-03-18 19:41 ` [PATCH v14 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking Kirti Wankhede
@ 2020-03-19  3:44   ` Alex Williamson
  0 siblings, 0 replies; 47+ messages in thread
From: Alex Williamson @ 2020-03-19  3:44 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Thu, 19 Mar 2020 01:11:10 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> IOMMU container maintains a list of all pages pinned by vfio_pin_pages API.
> All pages pinned by vendor driver through this API should be considered as
> dirty during migration. When container consists of IOMMU capable device and
> all pages are pinned and mapped, then all pages are marked dirty.
> Added support to start/stop pinned and unpinned pages tracking and to get
> bitmap of all dirtied pages for requested IO virtual address range.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  include/uapi/linux/vfio.h | 55 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 55 insertions(+)
> 
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index d0021467af53..043e9eafb255 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -995,6 +995,12 @@ struct vfio_iommu_type1_dma_map {
>  
>  #define VFIO_IOMMU_MAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 13)
>  
> +struct vfio_bitmap {
> +	__u64        pgsize;	/* page size for bitmap */
> +	__u64        size;	/* in bytes */
> +	__u64 __user *data;	/* one bit per page */
> +};
> +
>  /**
>   * VFIO_IOMMU_UNMAP_DMA - _IOWR(VFIO_TYPE, VFIO_BASE + 14,
>   *							struct vfio_dma_unmap)
> @@ -1021,6 +1027,55 @@ struct vfio_iommu_type1_dma_unmap {
>  #define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
>  #define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
>  
> +/**
> + * VFIO_IOMMU_DIRTY_PAGES - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
> + *                                     struct vfio_iommu_type1_dirty_bitmap)
> + * IOCTL is used for dirty pages tracking. Caller sets argsz, which is size of
> + * struct vfio_iommu_type1_dirty_bitmap. Caller set flag depend on which
> + * operation to perform, details as below:
> + *
> + * When IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_START set, indicates
> + * migration is active and IOMMU module should track pages which are pinned and
> + * could be dirtied by device.

"...should track" pages dirtied or potentially dirtied by devices.

As soon as we add support for Yan's DMA r/w the pinning requirement is
gone, besides pinning is an in-kernel implementation detail, the user
of this interface doesn't know or care which pages are pinned.

> + * Dirty pages are tracked until tracking is stopped by user application by
> + * setting VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag.
> + *
> + * When IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP set, indicates
> + * IOMMU should stop tracking pinned pages.

s/pinned/dirtied/

> + *
> + * When IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP flag set,
> + * IOCTL returns dirty pages bitmap for IOMMU container during migration for
> + * given IOVA range. User must provide data[] as the structure
> + * vfio_iommu_type1_dirty_bitmap_get through which user provides IOVA range and
> + * pgsize. This interface supports to get bitmap of smallest supported pgsize
> + * only and can be modified in future to get bitmap of specified pgsize.
> + * User must allocate memory for bitmap, zero the bitmap memory and set size
> + * of allocated memory in bitmap_size field. One bit is used to represent one
> + * page consecutively starting from iova offset. User should provide page size
> + * in 'pgsize'. Bit set in bitmap indicates page at that offset from iova is
> + * dirty. Caller must set argsz including size of structure
> + * vfio_iommu_type1_dirty_bitmap_get.
> + *
> + * Only one flag should be set at a time.

"Only one of the flags _START, _STOP, and _GET maybe be specified at a
time."  IOW, let's not presume what yet undefined flags may do.
Hopefully this addresses Dave's concern.

> + *
> + */
> +struct vfio_iommu_type1_dirty_bitmap {
> +	__u32        argsz;
> +	__u32        flags;
> +#define VFIO_IOMMU_DIRTY_PAGES_FLAG_START	(1 << 0)
> +#define VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP	(1 << 1)
> +#define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP	(1 << 2)
> +	__u8         data[];
> +};
> +
> +struct vfio_iommu_type1_dirty_bitmap_get {
> +	__u64              iova;	/* IO virtual address */
> +	__u64              size;	/* Size of iova range */
> +	struct vfio_bitmap bitmap;
> +};
> +
> +#define VFIO_IOMMU_DIRTY_PAGES             _IO(VFIO_TYPE, VFIO_BASE + 17)
> +
>  /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
>  
>  /*


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-18 19:41 ` [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl " Kirti Wankhede
  2020-03-19  3:06   ` Yan Zhao
@ 2020-03-19  3:45   ` Alex Williamson
  2020-03-19 14:52     ` Kirti Wankhede
  2020-03-19 18:57     ` Kirti Wankhede
  1 sibling, 2 replies; 47+ messages in thread
From: Alex Williamson @ 2020-03-19  3:45 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Thu, 19 Mar 2020 01:11:11 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> - Start dirty pages tracking while migration is active
> - Stop dirty pages tracking.
> - Get dirty pages bitmap. Its user space application's responsibility to
>   copy content of dirty pages from source to destination during migration.
> 
> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> structure. Bitmap size is calculated considering smallest supported page
> size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
> 
> Bitmap is populated for already pinned pages when bitmap is allocated for
> a vfio_dma with the smallest supported page size. Update bitmap from
> pinning functions when tracking is enabled. When user application queries
> bitmap, check if requested page size is same as page size used to
> populated bitmap. If it is equal, copy bitmap, but if not equal, return
> error.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 203 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 70aeab921d0f..d6417fb02174 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -71,6 +71,7 @@ struct vfio_iommu {
>  	unsigned int		dma_avail;
>  	bool			v2;
>  	bool			nesting;
> +	bool			dirty_page_tracking;
>  };
>  
>  struct vfio_domain {
> @@ -91,6 +92,7 @@ struct vfio_dma {
>  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>  	struct task_struct	*task;
>  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> +	unsigned long		*bitmap;

We've made the bitmap a width invariant u64 else, should be here as
well.

>  };
>  
>  struct vfio_group {
> @@ -125,7 +127,10 @@ struct vfio_regions {
>  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
>  					(!list_empty(&iommu->domain_list))
>  
> +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> +
>  static int put_pfn(unsigned long pfn, int prot);
> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>  
>  /*
>   * This code handles mapping and unmapping of user data buffers
> @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>  	rb_erase(&old->node, &iommu->dma_list);
>  }
>  
> +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +
> +	for (; n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +		struct rb_node *p;
> +		unsigned long npages = dma->size / pgsize;
> +
> +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> +		if (!dma->bitmap) {
> +			struct rb_node *p = rb_prev(n);
> +
> +			for (; p; p = rb_prev(p)) {
> +				struct vfio_dma *dma = rb_entry(n,
> +							struct vfio_dma, node);
> +
> +				kfree(dma->bitmap);
> +				dma->bitmap = NULL;
> +			}
> +			return -ENOMEM;
> +		}
> +
> +		if (RB_EMPTY_ROOT(&dma->pfn_list))
> +			continue;
> +
> +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
> +							 node);
> +
> +			bitmap_set(dma->bitmap,
> +					(vpfn->iova - dma->iova) / pgsize, 1);
> +		}
> +	}
> +	return 0;
> +}
> +
> +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +
> +	for (; n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> +		kfree(dma->bitmap);
> +		dma->bitmap = NULL;
> +	}
> +}
> +
>  /*
>   * Helper Functions for host iova-pfn list
>   */
> @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  			vfio_unpin_page_external(dma, iova, do_accounting);
>  			goto pin_unwind;
>  		}
> +
> +		if (iommu->dirty_page_tracking) {
> +			unsigned long pgshift =
> +					 __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			bitmap_set(dma->bitmap,
> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> +		}
>  	}
>  
>  	ret = i;
> @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>  	vfio_unmap_unpin(iommu, dma, true);
>  	vfio_unlink_dma(iommu, dma);
>  	put_task_struct(dma->task);
> +	kfree(dma->bitmap);
>  	kfree(dma);
>  	iommu->dma_avail++;
>  }
> @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>  	return bitmap;
>  }
>  
> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> +				  size_t size, uint64_t pgsize,
> +				  unsigned char __user *bitmap)

And here, why do callers cast to an unsigned char pointer when we're
going to cast to a void pointer anyway?  Should be a u64 __user pointer.

> +{
> +	struct vfio_dma *dma;
> +	unsigned long pgshift = __ffs(pgsize);
> +	unsigned int npages, bitmap_size;
> +
> +	dma = vfio_find_dma(iommu, iova, 1);
> +
> +	if (!dma)
> +		return -EINVAL;
> +
> +	if (dma->iova != iova || dma->size != size)
> +		return -EINVAL;
> +
> +	npages = dma->size >> pgshift;
> +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
> +
> +	/* mark all pages dirty if all pages are pinned and mapped. */
> +	if (dma->iommu_mapped)
> +		bitmap_set(dma->bitmap, 0, npages);
> +
> +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +
> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> +{
> +	uint64_t bsize;
> +
> +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)

As commented previously, how do we derive this UINT_MAX limitation?

> +		return -EINVAL;
> +
> +	bsize = DIRTY_BITMAP_BYTES(npages);
> +
> +	if (bitmap_size < bsize)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
>  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  			     struct vfio_iommu_type1_dma_unmap *unmap)
>  {

We didn't address that vfio_dma_do_map() needs to kvzalloc a bitmap for
any new vfio_dma created while iommu->dirty_page_tracking = true.

> @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>  
>  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>  			-EFAULT : 0;
> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> +		struct vfio_iommu_type1_dirty_bitmap dirty;
> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> +		int ret = 0;
> +
> +		if (!iommu->v2)
> +			return -EACCES;
> +
> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> +				    flags);
> +
> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> +			return -EINVAL;
> +
> +		/* only one flag should be set at a time */
> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> +			return -EINVAL;
> +
> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			mutex_lock(&iommu->lock);
> +			if (!iommu->dirty_page_tracking) {
> +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
> +				if (!ret)
> +					iommu->dirty_page_tracking = true;
> +			}
> +			mutex_unlock(&iommu->lock);
> +			return ret;
> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> +			mutex_lock(&iommu->lock);
> +			if (iommu->dirty_page_tracking) {
> +				iommu->dirty_page_tracking = false;
> +				vfio_dma_bitmap_free(iommu);
> +			}
> +			mutex_unlock(&iommu->lock);
> +			return 0;
> +		} else if (dirty.flags &
> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> +			struct vfio_iommu_type1_dirty_bitmap_get range;
> +			unsigned long pgshift;
> +			size_t data_size = dirty.argsz - minsz;
> +			uint64_t iommu_pgsize =
> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			if (!data_size || data_size < sizeof(range))
> +				return -EINVAL;
> +
> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> +					   sizeof(range)))
> +				return -EFAULT;
> +
> +			/* allow only min supported pgsize */
> +			if (range.bitmap.pgsize != iommu_pgsize)
> +				return -EINVAL;
> +			if (range.iova & (iommu_pgsize - 1))
> +				return -EINVAL;
> +			if (!range.size || range.size & (iommu_pgsize - 1))
> +				return -EINVAL;
> +			if (range.iova + range.size < range.iova)
> +				return -EINVAL;
> +			if (!access_ok((void __user *)range.bitmap.data,
> +				       range.bitmap.size))
> +				return -EINVAL;
> +
> +			pgshift = __ffs(range.bitmap.pgsize);
> +			ret = verify_bitmap_size(range.size >> pgshift,
> +						 range.bitmap.size);
> +			if (ret)
> +				return ret;
> +
> +			mutex_lock(&iommu->lock);
> +			if (iommu->dirty_page_tracking)
> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> +					 range.size, range.bitmap.pgsize,
> +				    (unsigned char __user *)range.bitmap.data);
> +			else
> +				ret = -EINVAL;
> +			mutex_unlock(&iommu->lock);
> +
> +			return ret;
> +		}
>  	}
>  
>  	return -ENOTTY;
> @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
>  
>  	vaddr = dma->vaddr + offset;
>  
> -	if (write)
> +	if (write) {
>  		*copied = __copy_to_user((void __user *)vaddr, data,
>  					 count) ? 0 : count;
> -	else
> +		if (*copied && iommu->dirty_page_tracking) {
> +			unsigned long pgshift =
> +				__ffs(vfio_pgsize_bitmap(iommu));
> +
> +			bitmap_set(dma->bitmap, offset >> pgshift,
> +				   *copied >> pgshift);
> +		}
> +	} else

Great, thanks for adding this!

>  		*copied = __copy_from_user(data, (void __user *)vaddr,
>  					   count) ? 0 : count;
>  	if (kthread)


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-18 19:41 ` [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
@ 2020-03-19  3:45   ` Alex Williamson
  2020-03-20  8:35   ` Yan Zhao
  1 sibling, 0 replies; 47+ messages in thread
From: Alex Williamson @ 2020-03-19  3:45 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Thu, 19 Mar 2020 01:11:12 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> DMA mapped pages, including those pinned by mdev vendor drivers, might
> get unpinned and unmapped while migration is active and device is still
> running. For example, in pre-copy phase while guest driver could access
> those pages, host device or vendor driver can dirty these mapped pages.
> Such pages should be marked dirty so as to maintain memory consistency
> for a user making use of dirty page tracking.
> 
> To get bitmap during unmap, user should set flag
> VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
> zeroed by user space application. Bitmap size and page size should be set
> by user application.

Looks good, but as mentioned we no longer require the user to zero the
bitmap.  It's mentioned in the commit log above and in the uapi
comment.  Thanks,

Alex

> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 55 ++++++++++++++++++++++++++++++++++++++---
>  include/uapi/linux/vfio.h       | 11 +++++++++
>  2 files changed, 62 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index d6417fb02174..aa1ac30f7854 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -939,7 +939,8 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
>  }
>  
>  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> -			     struct vfio_iommu_type1_dma_unmap *unmap)
> +			     struct vfio_iommu_type1_dma_unmap *unmap,
> +			     struct vfio_bitmap *bitmap)
>  {
>  	uint64_t mask;
>  	struct vfio_dma *dma, *dma_last = NULL;
> @@ -990,6 +991,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  	 * will be returned if these conditions are not met.  The v2 interface
>  	 * will only return success and a size of zero if there were no
>  	 * mappings within the range.
> +	 *
> +	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
> +	 * must be for single mapping. Multiple mappings with this flag set is
> +	 * not supported.
>  	 */
>  	if (iommu->v2) {
>  		dma = vfio_find_dma(iommu, unmap->iova, 1);
> @@ -997,6 +1002,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  			ret = -EINVAL;
>  			goto unlock;
>  		}
> +
> +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> +		    (dma->iova != unmap->iova || dma->size != unmap->size)) {
> +			ret = -EINVAL;
> +			goto unlock;
> +		}
> +
>  		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
>  		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
>  			ret = -EINVAL;
> @@ -1014,6 +1026,12 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  		if (dma->task->mm != current->mm)
>  			break;
>  
> +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> +		     iommu->dirty_page_tracking)
> +			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
> +					bitmap->pgsize,
> +					(unsigned char __user *) bitmap->data);
> +
>  		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
>  			struct vfio_iommu_type1_dma_unmap nb_unmap;
>  
> @@ -2369,17 +2387,46 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>  
>  	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
>  		struct vfio_iommu_type1_dma_unmap unmap;
> -		long ret;
> +		struct vfio_bitmap bitmap = { 0 };
> +		int ret;
>  
>  		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
>  
>  		if (copy_from_user(&unmap, (void __user *)arg, minsz))
>  			return -EFAULT;
>  
> -		if (unmap.argsz < minsz || unmap.flags)
> +		if (unmap.argsz < minsz ||
> +		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
>  			return -EINVAL;
>  
> -		ret = vfio_dma_do_unmap(iommu, &unmap);
> +		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
> +			unsigned long pgshift;
> +			uint64_t iommu_pgsize =
> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			if (unmap.argsz < (minsz + sizeof(bitmap)))
> +				return -EINVAL;
> +
> +			if (copy_from_user(&bitmap,
> +					   (void __user *)(arg + minsz),
> +					   sizeof(bitmap)))
> +				return -EFAULT;
> +
> +			/* allow only min supported pgsize */
> +			if (bitmap.pgsize != iommu_pgsize)
> +				return -EINVAL;
> +			if (!access_ok((void __user *)bitmap.data, bitmap.size))
> +				return -EINVAL;
> +
> +			pgshift = __ffs(bitmap.pgsize);
> +			ret = verify_bitmap_size(unmap.size >> pgshift,
> +						 bitmap.size);
> +			if (ret)
> +				return ret;
> +
> +		}
> +
> +		ret = vfio_dma_do_unmap(iommu, &unmap, &bitmap);
>  		if (ret)
>  			return ret;
>  
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 043e9eafb255..a704e5380f04 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -1010,12 +1010,23 @@ struct vfio_bitmap {
>   * field.  No guarantee is made to the user that arbitrary unmaps of iova
>   * or size different from those used in the original mapping call will
>   * succeed.
> + * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap
> + * before unmapping IO virtual addresses. When this flag is set, user must
> + * provide data[] as structure vfio_bitmap. User must allocate memory to get
> + * bitmap, clear the bitmap memory by setting zero and must set size of
> + * allocated memory in vfio_bitmap.size field. One bit in bitmap
> + * represents per page, page of user provided page size in 'pgsize',
> + * consecutively starting from iova offset. Bit set indicates page at that
> + * offset from iova is dirty. Bitmap of pages in the range of unmapped size is
> + * returned in vfio_bitmap.data
>   */
>  struct vfio_iommu_type1_dma_unmap {
>  	__u32	argsz;
>  	__u32	flags;
> +#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
>  	__u64	iova;				/* IO virtual address */
>  	__u64	size;				/* Size of mapping (bytes) */
> +	__u8    data[];
>  };
>  
>  #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-18 19:41 ` [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
@ 2020-03-19  3:45   ` Alex Williamson
  2020-03-19  6:24   ` Yan Zhao
  1 sibling, 0 replies; 47+ messages in thread
From: Alex Williamson @ 2020-03-19  3:45 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Thu, 19 Mar 2020 01:11:14 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Added a check such that only singleton IOMMU groups can pin pages.
> From the point when vendor driver pins any pages, consider IOMMU group
> dirty page scope to be limited to pinned pages.
> 
> To optimize to avoid walking list often, added flag
> pinned_page_dirty_scope to indicate if all of the vfio_groups for each
> vfio_domain in the domain_list dirty page scope is limited to pinned
> pages. This flag is updated on first pinned pages request for that IOMMU
> group and on attaching/detaching group.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio.c             | 13 +++++--
>  drivers/vfio/vfio_iommu_type1.c | 77 +++++++++++++++++++++++++++++++++++++++--
>  include/linux/vfio.h            |  4 ++-
>  3 files changed, 87 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index 210fcf426643..311b5e4e111e 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -85,6 +85,7 @@ struct vfio_group {
>  	atomic_t			opened;
>  	wait_queue_head_t		container_q;
>  	bool				noiommu;
> +	unsigned int			dev_counter;
>  	struct kvm			*kvm;
>  	struct blocking_notifier_head	notifier;
>  };
> @@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
>  
>  	mutex_lock(&group->device_lock);
>  	list_add(&device->group_next, &group->device_list);
> +	group->dev_counter++;
>  	mutex_unlock(&group->device_lock);
>  
>  	return device;
> @@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
>  	struct vfio_group *group = device->group;
>  
>  	list_del(&device->group_next);
> +	group->dev_counter--;
>  	mutex_unlock(&group->device_lock);
>  
>  	dev_set_drvdata(device->dev, NULL);
> @@ -1933,6 +1936,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>  	if (!group)
>  		return -ENODEV;
>  
> +	if (group->dev_counter > 1)
> +		return -EINVAL;
> +
>  	ret = vfio_group_add_container_user(group);
>  	if (ret)
>  		goto err_pin_pages;
> @@ -1940,7 +1946,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>  	container = group->container;
>  	driver = container->iommu_driver;
>  	if (likely(driver && driver->ops->pin_pages))
> -		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
> +		ret = driver->ops->pin_pages(container->iommu_data,
> +					     group->iommu_group, user_pfn,
>  					     npage, prot, phys_pfn);
>  	else
>  		ret = -ENOTTY;
> @@ -2038,8 +2045,8 @@ int vfio_group_pin_pages(struct vfio_group *group,
>  	driver = container->iommu_driver;
>  	if (likely(driver && driver->ops->pin_pages))
>  		ret = driver->ops->pin_pages(container->iommu_data,
> -					     user_iova_pfn, npage,
> -					     prot, phys_pfn);
> +					     group->iommu_group, user_iova_pfn,
> +					     npage, prot, phys_pfn);
>  	else
>  		ret = -ENOTTY;
>  
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 912629320719..deec09f4b0f6 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -72,6 +72,7 @@ struct vfio_iommu {
>  	bool			v2;
>  	bool			nesting;
>  	bool			dirty_page_tracking;
> +	bool			pinned_page_dirty_scope;
>  };
>  
>  struct vfio_domain {
> @@ -99,6 +100,7 @@ struct vfio_group {
>  	struct iommu_group	*iommu_group;
>  	struct list_head	next;
>  	bool			mdev_group;	/* An mdev group */
> +	bool			pinned_page_dirty_scope;
>  };
>  
>  struct vfio_iova {
> @@ -132,6 +134,10 @@ struct vfio_regions {
>  static int put_pfn(unsigned long pfn, int prot);
>  static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>  
> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> +					       struct iommu_group *iommu_group);
> +
> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
>  /*
>   * This code handles mapping and unmapping of user data buffers
>   * into DMA'ble space using the IOMMU
> @@ -556,11 +562,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
>  }
>  
>  static int vfio_iommu_type1_pin_pages(void *iommu_data,
> +				      struct iommu_group *iommu_group,
>  				      unsigned long *user_pfn,
>  				      int npage, int prot,
>  				      unsigned long *phys_pfn)
>  {
>  	struct vfio_iommu *iommu = iommu_data;
> +	struct vfio_group *group;
>  	int i, j, ret;
>  	unsigned long remote_vaddr;
>  	struct vfio_dma *dma;
> @@ -630,8 +638,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  				   (vpfn->iova - dma->iova) >> pgshift, 1);
>  		}
>  	}
> -
>  	ret = i;
> +
> +	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
> +	if (!group->pinned_page_dirty_scope) {
> +		group->pinned_page_dirty_scope = true;
> +		update_pinned_page_dirty_scope(iommu);
> +	}
> +
>  	goto pin_done;
>  
>  pin_unwind:
> @@ -913,8 +927,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
>  	npages = dma->size >> pgshift;
>  	bitmap_size = DIRTY_BITMAP_BYTES(npages);
>  
> -	/* mark all pages dirty if all pages are pinned and mapped. */
> -	if (dma->iommu_mapped)
> +	/*
> +	 * mark all pages dirty if any IOMMU capable device is not able
> +	 * to report dirty pages and all pages are pinned and mapped.
> +	 */
> +	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
>  		bitmap_set(dma->bitmap, 0, npages);
>  
>  	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> @@ -1393,6 +1410,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
>  	return NULL;
>  }
>  
> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> +					       struct iommu_group *iommu_group)
> +{
> +	struct vfio_domain *domain;
> +	struct vfio_group *group = NULL;
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> +		group = find_iommu_group(domain, iommu_group);
> +		if (group)
> +			return group;
> +	}
> +
> +	if (iommu->external_domain)
> +		group = find_iommu_group(iommu->external_domain, iommu_group);
> +
> +	return group;
> +}
> +
> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
> +{
> +	struct vfio_domain *domain;
> +	struct vfio_group *group;
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> +		list_for_each_entry(group, &domain->group_list, next) {
> +			if (!group->pinned_page_dirty_scope) {
> +				iommu->pinned_page_dirty_scope = false;
> +				return;
> +			}
> +		}
> +	}
> +
> +	if (iommu->external_domain) {
> +		domain = iommu->external_domain;
> +		list_for_each_entry(group, &domain->group_list, next) {
> +			if (!group->pinned_page_dirty_scope) {
> +				iommu->pinned_page_dirty_scope = false;
> +				return;
> +			}
> +		}
> +	}
> +
> +	iommu->pinned_page_dirty_scope = true;
> +}
> +
>  static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
>  				  phys_addr_t *base)
>  {
> @@ -1799,6 +1861,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  
>  			list_add(&group->next,
>  				 &iommu->external_domain->group_list);
> +			group->pinned_page_dirty_scope = true;
> +			if (!iommu->pinned_page_dirty_scope)
> +				update_pinned_page_dirty_scope(iommu);

A comment above this would be good since this wasn't entirely obvious,
maybe:

/*
 * Non-iommu backed group cannot dirty memory directly,
 * it can only use interfaces that provide dirty tracking.
 * The iommu scope can only be promoted with the addition
 * of a dirty tracking group.
 */

>  			mutex_unlock(&iommu->lock);
>  
>  			return 0;
> @@ -1921,6 +1986,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  done:
>  	/* Delete the old one and insert new iova list */
>  	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
> +	iommu->pinned_page_dirty_scope = false;

Likewise here:

/*
 * An iommu backed group can dirty memory directly and therefore
 * demotes the iommu scope until it declares itself dirty tracking
 * capable via the page pinning interface.
 */

>  	mutex_unlock(&iommu->lock);
>  	vfio_iommu_resv_free(&group_resv_regions);
>  
> @@ -2073,6 +2139,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  	struct vfio_iommu *iommu = iommu_data;
>  	struct vfio_domain *domain;
>  	struct vfio_group *group;
> +	bool update_dirty_scope = false;
>  	LIST_HEAD(iova_copy);
>  
>  	mutex_lock(&iommu->lock);
> @@ -2080,6 +2147,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  	if (iommu->external_domain) {
>  		group = find_iommu_group(iommu->external_domain, iommu_group);
>  		if (group) {
> +			update_dirty_scope = !group->pinned_page_dirty_scope;
>  			list_del(&group->next);
>  			kfree(group);
>  
> @@ -2109,6 +2177,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  			continue;
>  
>  		vfio_iommu_detach_group(domain, group);
> +		update_dirty_scope = !group->pinned_page_dirty_scope;
>  		list_del(&group->next);
>  		kfree(group);
>  		/*
> @@ -2139,6 +2208,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  		vfio_iommu_iova_free(&iova_copy);
>  
>  detach_group_done:
> +	if (update_dirty_scope)
> +		update_pinned_page_dirty_scope(iommu);

And one more

/*
 * Removal of a group without dirty tracking may
 * allow the iommu scope to be promoted.
 */

>  	mutex_unlock(&iommu->lock);
>  }
>  
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index be2bd358b952..702e1d7b6e8b 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
>  					struct iommu_group *group);
>  	void		(*detach_group)(void *iommu_data,
>  					struct iommu_group *group);
> -	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
> +	int		(*pin_pages)(void *iommu_data,
> +				     struct iommu_group *group,
> +				     unsigned long *user_pfn,
>  				     int npage, int prot,
>  				     unsigned long *phys_pfn);
>  	int		(*unpin_pages)(void *iommu_data,


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-19  1:17   ` Yan Zhao
@ 2020-03-19  3:49     ` Alex Williamson
  2020-03-19  5:05       ` Yan Zhao
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-19  3:49 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Wed, 18 Mar 2020 21:17:03 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:
> > - Defined MIGRATION region type and sub-type.
> > 
> > - Defined vfio_device_migration_info structure which will be placed at the
> >   0th offset of migration region to get/set VFIO device related
> >   information. Defined members of structure and usage on read/write access.
> > 
> > - Defined device states and state transition details.
> > 
> > - Defined sequence to be followed while saving and resuming VFIO device.
> > 
> > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > ---
> >  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 227 insertions(+)
> > 
> > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > index 9e843a147ead..d0021467af53 100644
> > --- a/include/uapi/linux/vfio.h
> > +++ b/include/uapi/linux/vfio.h
> > @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
> >  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
> >  #define VFIO_REGION_TYPE_GFX                    (1)
> >  #define VFIO_REGION_TYPE_CCW			(2)
> > +#define VFIO_REGION_TYPE_MIGRATION              (3)
> >  
> >  /* sub-types for VFIO_REGION_TYPE_PCI_* */
> >  
> > @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
> >  /* sub-types for VFIO_REGION_TYPE_CCW */
> >  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
> >  
> > +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> > +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> > +
> > +/*
> > + * The structure vfio_device_migration_info is placed at the 0th offset of
> > + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> > + * migration information. Field accesses from this structure are only supported
> > + * at their native width and alignment. Otherwise, the result is undefined and
> > + * vendor drivers should return an error.
> > + *
> > + * device_state: (read/write)
> > + *      - The user application writes to this field to inform the vendor driver
> > + *        about the device state to be transitioned to.
> > + *      - The vendor driver should take the necessary actions to change the
> > + *        device state. After successful transition to a given state, the
> > + *        vendor driver should return success on write(device_state, state)
> > + *        system call. If the device state transition fails, the vendor driver
> > + *        should return an appropriate -errno for the fault condition.
> > + *      - On the user application side, if the device state transition fails,
> > + *	  that is, if write(device_state, state) returns an error, read
> > + *	  device_state again to determine the current state of the device from
> > + *	  the vendor driver.
> > + *      - The vendor driver should return previous state of the device unless
> > + *        the vendor driver has encountered an internal error, in which case
> > + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> > + *      - The user application must use the device reset ioctl to recover the
> > + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> > + *        indicated to be in a valid device state by reading device_state, the
> > + *        user application may attempt to transition the device to any valid
> > + *        state reachable from the current state or terminate itself.
> > + *
> > + *      device_state consists of 3 bits:
> > + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> > + *        it indicates the _STOP state. When the device state is changed to
> > + *        _STOP, driver should stop the device before write() returns.
> > + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> > + *        driver should start gathering device state information that will be
> > + *        provided to the VFIO user application to save the device's state.
> > + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> > + *        the driver should prepare to resume the device. Data provided through
> > + *        the migration region should be used to resume the device.
> > + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> > + *      application should perform a read-modify-write operation on this
> > + *      field when modifying the specified bits.
> > + *
> > + *  +------- _RESUMING
> > + *  |+------ _SAVING
> > + *  ||+----- _RUNNING
> > + *  |||
> > + *  000b => Device Stopped, not saving or resuming
> > + *  001b => Device running, which is the default state
> > + *  010b => Stop the device & save the device state, stop-and-copy state
> > + *  011b => Device running and save the device state, pre-copy state
> > + *  100b => Device stopped and the device state is resuming
> > + *  101b => Invalid state
> > + *  110b => Error state
> > + *  111b => Invalid state
> > + *
> > + * State transitions:
> > + *
> > + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> > + *                (100b)     (001b)     (011b)        (010b)       (000b)
> > + * 0. Running or default state
> > + *                             |
> > + *
> > + * 1. Normal Shutdown (optional)
> > + *                             |------------------------------------->|
> > + *
> > + * 2. Save the state or suspend
> > + *                             |------------------------->|---------->|
> > + *
> > + * 3. Save the state during live migration
> > + *                             |----------->|------------>|---------->|
> > + *
> > + * 4. Resuming
> > + *                  |<---------|
> > + *
> > + * 5. Resumed
> > + *                  |--------->|
> > + *
> > + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> > + * 1. During normal shutdown of the user application, the user application may
> > + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> > + *    transition is optional. The vendor driver must support this transition but
> > + *    must not require it.
> > + * 2. When the user application saves state or suspends the application, the
> > + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> > + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> > + *    device, save the device state and send it to the application through the
> > + *    migration region. The sequence to be followed for such transition is given
> > + *    below.
> > + * 3. In live migration of user application, the state transitions from _RUNNING
> > + *    to pre-copy, to stop-and-copy, and to _STOP.
> > + *    On state transition from _RUNNING to pre-copy, the driver should start
> > + *    gathering the device state while the application is still running and send
> > + *    the device state data to application through the migration region.
> > + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> > + *    the device, save the device state and send it to the user application
> > + *    through the migration region.
> > + *    Vendor drivers must support the pre-copy state even for implementations
> > + *    where no data is provided to the user before the stop-and-copy state. The
> > + *    user must not be required to consume all migration data before the device
> > + *    transitions to a new state, including the stop-and-copy state.
> > + *    The sequence to be followed for above two transitions is given below.
> > + * 4. To start the resuming phase, the device state should be transitioned from
> > + *    the _RUNNING to the _RESUMING state.
> > + *    In the _RESUMING state, the driver should use the device state data
> > + *    received through the migration region to resume the device.
> > + * 5. After providing saved device data to the driver, the application should
> > + *    change the state from _RESUMING to _RUNNING.
> > + *
> > + * reserved:
> > + *      Reads on this field return zero and writes are ignored.
> > + *
> > + * pending_bytes: (read only)
> > + *      The number of pending bytes still to be migrated from the vendor driver.
> > + *
> > + * data_offset: (read only)
> > + *      The user application should read data_offset in the migration region
> > + *      from where the user application should read the device data during the
> > + *      _SAVING state or write the device data during the _RESUMING state. See
> > + *      below for details of sequence to be followed.
> > + *
> > + * data_size: (read/write)
> > + *      The user application should read data_size to get the size in bytes of
> > + *      the data copied in the migration region during the _SAVING state and
> > + *      write the size in bytes of the data copied in the migration region
> > + *      during the _RESUMING state.
> > + *
> > + * The format of the migration region is as follows:
> > + *  ------------------------------------------------------------------
> > + * |vfio_device_migration_info|    data section                      |
> > + * |                          |     ///////////////////////////////  |
> > + * ------------------------------------------------------------------
> > + *   ^                              ^
> > + *  offset 0-trapped part        data_offset
> > + *
> > + * The structure vfio_device_migration_info is always followed by the data
> > + * section in the region, so data_offset will always be nonzero. The offset
> > + * from where the data is copied is decided by the kernel driver. The data
> > + * section can be trapped, mapped, or partitioned, depending on how the kernel
> > + * driver defines the data section. The data section partition can be defined
> > + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> > + * page aligned, whereas initial section which contains the
> > + * vfio_device_migration_info structure, might not end at the offset, which is
> > + * page aligned. The user is not required to access through mmap regardless
> > + * of the capabilities of the region mmap.
> > + * The vendor driver should determine whether and how to partition the data
> > + * section. The vendor driver should return data_offset accordingly.
> > + *
> > + * The sequence to be followed for the _SAVING|_RUNNING device state or
> > + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
> > + * follows:
> > + * a. Read pending_bytes, indicating the start of a new iteration to get device
> > + *    data. Repeated read on pending_bytes at this stage should have no side
> > + *    effects.
> > + *    If pending_bytes == 0, the user application should not iterate to get data
> > + *    for that device.
> > + *    If pending_bytes > 0, perform the following steps.
> > + * b. Read data_offset, indicating that the vendor driver should make data
> > + *    available through the data section. The vendor driver should return this
> > + *    read operation only after data is available from (region + data_offset)
> > + *    to (region + data_offset + data_size).
> > + * c. Read data_size, which is the amount of data in bytes available through
> > + *    the migration region.
> > + *    Read on data_offset and data_size should return the offset and size of
> > + *    the current buffer if the user application reads data_offset and
> > + *    data_size more than once here.  
> If data region is mmaped, merely reading data_offset and data_size
> cannot let kernel know what are correct values to return.
> Consider to add a read operation which is trapped into kernel to let
> kernel exactly know it needs to move to the next offset and update data_size
> ?

Both operations b. and c. above are to trapped registers, operation d.
below may potentially be to an mmap'd area, which is why we have step
f. which indicates to the vendor driver that the data has been
consumed.  Does that address your concern?  Thanks,

Alex

> > + * d. Read data_size bytes of data from (region + data_offset) from the
> > + *    migration region.
> > + * e. Process the data.
> > + * f. Read pending_bytes, which indicates that the data from the previous
> > + *    iteration has been read. If pending_bytes > 0, go to step b.
> > + *
> > + * If an error occurs during the above sequence, the vendor driver can return
> > + * an error code for next read() or write() operation, which will terminate the
> > + * loop. The user application should then take the next necessary action, for
> > + * example, failing migration or terminating the user application.
> > + *
> > + * The user application can transition from the _SAVING|_RUNNING
> > + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> > + * number of pending bytes. The user application should iterate in _SAVING
> > + * (stop-and-copy) until pending_bytes is 0.
> > + *
> > + * The sequence to be followed while _RESUMING device state is as follows:
> > + * While data for this device is available, repeat the following steps:
> > + * a. Read data_offset from where the user application should write data.
> > + * b. Write migration data starting at the migration region + data_offset for
> > + *    the length determined by data_size from the migration source.
> > + * c. Write data_size, which indicates to the vendor driver that data is
> > + *    written in the migration region. Vendor driver should apply the
> > + *    user-provided migration region data to the device resume state.
> > + *
> > + * For the user application, data is opaque. The user application should write
> > + * data in the same order as the data is received and the data should be of
> > + * same transaction size at the source.
> > + */
> > +
> > +struct vfio_device_migration_info {
> > +	__u32 device_state;         /* VFIO device state */
> > +#define VFIO_DEVICE_STATE_STOP      (0)
> > +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> > +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> > +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> > +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> > +				     VFIO_DEVICE_STATE_SAVING |  \
> > +				     VFIO_DEVICE_STATE_RESUMING)
> > +
> > +#define VFIO_DEVICE_STATE_VALID(state) \
> > +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> > +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> > +
> > +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> > +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> > +					      VFIO_DEVICE_STATE_RESUMING))
> > +
> > +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> > +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> > +					     VFIO_DEVICE_STATE_RESUMING)
> > +
> > +	__u32 reserved;
> > +	__u64 pending_bytes;
> > +	__u64 data_offset;
> > +	__u64 data_size;
> > +} __attribute__((packed));
> > +
> >  /*
> >   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> >   * which allows direct access to non-MSIX registers which happened to be within
> > -- 
> > 2.7.0
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19  3:06   ` Yan Zhao
@ 2020-03-19  4:01     ` Alex Williamson
  2020-03-19  4:15       ` Yan Zhao
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-19  4:01 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Wed, 18 Mar 2020 23:06:39 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Thu, Mar 19, 2020 at 03:41:11AM +0800, Kirti Wankhede wrote:
> > VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> > - Start dirty pages tracking while migration is active
> > - Stop dirty pages tracking.
> > - Get dirty pages bitmap. Its user space application's responsibility to
> >   copy content of dirty pages from source to destination during migration.
> > 
> > To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> > structure. Bitmap size is calculated considering smallest supported page
> > size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
> > 
> > Bitmap is populated for already pinned pages when bitmap is allocated for
> > a vfio_dma with the smallest supported page size. Update bitmap from
> > pinning functions when tracking is enabled. When user application queries
> > bitmap, check if requested page size is same as page size used to
> > populated bitmap. If it is equal, copy bitmap, but if not equal, return
> > error.
> > 
> > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > ---
> >  drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 203 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > index 70aeab921d0f..d6417fb02174 100644
> > --- a/drivers/vfio/vfio_iommu_type1.c
> > +++ b/drivers/vfio/vfio_iommu_type1.c
> > @@ -71,6 +71,7 @@ struct vfio_iommu {
> >  	unsigned int		dma_avail;
> >  	bool			v2;
> >  	bool			nesting;
> > +	bool			dirty_page_tracking;
> >  };
> >  
> >  struct vfio_domain {
> > @@ -91,6 +92,7 @@ struct vfio_dma {
> >  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
> >  	struct task_struct	*task;
> >  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> > +	unsigned long		*bitmap;
> >  };
> >  
> >  struct vfio_group {
> > @@ -125,7 +127,10 @@ struct vfio_regions {
> >  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
> >  					(!list_empty(&iommu->domain_list))
> >  
> > +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> > +
> >  static int put_pfn(unsigned long pfn, int prot);
> > +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> >  
> >  /*
> >   * This code handles mapping and unmapping of user data buffers
> > @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> >  	rb_erase(&old->node, &iommu->dma_list);
> >  }
> >  
> > +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
> > +{
> > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > +
> > +	for (; n; n = rb_next(n)) {
> > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > +		struct rb_node *p;
> > +		unsigned long npages = dma->size / pgsize;
> > +
> > +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> > +		if (!dma->bitmap) {
> > +			struct rb_node *p = rb_prev(n);
> > +
> > +			for (; p; p = rb_prev(p)) {
> > +				struct vfio_dma *dma = rb_entry(n,
> > +							struct vfio_dma, node);
> > +
> > +				kfree(dma->bitmap);
> > +				dma->bitmap = NULL;
> > +			}
> > +			return -ENOMEM;
> > +		}
> > +
> > +		if (RB_EMPTY_ROOT(&dma->pfn_list))
> > +			continue;
> > +
> > +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> > +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
> > +							 node);
> > +
> > +			bitmap_set(dma->bitmap,
> > +					(vpfn->iova - dma->iova) / pgsize, 1);
> > +		}
> > +	}
> > +	return 0;
> > +}
> > +
> > +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
> > +{
> > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > +
> > +	for (; n; n = rb_next(n)) {
> > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > +
> > +		kfree(dma->bitmap);
> > +		dma->bitmap = NULL;
> > +	}
> > +}
> > +
> >  /*
> >   * Helper Functions for host iova-pfn list
> >   */
> > @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >  			vfio_unpin_page_external(dma, iova, do_accounting);
> >  			goto pin_unwind;
> >  		}
> > +
> > +		if (iommu->dirty_page_tracking) {
> > +			unsigned long pgshift =
> > +					 __ffs(vfio_pgsize_bitmap(iommu));
> > +
> > +			bitmap_set(dma->bitmap,
> > +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> > +		}
> >  	}
> >  
> >  	ret = i;
> > @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> >  	vfio_unmap_unpin(iommu, dma, true);
> >  	vfio_unlink_dma(iommu, dma);
> >  	put_task_struct(dma->task);
> > +	kfree(dma->bitmap);
> >  	kfree(dma);
> >  	iommu->dma_avail++;
> >  }
> > @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> >  	return bitmap;
> >  }
> >  
> > +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> > +				  size_t size, uint64_t pgsize,
> > +				  unsigned char __user *bitmap)
> > +{
> > +	struct vfio_dma *dma;
> > +	unsigned long pgshift = __ffs(pgsize);
> > +	unsigned int npages, bitmap_size;
> > +
> > +	dma = vfio_find_dma(iommu, iova, 1);
> > +
> > +	if (!dma)
> > +		return -EINVAL;
> > +
> > +	if (dma->iova != iova || dma->size != size)
> > +		return -EINVAL;
> > +  
> looks this size is passed from user. how can it ensure size always
> equals to dma->size ?
> 
> shouldn't we iterate dma tree to look for dirty for whole range if a
> single dma cannot meet them all?

Please see the discussion on v12[1], the problem is with the alignment
of DMA mapped regions versus the bitmap.  A DMA mapping only requires
page alignment, so for example imagine a user requests the bitmap from
page zero to 4GB, but we have a DMA mapping starting at 4KB.  We can't
efficiently copy the bitmap tracked by the vfio_dma structure to the
user buffer when it's shifted by 1 bit.  Adjacent mappings can also
make for a very complicated implementation.  In the discussion linked
we decided to compromise on a more simple implementation that requires
the user to ask for a bitmap which exactly matches a single DMA
mapping, which Kirti indicates is what we require to support QEMU.
Later in the series, the unmap operation also makes this requirement
when used with the flags to retrieve the dirty bitmap.  Thanks,

Alex

[1] https://lore.kernel.org/kvm/20200218215330.5bc8fc6a@w520.home/
 
> > +	npages = dma->size >> pgshift;
> > +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
> > +
> > +	/* mark all pages dirty if all pages are pinned and mapped. */
> > +	if (dma->iommu_mapped)
> > +		bitmap_set(dma->bitmap, 0, npages);
> > +
> > +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> > +		return -EFAULT;
> > +
> > +	return 0;
> > +}
> > +
> > +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> > +{
> > +	uint64_t bsize;
> > +
> > +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
> > +		return -EINVAL;
> > +
> > +	bsize = DIRTY_BITMAP_BYTES(npages);
> > +
> > +	if (bitmap_size < bsize)
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> > +
> >  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >  			     struct vfio_iommu_type1_dma_unmap *unmap)
> >  {
> > @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> >  
> >  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> >  			-EFAULT : 0;
> > +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> > +		struct vfio_iommu_type1_dirty_bitmap dirty;
> > +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> > +		int ret = 0;
> > +
> > +		if (!iommu->v2)
> > +			return -EACCES;
> > +
> > +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> > +				    flags);
> > +
> > +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> > +			return -EFAULT;
> > +
> > +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> > +			return -EINVAL;
> > +
> > +		/* only one flag should be set at a time */
> > +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> > +			return -EINVAL;
> > +
> > +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> > +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > +
> > +			mutex_lock(&iommu->lock);
> > +			if (!iommu->dirty_page_tracking) {
> > +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
> > +				if (!ret)
> > +					iommu->dirty_page_tracking = true;
> > +			}
> > +			mutex_unlock(&iommu->lock);
> > +			return ret;
> > +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> > +			mutex_lock(&iommu->lock);
> > +			if (iommu->dirty_page_tracking) {
> > +				iommu->dirty_page_tracking = false;
> > +				vfio_dma_bitmap_free(iommu);
> > +			}
> > +			mutex_unlock(&iommu->lock);
> > +			return 0;
> > +		} else if (dirty.flags &
> > +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> > +			struct vfio_iommu_type1_dirty_bitmap_get range;
> > +			unsigned long pgshift;
> > +			size_t data_size = dirty.argsz - minsz;
> > +			uint64_t iommu_pgsize =
> > +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > +
> > +			if (!data_size || data_size < sizeof(range))
> > +				return -EINVAL;
> > +
> > +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> > +					   sizeof(range)))
> > +				return -EFAULT;
> > +
> > +			/* allow only min supported pgsize */
> > +			if (range.bitmap.pgsize != iommu_pgsize)
> > +				return -EINVAL;
> > +			if (range.iova & (iommu_pgsize - 1))
> > +				return -EINVAL;
> > +			if (!range.size || range.size & (iommu_pgsize - 1))
> > +				return -EINVAL;
> > +			if (range.iova + range.size < range.iova)
> > +				return -EINVAL;
> > +			if (!access_ok((void __user *)range.bitmap.data,
> > +				       range.bitmap.size))
> > +				return -EINVAL;
> > +
> > +			pgshift = __ffs(range.bitmap.pgsize);
> > +			ret = verify_bitmap_size(range.size >> pgshift,
> > +						 range.bitmap.size);
> > +			if (ret)
> > +				return ret;
> > +
> > +			mutex_lock(&iommu->lock);
> > +			if (iommu->dirty_page_tracking)
> > +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> > +					 range.size, range.bitmap.pgsize,
> > +				    (unsigned char __user *)range.bitmap.data);
> > +			else
> > +				ret = -EINVAL;
> > +			mutex_unlock(&iommu->lock);
> > +
> > +			return ret;
> > +		}
> >  	}
> >  
> >  	return -ENOTTY;
> > @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
> >  
> >  	vaddr = dma->vaddr + offset;
> >  
> > -	if (write)
> > +	if (write) {
> >  		*copied = __copy_to_user((void __user *)vaddr, data,
> >  					 count) ? 0 : count;
> > -	else
> > +		if (*copied && iommu->dirty_page_tracking) {
> > +			unsigned long pgshift =
> > +				__ffs(vfio_pgsize_bitmap(iommu));
> > +
> > +			bitmap_set(dma->bitmap, offset >> pgshift,
> > +				   *copied >> pgshift);
> > +		}
> > +	} else
> >  		*copied = __copy_from_user(data, (void __user *)vaddr,
> >  					   count) ? 0 : count;
> >  	if (kthread)
> > -- 
> > 2.7.0
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19  4:01     ` Alex Williamson
@ 2020-03-19  4:15       ` Yan Zhao
  2020-03-19  4:40         ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Yan Zhao @ 2020-03-19  4:15 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, Mar 19, 2020 at 12:01:00PM +0800, Alex Williamson wrote:
> On Wed, 18 Mar 2020 23:06:39 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> > On Thu, Mar 19, 2020 at 03:41:11AM +0800, Kirti Wankhede wrote:
> > > VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> > > - Start dirty pages tracking while migration is active
> > > - Stop dirty pages tracking.
> > > - Get dirty pages bitmap. Its user space application's responsibility to
> > >   copy content of dirty pages from source to destination during migration.
> > > 
> > > To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> > > structure. Bitmap size is calculated considering smallest supported page
> > > size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
> > > 
> > > Bitmap is populated for already pinned pages when bitmap is allocated for
> > > a vfio_dma with the smallest supported page size. Update bitmap from
> > > pinning functions when tracking is enabled. When user application queries
> > > bitmap, check if requested page size is same as page size used to
> > > populated bitmap. If it is equal, copy bitmap, but if not equal, return
> > > error.
> > > 
> > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > ---
> > >  drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
> > >  1 file changed, 203 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > index 70aeab921d0f..d6417fb02174 100644
> > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > @@ -71,6 +71,7 @@ struct vfio_iommu {
> > >  	unsigned int		dma_avail;
> > >  	bool			v2;
> > >  	bool			nesting;
> > > +	bool			dirty_page_tracking;
> > >  };
> > >  
> > >  struct vfio_domain {
> > > @@ -91,6 +92,7 @@ struct vfio_dma {
> > >  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
> > >  	struct task_struct	*task;
> > >  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> > > +	unsigned long		*bitmap;
> > >  };
> > >  
> > >  struct vfio_group {
> > > @@ -125,7 +127,10 @@ struct vfio_regions {
> > >  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
> > >  					(!list_empty(&iommu->domain_list))
> > >  
> > > +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> > > +
> > >  static int put_pfn(unsigned long pfn, int prot);
> > > +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> > >  
> > >  /*
> > >   * This code handles mapping and unmapping of user data buffers
> > > @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> > >  	rb_erase(&old->node, &iommu->dma_list);
> > >  }
> > >  
> > > +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
> > > +{
> > > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > > +
> > > +	for (; n; n = rb_next(n)) {
> > > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > > +		struct rb_node *p;
> > > +		unsigned long npages = dma->size / pgsize;
> > > +
> > > +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> > > +		if (!dma->bitmap) {
> > > +			struct rb_node *p = rb_prev(n);
> > > +
> > > +			for (; p; p = rb_prev(p)) {
> > > +				struct vfio_dma *dma = rb_entry(n,
> > > +							struct vfio_dma, node);
> > > +
> > > +				kfree(dma->bitmap);
> > > +				dma->bitmap = NULL;
> > > +			}
> > > +			return -ENOMEM;
> > > +		}
> > > +
> > > +		if (RB_EMPTY_ROOT(&dma->pfn_list))
> > > +			continue;
> > > +
> > > +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> > > +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
> > > +							 node);
> > > +
> > > +			bitmap_set(dma->bitmap,
> > > +					(vpfn->iova - dma->iova) / pgsize, 1);
> > > +		}
> > > +	}
> > > +	return 0;
> > > +}
> > > +
> > > +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
> > > +{
> > > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > > +
> > > +	for (; n; n = rb_next(n)) {
> > > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > > +
> > > +		kfree(dma->bitmap);
> > > +		dma->bitmap = NULL;
> > > +	}
> > > +}
> > > +
> > >  /*
> > >   * Helper Functions for host iova-pfn list
> > >   */
> > > @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> > >  			vfio_unpin_page_external(dma, iova, do_accounting);
> > >  			goto pin_unwind;
> > >  		}
> > > +
> > > +		if (iommu->dirty_page_tracking) {
> > > +			unsigned long pgshift =
> > > +					 __ffs(vfio_pgsize_bitmap(iommu));
> > > +
> > > +			bitmap_set(dma->bitmap,
> > > +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> > > +		}
> > >  	}
> > >  
> > >  	ret = i;
> > > @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> > >  	vfio_unmap_unpin(iommu, dma, true);
> > >  	vfio_unlink_dma(iommu, dma);
> > >  	put_task_struct(dma->task);
> > > +	kfree(dma->bitmap);
> > >  	kfree(dma);
> > >  	iommu->dma_avail++;
> > >  }
> > > @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> > >  	return bitmap;
> > >  }
> > >  
> > > +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> > > +				  size_t size, uint64_t pgsize,
> > > +				  unsigned char __user *bitmap)
> > > +{
> > > +	struct vfio_dma *dma;
> > > +	unsigned long pgshift = __ffs(pgsize);
> > > +	unsigned int npages, bitmap_size;
> > > +
> > > +	dma = vfio_find_dma(iommu, iova, 1);
> > > +
> > > +	if (!dma)
> > > +		return -EINVAL;
> > > +
> > > +	if (dma->iova != iova || dma->size != size)
> > > +		return -EINVAL;
> > > +  
> > looks this size is passed from user. how can it ensure size always
> > equals to dma->size ?
> > 
> > shouldn't we iterate dma tree to look for dirty for whole range if a
> > single dma cannot meet them all?
> 
> Please see the discussion on v12[1], the problem is with the alignment
> of DMA mapped regions versus the bitmap.  A DMA mapping only requires
> page alignment, so for example imagine a user requests the bitmap from
> page zero to 4GB, but we have a DMA mapping starting at 4KB.  We can't
> efficiently copy the bitmap tracked by the vfio_dma structure to the
> user buffer when it's shifted by 1 bit.  Adjacent mappings can also
> make for a very complicated implementation.  In the discussion linked
> we decided to compromise on a more simple implementation that requires
> the user to ask for a bitmap which exactly matches a single DMA
> mapping, which Kirti indicates is what we require to support QEMU.
> Later in the series, the unmap operation also makes this requirement
> when used with the flags to retrieve the dirty bitmap.  Thanks,
>

so, what about for vIOMMU enabling case?
if IOVAs are mapped per page, then there's a log_sync in qemu,
it's supposed for range from 0-U64MAX, qemu has to find out which
ones are mapped and cut them into pages before calling this IOCTL?
And what if those IOVAs are mapped for len more than one page?

> 
> [1] https://lore.kernel.org/kvm/20200218215330.5bc8fc6a@w520.home/
>  
> > > +	npages = dma->size >> pgshift;
> > > +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
> > > +
> > > +	/* mark all pages dirty if all pages are pinned and mapped. */
> > > +	if (dma->iommu_mapped)
> > > +		bitmap_set(dma->bitmap, 0, npages);
> > > +
> > > +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> > > +		return -EFAULT;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> > > +{
> > > +	uint64_t bsize;
> > > +
> > > +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
> > > +		return -EINVAL;
> > > +
> > > +	bsize = DIRTY_BITMAP_BYTES(npages);
> > > +
> > > +	if (bitmap_size < bsize)
> > > +		return -EINVAL;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > >  			     struct vfio_iommu_type1_dma_unmap *unmap)
> > >  {
> > > @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> > >  
> > >  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> > >  			-EFAULT : 0;
> > > +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> > > +		struct vfio_iommu_type1_dirty_bitmap dirty;
> > > +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> > > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> > > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> > > +		int ret = 0;
> > > +
> > > +		if (!iommu->v2)
> > > +			return -EACCES;
> > > +
> > > +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> > > +				    flags);
> > > +
> > > +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> > > +			return -EFAULT;
> > > +
> > > +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> > > +			return -EINVAL;
> > > +
> > > +		/* only one flag should be set at a time */
> > > +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> > > +			return -EINVAL;
> > > +
> > > +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> > > +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > > +
> > > +			mutex_lock(&iommu->lock);
> > > +			if (!iommu->dirty_page_tracking) {
> > > +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
> > > +				if (!ret)
> > > +					iommu->dirty_page_tracking = true;
> > > +			}
> > > +			mutex_unlock(&iommu->lock);
> > > +			return ret;
> > > +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> > > +			mutex_lock(&iommu->lock);
> > > +			if (iommu->dirty_page_tracking) {
> > > +				iommu->dirty_page_tracking = false;
> > > +				vfio_dma_bitmap_free(iommu);
> > > +			}
> > > +			mutex_unlock(&iommu->lock);
> > > +			return 0;
> > > +		} else if (dirty.flags &
> > > +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> > > +			struct vfio_iommu_type1_dirty_bitmap_get range;
> > > +			unsigned long pgshift;
> > > +			size_t data_size = dirty.argsz - minsz;
> > > +			uint64_t iommu_pgsize =
> > > +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > > +
> > > +			if (!data_size || data_size < sizeof(range))
> > > +				return -EINVAL;
> > > +
> > > +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> > > +					   sizeof(range)))
> > > +				return -EFAULT;
> > > +
> > > +			/* allow only min supported pgsize */
> > > +			if (range.bitmap.pgsize != iommu_pgsize)
> > > +				return -EINVAL;
> > > +			if (range.iova & (iommu_pgsize - 1))
> > > +				return -EINVAL;
> > > +			if (!range.size || range.size & (iommu_pgsize - 1))
> > > +				return -EINVAL;
> > > +			if (range.iova + range.size < range.iova)
> > > +				return -EINVAL;
> > > +			if (!access_ok((void __user *)range.bitmap.data,
> > > +				       range.bitmap.size))
> > > +				return -EINVAL;
> > > +
> > > +			pgshift = __ffs(range.bitmap.pgsize);
> > > +			ret = verify_bitmap_size(range.size >> pgshift,
> > > +						 range.bitmap.size);
> > > +			if (ret)
> > > +				return ret;
> > > +
> > > +			mutex_lock(&iommu->lock);
> > > +			if (iommu->dirty_page_tracking)
> > > +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> > > +					 range.size, range.bitmap.pgsize,
> > > +				    (unsigned char __user *)range.bitmap.data);
> > > +			else
> > > +				ret = -EINVAL;
> > > +			mutex_unlock(&iommu->lock);
> > > +
> > > +			return ret;
> > > +		}
> > >  	}
> > >  
> > >  	return -ENOTTY;
> > > @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
> > >  
> > >  	vaddr = dma->vaddr + offset;
> > >  
> > > -	if (write)
> > > +	if (write) {
> > >  		*copied = __copy_to_user((void __user *)vaddr, data,
> > >  					 count) ? 0 : count;
> > > -	else
> > > +		if (*copied && iommu->dirty_page_tracking) {
> > > +			unsigned long pgshift =
> > > +				__ffs(vfio_pgsize_bitmap(iommu));
> > > +
> > > +			bitmap_set(dma->bitmap, offset >> pgshift,
> > > +				   *copied >> pgshift);
> > > +		}
> > > +	} else
> > >  		*copied = __copy_from_user(data, (void __user *)vaddr,
> > >  					   count) ? 0 : count;
> > >  	if (kthread)
> > > -- 
> > > 2.7.0
> > >   
> > 
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19  4:15       ` Yan Zhao
@ 2020-03-19  4:40         ` Alex Williamson
  2020-03-19  6:15           ` Yan Zhao
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-19  4:40 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, 19 Mar 2020 00:15:33 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Thu, Mar 19, 2020 at 12:01:00PM +0800, Alex Williamson wrote:
> > On Wed, 18 Mar 2020 23:06:39 -0400
> > Yan Zhao <yan.y.zhao@intel.com> wrote:
> >   
> > > On Thu, Mar 19, 2020 at 03:41:11AM +0800, Kirti Wankhede wrote:  
> > > > VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> > > > - Start dirty pages tracking while migration is active
> > > > - Stop dirty pages tracking.
> > > > - Get dirty pages bitmap. Its user space application's responsibility to
> > > >   copy content of dirty pages from source to destination during migration.
> > > > 
> > > > To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> > > > structure. Bitmap size is calculated considering smallest supported page
> > > > size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
> > > > 
> > > > Bitmap is populated for already pinned pages when bitmap is allocated for
> > > > a vfio_dma with the smallest supported page size. Update bitmap from
> > > > pinning functions when tracking is enabled. When user application queries
> > > > bitmap, check if requested page size is same as page size used to
> > > > populated bitmap. If it is equal, copy bitmap, but if not equal, return
> > > > error.
> > > > 
> > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > ---
> > > >  drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
> > > >  1 file changed, 203 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > > index 70aeab921d0f..d6417fb02174 100644
> > > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > > @@ -71,6 +71,7 @@ struct vfio_iommu {
> > > >  	unsigned int		dma_avail;
> > > >  	bool			v2;
> > > >  	bool			nesting;
> > > > +	bool			dirty_page_tracking;
> > > >  };
> > > >  
> > > >  struct vfio_domain {
> > > > @@ -91,6 +92,7 @@ struct vfio_dma {
> > > >  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
> > > >  	struct task_struct	*task;
> > > >  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> > > > +	unsigned long		*bitmap;
> > > >  };
> > > >  
> > > >  struct vfio_group {
> > > > @@ -125,7 +127,10 @@ struct vfio_regions {
> > > >  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
> > > >  					(!list_empty(&iommu->domain_list))
> > > >  
> > > > +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> > > > +
> > > >  static int put_pfn(unsigned long pfn, int prot);
> > > > +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> > > >  
> > > >  /*
> > > >   * This code handles mapping and unmapping of user data buffers
> > > > @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> > > >  	rb_erase(&old->node, &iommu->dma_list);
> > > >  }
> > > >  
> > > > +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
> > > > +{
> > > > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > > > +
> > > > +	for (; n; n = rb_next(n)) {
> > > > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > > > +		struct rb_node *p;
> > > > +		unsigned long npages = dma->size / pgsize;
> > > > +
> > > > +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> > > > +		if (!dma->bitmap) {
> > > > +			struct rb_node *p = rb_prev(n);
> > > > +
> > > > +			for (; p; p = rb_prev(p)) {
> > > > +				struct vfio_dma *dma = rb_entry(n,
> > > > +							struct vfio_dma, node);
> > > > +
> > > > +				kfree(dma->bitmap);
> > > > +				dma->bitmap = NULL;
> > > > +			}
> > > > +			return -ENOMEM;
> > > > +		}
> > > > +
> > > > +		if (RB_EMPTY_ROOT(&dma->pfn_list))
> > > > +			continue;
> > > > +
> > > > +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> > > > +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
> > > > +							 node);
> > > > +
> > > > +			bitmap_set(dma->bitmap,
> > > > +					(vpfn->iova - dma->iova) / pgsize, 1);
> > > > +		}
> > > > +	}
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
> > > > +{
> > > > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > > > +
> > > > +	for (; n; n = rb_next(n)) {
> > > > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > > > +
> > > > +		kfree(dma->bitmap);
> > > > +		dma->bitmap = NULL;
> > > > +	}
> > > > +}
> > > > +
> > > >  /*
> > > >   * Helper Functions for host iova-pfn list
> > > >   */
> > > > @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> > > >  			vfio_unpin_page_external(dma, iova, do_accounting);
> > > >  			goto pin_unwind;
> > > >  		}
> > > > +
> > > > +		if (iommu->dirty_page_tracking) {
> > > > +			unsigned long pgshift =
> > > > +					 __ffs(vfio_pgsize_bitmap(iommu));
> > > > +
> > > > +			bitmap_set(dma->bitmap,
> > > > +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> > > > +		}
> > > >  	}
> > > >  
> > > >  	ret = i;
> > > > @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> > > >  	vfio_unmap_unpin(iommu, dma, true);
> > > >  	vfio_unlink_dma(iommu, dma);
> > > >  	put_task_struct(dma->task);
> > > > +	kfree(dma->bitmap);
> > > >  	kfree(dma);
> > > >  	iommu->dma_avail++;
> > > >  }
> > > > @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> > > >  	return bitmap;
> > > >  }
> > > >  
> > > > +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> > > > +				  size_t size, uint64_t pgsize,
> > > > +				  unsigned char __user *bitmap)
> > > > +{
> > > > +	struct vfio_dma *dma;
> > > > +	unsigned long pgshift = __ffs(pgsize);
> > > > +	unsigned int npages, bitmap_size;
> > > > +
> > > > +	dma = vfio_find_dma(iommu, iova, 1);
> > > > +
> > > > +	if (!dma)
> > > > +		return -EINVAL;
> > > > +
> > > > +	if (dma->iova != iova || dma->size != size)
> > > > +		return -EINVAL;
> > > > +    
> > > looks this size is passed from user. how can it ensure size always
> > > equals to dma->size ?
> > > 
> > > shouldn't we iterate dma tree to look for dirty for whole range if a
> > > single dma cannot meet them all?  
> > 
> > Please see the discussion on v12[1], the problem is with the alignment
> > of DMA mapped regions versus the bitmap.  A DMA mapping only requires
> > page alignment, so for example imagine a user requests the bitmap from
> > page zero to 4GB, but we have a DMA mapping starting at 4KB.  We can't
> > efficiently copy the bitmap tracked by the vfio_dma structure to the
> > user buffer when it's shifted by 1 bit.  Adjacent mappings can also
> > make for a very complicated implementation.  In the discussion linked
> > we decided to compromise on a more simple implementation that requires
> > the user to ask for a bitmap which exactly matches a single DMA
> > mapping, which Kirti indicates is what we require to support QEMU.
> > Later in the series, the unmap operation also makes this requirement
> > when used with the flags to retrieve the dirty bitmap.  Thanks,
> >  
> 
> so, what about for vIOMMU enabling case?
> if IOVAs are mapped per page, then there's a log_sync in qemu,
> it's supposed for range from 0-U64MAX, qemu has to find out which
> ones are mapped and cut them into pages before calling this IOCTL?
> And what if those IOVAs are mapped for len more than one page?

Good question.  Kirti?

> > [1] https://lore.kernel.org/kvm/20200218215330.5bc8fc6a@w520.home/
> >    
> > > > +	npages = dma->size >> pgshift;
> > > > +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
> > > > +
> > > > +	/* mark all pages dirty if all pages are pinned and mapped. */
> > > > +	if (dma->iommu_mapped)
> > > > +		bitmap_set(dma->bitmap, 0, npages);
> > > > +
> > > > +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> > > > +		return -EFAULT;
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> > > > +{
> > > > +	uint64_t bsize;
> > > > +
> > > > +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
> > > > +		return -EINVAL;
> > > > +
> > > > +	bsize = DIRTY_BITMAP_BYTES(npages);
> > > > +
> > > > +	if (bitmap_size < bsize)
> > > > +		return -EINVAL;
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > >  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > > >  			     struct vfio_iommu_type1_dma_unmap *unmap)
> > > >  {
> > > > @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> > > >  
> > > >  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> > > >  			-EFAULT : 0;
> > > > +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> > > > +		struct vfio_iommu_type1_dirty_bitmap dirty;
> > > > +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> > > > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> > > > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> > > > +		int ret = 0;
> > > > +
> > > > +		if (!iommu->v2)
> > > > +			return -EACCES;
> > > > +
> > > > +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> > > > +				    flags);
> > > > +
> > > > +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> > > > +			return -EFAULT;
> > > > +
> > > > +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> > > > +			return -EINVAL;
> > > > +
> > > > +		/* only one flag should be set at a time */
> > > > +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> > > > +			return -EINVAL;
> > > > +
> > > > +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> > > > +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > > > +
> > > > +			mutex_lock(&iommu->lock);
> > > > +			if (!iommu->dirty_page_tracking) {
> > > > +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
> > > > +				if (!ret)
> > > > +					iommu->dirty_page_tracking = true;
> > > > +			}
> > > > +			mutex_unlock(&iommu->lock);
> > > > +			return ret;
> > > > +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> > > > +			mutex_lock(&iommu->lock);
> > > > +			if (iommu->dirty_page_tracking) {
> > > > +				iommu->dirty_page_tracking = false;
> > > > +				vfio_dma_bitmap_free(iommu);
> > > > +			}
> > > > +			mutex_unlock(&iommu->lock);
> > > > +			return 0;
> > > > +		} else if (dirty.flags &
> > > > +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> > > > +			struct vfio_iommu_type1_dirty_bitmap_get range;
> > > > +			unsigned long pgshift;
> > > > +			size_t data_size = dirty.argsz - minsz;
> > > > +			uint64_t iommu_pgsize =
> > > > +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > > > +
> > > > +			if (!data_size || data_size < sizeof(range))
> > > > +				return -EINVAL;
> > > > +
> > > > +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> > > > +					   sizeof(range)))
> > > > +				return -EFAULT;
> > > > +
> > > > +			/* allow only min supported pgsize */
> > > > +			if (range.bitmap.pgsize != iommu_pgsize)
> > > > +				return -EINVAL;
> > > > +			if (range.iova & (iommu_pgsize - 1))
> > > > +				return -EINVAL;
> > > > +			if (!range.size || range.size & (iommu_pgsize - 1))
> > > > +				return -EINVAL;
> > > > +			if (range.iova + range.size < range.iova)
> > > > +				return -EINVAL;
> > > > +			if (!access_ok((void __user *)range.bitmap.data,
> > > > +				       range.bitmap.size))
> > > > +				return -EINVAL;
> > > > +
> > > > +			pgshift = __ffs(range.bitmap.pgsize);
> > > > +			ret = verify_bitmap_size(range.size >> pgshift,
> > > > +						 range.bitmap.size);
> > > > +			if (ret)
> > > > +				return ret;
> > > > +
> > > > +			mutex_lock(&iommu->lock);
> > > > +			if (iommu->dirty_page_tracking)
> > > > +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> > > > +					 range.size, range.bitmap.pgsize,
> > > > +				    (unsigned char __user *)range.bitmap.data);
> > > > +			else
> > > > +				ret = -EINVAL;
> > > > +			mutex_unlock(&iommu->lock);
> > > > +
> > > > +			return ret;
> > > > +		}
> > > >  	}
> > > >  
> > > >  	return -ENOTTY;
> > > > @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
> > > >  
> > > >  	vaddr = dma->vaddr + offset;
> > > >  
> > > > -	if (write)
> > > > +	if (write) {
> > > >  		*copied = __copy_to_user((void __user *)vaddr, data,
> > > >  					 count) ? 0 : count;
> > > > -	else
> > > > +		if (*copied && iommu->dirty_page_tracking) {
> > > > +			unsigned long pgshift =
> > > > +				__ffs(vfio_pgsize_bitmap(iommu));
> > > > +
> > > > +			bitmap_set(dma->bitmap, offset >> pgshift,
> > > > +				   *copied >> pgshift);
> > > > +		}
> > > > +	} else
> > > >  		*copied = __copy_from_user(data, (void __user *)vaddr,
> > > >  					   count) ? 0 : count;
> > > >  	if (kthread)
> > > > -- 
> > > > 2.7.0
> > > >     
> > >   
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-19  3:49     ` Alex Williamson
@ 2020-03-19  5:05       ` Yan Zhao
  2020-03-19 13:09         ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Yan Zhao @ 2020-03-19  5:05 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, Mar 19, 2020 at 11:49:26AM +0800, Alex Williamson wrote:
> On Wed, 18 Mar 2020 21:17:03 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> > On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:
> > > - Defined MIGRATION region type and sub-type.
> > > 
> > > - Defined vfio_device_migration_info structure which will be placed at the
> > >   0th offset of migration region to get/set VFIO device related
> > >   information. Defined members of structure and usage on read/write access.
> > > 
> > > - Defined device states and state transition details.
> > > 
> > > - Defined sequence to be followed while saving and resuming VFIO device.
> > > 
> > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > ---
> > >  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 227 insertions(+)
> > > 
> > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > index 9e843a147ead..d0021467af53 100644
> > > --- a/include/uapi/linux/vfio.h
> > > +++ b/include/uapi/linux/vfio.h
> > > @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
> > >  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
> > >  #define VFIO_REGION_TYPE_GFX                    (1)
> > >  #define VFIO_REGION_TYPE_CCW			(2)
> > > +#define VFIO_REGION_TYPE_MIGRATION              (3)
> > >  
> > >  /* sub-types for VFIO_REGION_TYPE_PCI_* */
> > >  
> > > @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
> > >  /* sub-types for VFIO_REGION_TYPE_CCW */
> > >  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
> > >  
> > > +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> > > +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> > > +
> > > +/*
> > > + * The structure vfio_device_migration_info is placed at the 0th offset of
> > > + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> > > + * migration information. Field accesses from this structure are only supported
> > > + * at their native width and alignment. Otherwise, the result is undefined and
> > > + * vendor drivers should return an error.
> > > + *
> > > + * device_state: (read/write)
> > > + *      - The user application writes to this field to inform the vendor driver
> > > + *        about the device state to be transitioned to.
> > > + *      - The vendor driver should take the necessary actions to change the
> > > + *        device state. After successful transition to a given state, the
> > > + *        vendor driver should return success on write(device_state, state)
> > > + *        system call. If the device state transition fails, the vendor driver
> > > + *        should return an appropriate -errno for the fault condition.
> > > + *      - On the user application side, if the device state transition fails,
> > > + *	  that is, if write(device_state, state) returns an error, read
> > > + *	  device_state again to determine the current state of the device from
> > > + *	  the vendor driver.
> > > + *      - The vendor driver should return previous state of the device unless
> > > + *        the vendor driver has encountered an internal error, in which case
> > > + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> > > + *      - The user application must use the device reset ioctl to recover the
> > > + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> > > + *        indicated to be in a valid device state by reading device_state, the
> > > + *        user application may attempt to transition the device to any valid
> > > + *        state reachable from the current state or terminate itself.
> > > + *
> > > + *      device_state consists of 3 bits:
> > > + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> > > + *        it indicates the _STOP state. When the device state is changed to
> > > + *        _STOP, driver should stop the device before write() returns.
> > > + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> > > + *        driver should start gathering device state information that will be
> > > + *        provided to the VFIO user application to save the device's state.
> > > + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> > > + *        the driver should prepare to resume the device. Data provided through
> > > + *        the migration region should be used to resume the device.
> > > + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> > > + *      application should perform a read-modify-write operation on this
> > > + *      field when modifying the specified bits.
> > > + *
> > > + *  +------- _RESUMING
> > > + *  |+------ _SAVING
> > > + *  ||+----- _RUNNING
> > > + *  |||
> > > + *  000b => Device Stopped, not saving or resuming
> > > + *  001b => Device running, which is the default state
> > > + *  010b => Stop the device & save the device state, stop-and-copy state
> > > + *  011b => Device running and save the device state, pre-copy state
> > > + *  100b => Device stopped and the device state is resuming
> > > + *  101b => Invalid state
> > > + *  110b => Error state
> > > + *  111b => Invalid state
> > > + *
> > > + * State transitions:
> > > + *
> > > + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> > > + *                (100b)     (001b)     (011b)        (010b)       (000b)
> > > + * 0. Running or default state
> > > + *                             |
> > > + *
> > > + * 1. Normal Shutdown (optional)
> > > + *                             |------------------------------------->|
> > > + *
> > > + * 2. Save the state or suspend
> > > + *                             |------------------------->|---------->|
> > > + *
> > > + * 3. Save the state during live migration
> > > + *                             |----------->|------------>|---------->|
> > > + *
> > > + * 4. Resuming
> > > + *                  |<---------|
> > > + *
> > > + * 5. Resumed
> > > + *                  |--------->|
> > > + *
> > > + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> > > + * 1. During normal shutdown of the user application, the user application may
> > > + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> > > + *    transition is optional. The vendor driver must support this transition but
> > > + *    must not require it.
> > > + * 2. When the user application saves state or suspends the application, the
> > > + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> > > + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> > > + *    device, save the device state and send it to the application through the
> > > + *    migration region. The sequence to be followed for such transition is given
> > > + *    below.
> > > + * 3. In live migration of user application, the state transitions from _RUNNING
> > > + *    to pre-copy, to stop-and-copy, and to _STOP.
> > > + *    On state transition from _RUNNING to pre-copy, the driver should start
> > > + *    gathering the device state while the application is still running and send
> > > + *    the device state data to application through the migration region.
> > > + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> > > + *    the device, save the device state and send it to the user application
> > > + *    through the migration region.
> > > + *    Vendor drivers must support the pre-copy state even for implementations
> > > + *    where no data is provided to the user before the stop-and-copy state. The
> > > + *    user must not be required to consume all migration data before the device
> > > + *    transitions to a new state, including the stop-and-copy state.
> > > + *    The sequence to be followed for above two transitions is given below.
> > > + * 4. To start the resuming phase, the device state should be transitioned from
> > > + *    the _RUNNING to the _RESUMING state.
> > > + *    In the _RESUMING state, the driver should use the device state data
> > > + *    received through the migration region to resume the device.
> > > + * 5. After providing saved device data to the driver, the application should
> > > + *    change the state from _RESUMING to _RUNNING.
> > > + *
> > > + * reserved:
> > > + *      Reads on this field return zero and writes are ignored.
> > > + *
> > > + * pending_bytes: (read only)
> > > + *      The number of pending bytes still to be migrated from the vendor driver.
> > > + *
> > > + * data_offset: (read only)
> > > + *      The user application should read data_offset in the migration region
> > > + *      from where the user application should read the device data during the
> > > + *      _SAVING state or write the device data during the _RESUMING state. See
> > > + *      below for details of sequence to be followed.
> > > + *
> > > + * data_size: (read/write)
> > > + *      The user application should read data_size to get the size in bytes of
> > > + *      the data copied in the migration region during the _SAVING state and
> > > + *      write the size in bytes of the data copied in the migration region
> > > + *      during the _RESUMING state.
> > > + *
> > > + * The format of the migration region is as follows:
> > > + *  ------------------------------------------------------------------
> > > + * |vfio_device_migration_info|    data section                      |
> > > + * |                          |     ///////////////////////////////  |
> > > + * ------------------------------------------------------------------
> > > + *   ^                              ^
> > > + *  offset 0-trapped part        data_offset
> > > + *
> > > + * The structure vfio_device_migration_info is always followed by the data
> > > + * section in the region, so data_offset will always be nonzero. The offset
> > > + * from where the data is copied is decided by the kernel driver. The data
> > > + * section can be trapped, mapped, or partitioned, depending on how the kernel
> > > + * driver defines the data section. The data section partition can be defined
> > > + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> > > + * page aligned, whereas initial section which contains the
> > > + * vfio_device_migration_info structure, might not end at the offset, which is
> > > + * page aligned. The user is not required to access through mmap regardless
> > > + * of the capabilities of the region mmap.
> > > + * The vendor driver should determine whether and how to partition the data
> > > + * section. The vendor driver should return data_offset accordingly.
> > > + *
> > > + * The sequence to be followed for the _SAVING|_RUNNING device state or
> > > + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
> > > + * follows:
> > > + * a. Read pending_bytes, indicating the start of a new iteration to get device
> > > + *    data. Repeated read on pending_bytes at this stage should have no side
> > > + *    effects.
> > > + *    If pending_bytes == 0, the user application should not iterate to get data
> > > + *    for that device.
> > > + *    If pending_bytes > 0, perform the following steps.
> > > + * b. Read data_offset, indicating that the vendor driver should make data
> > > + *    available through the data section. The vendor driver should return this
> > > + *    read operation only after data is available from (region + data_offset)
> > > + *    to (region + data_offset + data_size).
> > > + * c. Read data_size, which is the amount of data in bytes available through
> > > + *    the migration region.
> > > + *    Read on data_offset and data_size should return the offset and size of
> > > + *    the current buffer if the user application reads data_offset and
> > > + *    data_size more than once here.  
> > If data region is mmaped, merely reading data_offset and data_size
> > cannot let kernel know what are correct values to return.
> > Consider to add a read operation which is trapped into kernel to let
> > kernel exactly know it needs to move to the next offset and update data_size
> > ?
> 
> Both operations b. and c. above are to trapped registers, operation d.
> below may potentially be to an mmap'd area, which is why we have step
> f. which indicates to the vendor driver that the data has been
> consumed.  Does that address your concern?  Thanks,
>
No. :)
the problem is about semantics of data_offset, data_size, and
pending_bytes.
b and c do not tell kernel that the data is read by user.
so, without knowing step d happen, kernel cannot update pending_bytes to
be returned in step f.

> 
> > > + * d. Read data_size bytes of data from (region + data_offset) from the
> > > + *    migration region.
> > > + * e. Process the data.
> > > + * f. Read pending_bytes, which indicates that the data from the previous
> > > + *    iteration has been read. If pending_bytes > 0, go to step b.
> > > + *
> > > + * If an error occurs during the above sequence, the vendor driver can return
> > > + * an error code for next read() or write() operation, which will terminate the
> > > + * loop. The user application should then take the next necessary action, for
> > > + * example, failing migration or terminating the user application.
> > > + *
> > > + * The user application can transition from the _SAVING|_RUNNING
> > > + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> > > + * number of pending bytes. The user application should iterate in _SAVING
> > > + * (stop-and-copy) until pending_bytes is 0.
> > > + *
> > > + * The sequence to be followed while _RESUMING device state is as follows:
> > > + * While data for this device is available, repeat the following steps:
> > > + * a. Read data_offset from where the user application should write data.
> > > + * b. Write migration data starting at the migration region + data_offset for
> > > + *    the length determined by data_size from the migration source.
> > > + * c. Write data_size, which indicates to the vendor driver that data is
> > > + *    written in the migration region. Vendor driver should apply the
> > > + *    user-provided migration region data to the device resume state.
> > > + *
> > > + * For the user application, data is opaque. The user application should write
> > > + * data in the same order as the data is received and the data should be of
> > > + * same transaction size at the source.
> > > + */
> > > +
> > > +struct vfio_device_migration_info {
> > > +	__u32 device_state;         /* VFIO device state */
> > > +#define VFIO_DEVICE_STATE_STOP      (0)
> > > +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> > > +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> > > +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> > > +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> > > +				     VFIO_DEVICE_STATE_SAVING |  \
> > > +				     VFIO_DEVICE_STATE_RESUMING)
> > > +
> > > +#define VFIO_DEVICE_STATE_VALID(state) \
> > > +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> > > +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> > > +
> > > +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> > > +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> > > +					      VFIO_DEVICE_STATE_RESUMING))
> > > +
> > > +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> > > +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> > > +					     VFIO_DEVICE_STATE_RESUMING)
> > > +
> > > +	__u32 reserved;
> > > +	__u64 pending_bytes;
> > > +	__u64 data_offset;
> > > +	__u64 data_size;
> > > +} __attribute__((packed));
> > > +
> > >  /*
> > >   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> > >   * which allows direct access to non-MSIX registers which happened to be within
> > > -- 
> > > 2.7.0
> > >   
> > 
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19  4:40         ` Alex Williamson
@ 2020-03-19  6:15           ` Yan Zhao
  2020-03-19 13:06             ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Yan Zhao @ 2020-03-19  6:15 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, Mar 19, 2020 at 12:40:53PM +0800, Alex Williamson wrote:
> On Thu, 19 Mar 2020 00:15:33 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> > On Thu, Mar 19, 2020 at 12:01:00PM +0800, Alex Williamson wrote:
> > > On Wed, 18 Mar 2020 23:06:39 -0400
> > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > >   
> > > > On Thu, Mar 19, 2020 at 03:41:11AM +0800, Kirti Wankhede wrote:  
> > > > > VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> > > > > - Start dirty pages tracking while migration is active
> > > > > - Stop dirty pages tracking.
> > > > > - Get dirty pages bitmap. Its user space application's responsibility to
> > > > >   copy content of dirty pages from source to destination during migration.
> > > > > 
> > > > > To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> > > > > structure. Bitmap size is calculated considering smallest supported page
> > > > > size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
> > > > > 
> > > > > Bitmap is populated for already pinned pages when bitmap is allocated for
> > > > > a vfio_dma with the smallest supported page size. Update bitmap from
> > > > > pinning functions when tracking is enabled. When user application queries
> > > > > bitmap, check if requested page size is same as page size used to
> > > > > populated bitmap. If it is equal, copy bitmap, but if not equal, return
> > > > > error.
> > > > > 
> > > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > > ---
> > > > >  drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
> > > > >  1 file changed, 203 insertions(+), 2 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > > > index 70aeab921d0f..d6417fb02174 100644
> > > > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > > > @@ -71,6 +71,7 @@ struct vfio_iommu {
> > > > >  	unsigned int		dma_avail;
> > > > >  	bool			v2;
> > > > >  	bool			nesting;
> > > > > +	bool			dirty_page_tracking;
> > > > >  };
> > > > >  
> > > > >  struct vfio_domain {
> > > > > @@ -91,6 +92,7 @@ struct vfio_dma {
> > > > >  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
> > > > >  	struct task_struct	*task;
> > > > >  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> > > > > +	unsigned long		*bitmap;
> > > > >  };
> > > > >  
> > > > >  struct vfio_group {
> > > > > @@ -125,7 +127,10 @@ struct vfio_regions {
> > > > >  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
> > > > >  					(!list_empty(&iommu->domain_list))
> > > > >  
> > > > > +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> > > > > +
> > > > >  static int put_pfn(unsigned long pfn, int prot);
> > > > > +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> > > > >  
> > > > >  /*
> > > > >   * This code handles mapping and unmapping of user data buffers
> > > > > @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> > > > >  	rb_erase(&old->node, &iommu->dma_list);
> > > > >  }
> > > > >  
> > > > > +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
> > > > > +{
> > > > > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > > > > +
> > > > > +	for (; n; n = rb_next(n)) {
> > > > > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > > > > +		struct rb_node *p;
> > > > > +		unsigned long npages = dma->size / pgsize;
> > > > > +
> > > > > +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> > > > > +		if (!dma->bitmap) {
> > > > > +			struct rb_node *p = rb_prev(n);
> > > > > +
> > > > > +			for (; p; p = rb_prev(p)) {
> > > > > +				struct vfio_dma *dma = rb_entry(n,
> > > > > +							struct vfio_dma, node);
> > > > > +
> > > > > +				kfree(dma->bitmap);
> > > > > +				dma->bitmap = NULL;
> > > > > +			}
> > > > > +			return -ENOMEM;
> > > > > +		}
> > > > > +
> > > > > +		if (RB_EMPTY_ROOT(&dma->pfn_list))
> > > > > +			continue;
> > > > > +
> > > > > +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> > > > > +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
> > > > > +							 node);
> > > > > +
> > > > > +			bitmap_set(dma->bitmap,
> > > > > +					(vpfn->iova - dma->iova) / pgsize, 1);
> > > > > +		}
> > > > > +	}
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
> > > > > +{
> > > > > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > > > > +
> > > > > +	for (; n; n = rb_next(n)) {
> > > > > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > > > > +
> > > > > +		kfree(dma->bitmap);
> > > > > +		dma->bitmap = NULL;
> > > > > +	}
> > > > > +}
> > > > > +
> > > > >  /*
> > > > >   * Helper Functions for host iova-pfn list
> > > > >   */
> > > > > @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> > > > >  			vfio_unpin_page_external(dma, iova, do_accounting);
> > > > >  			goto pin_unwind;
> > > > >  		}
> > > > > +
> > > > > +		if (iommu->dirty_page_tracking) {
> > > > > +			unsigned long pgshift =
> > > > > +					 __ffs(vfio_pgsize_bitmap(iommu));
> > > > > +
> > > > > +			bitmap_set(dma->bitmap,
> > > > > +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> > > > > +		}
> > > > >  	}
> > > > >  
> > > > >  	ret = i;
> > > > > @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> > > > >  	vfio_unmap_unpin(iommu, dma, true);
> > > > >  	vfio_unlink_dma(iommu, dma);
> > > > >  	put_task_struct(dma->task);
> > > > > +	kfree(dma->bitmap);
> > > > >  	kfree(dma);
> > > > >  	iommu->dma_avail++;
> > > > >  }
> > > > > @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> > > > >  	return bitmap;
> > > > >  }
> > > > >  
> > > > > +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> > > > > +				  size_t size, uint64_t pgsize,
> > > > > +				  unsigned char __user *bitmap)
> > > > > +{
> > > > > +	struct vfio_dma *dma;
> > > > > +	unsigned long pgshift = __ffs(pgsize);
> > > > > +	unsigned int npages, bitmap_size;
> > > > > +
> > > > > +	dma = vfio_find_dma(iommu, iova, 1);
> > > > > +
> > > > > +	if (!dma)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	if (dma->iova != iova || dma->size != size)
> > > > > +		return -EINVAL;
> > > > > +    
> > > > looks this size is passed from user. how can it ensure size always
> > > > equals to dma->size ?
> > > > 
> > > > shouldn't we iterate dma tree to look for dirty for whole range if a
> > > > single dma cannot meet them all?  
> > > 
> > > Please see the discussion on v12[1], the problem is with the alignment
> > > of DMA mapped regions versus the bitmap.  A DMA mapping only requires
> > > page alignment, so for example imagine a user requests the bitmap from
> > > page zero to 4GB, but we have a DMA mapping starting at 4KB.  We can't
> > > efficiently copy the bitmap tracked by the vfio_dma structure to the
> > > user buffer when it's shifted by 1 bit.  Adjacent mappings can also
> > > make for a very complicated implementation.  In the discussion linked
> > > we decided to compromise on a more simple implementation that requires
> > > the user to ask for a bitmap which exactly matches a single DMA
> > > mapping, which Kirti indicates is what we require to support QEMU.
> > > Later in the series, the unmap operation also makes this requirement
> > > when used with the flags to retrieve the dirty bitmap.  Thanks,
> > >  
> > 
> > so, what about for vIOMMU enabling case?
> > if IOVAs are mapped per page, then there's a log_sync in qemu,
> > it's supposed for range from 0-U64MAX, qemu has to find out which
> > ones are mapped and cut them into pages before calling this IOCTL?
> > And what if those IOVAs are mapped for len more than one page?
> 
> Good question.  Kirti?
> 
> > > [1] https://lore.kernel.org/kvm/20200218215330.5bc8fc6a@w520.home/
> > >    
> > > > > +	npages = dma->size >> pgshift;
> > > > > +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
> > > > > +
> > > > > +	/* mark all pages dirty if all pages are pinned and mapped. */
> > > > > +	if (dma->iommu_mapped)
> > > > > +		bitmap_set(dma->bitmap, 0, npages);
> > > > > +
> > > > > +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> > > > > +		return -EFAULT;
> > > > > +
Here, dma->bitmap needs to be cleared. right?

> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> > > > > +{
> > > > > +	uint64_t bsize;
> > > > > +
> > > > > +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	bsize = DIRTY_BITMAP_BYTES(npages);
> > > > > +
> > > > > +	if (bitmap_size < bsize)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > >  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > > > >  			     struct vfio_iommu_type1_dma_unmap *unmap)
> > > > >  {
> > > > > @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> > > > >  
> > > > >  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> > > > >  			-EFAULT : 0;
> > > > > +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> > > > > +		struct vfio_iommu_type1_dirty_bitmap dirty;
> > > > > +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> > > > > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> > > > > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> > > > > +		int ret = 0;
> > > > > +
> > > > > +		if (!iommu->v2)
> > > > > +			return -EACCES;
> > > > > +
> > > > > +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> > > > > +				    flags);
> > > > > +
> > > > > +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> > > > > +			return -EFAULT;
> > > > > +
> > > > > +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> > > > > +			return -EINVAL;
> > > > > +
> > > > > +		/* only one flag should be set at a time */
> > > > > +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> > > > > +			return -EINVAL;
> > > > > +
> > > > > +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> > > > > +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > > > > +
> > > > > +			mutex_lock(&iommu->lock);
> > > > > +			if (!iommu->dirty_page_tracking) {
> > > > > +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
> > > > > +				if (!ret)
> > > > > +					iommu->dirty_page_tracking = true;
> > > > > +			}
> > > > > +			mutex_unlock(&iommu->lock);
> > > > > +			return ret;
> > > > > +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> > > > > +			mutex_lock(&iommu->lock);
> > > > > +			if (iommu->dirty_page_tracking) {
> > > > > +				iommu->dirty_page_tracking = false;
> > > > > +				vfio_dma_bitmap_free(iommu);
> > > > > +			}
> > > > > +			mutex_unlock(&iommu->lock);
> > > > > +			return 0;
> > > > > +		} else if (dirty.flags &
> > > > > +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> > > > > +			struct vfio_iommu_type1_dirty_bitmap_get range;
> > > > > +			unsigned long pgshift;
> > > > > +			size_t data_size = dirty.argsz - minsz;
> > > > > +			uint64_t iommu_pgsize =
> > > > > +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > > > > +
> > > > > +			if (!data_size || data_size < sizeof(range))
> > > > > +				return -EINVAL;
> > > > > +
> > > > > +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> > > > > +					   sizeof(range)))
> > > > > +				return -EFAULT;
> > > > > +
> > > > > +			/* allow only min supported pgsize */
> > > > > +			if (range.bitmap.pgsize != iommu_pgsize)
> > > > > +				return -EINVAL;
> > > > > +			if (range.iova & (iommu_pgsize - 1))
> > > > > +				return -EINVAL;
> > > > > +			if (!range.size || range.size & (iommu_pgsize - 1))
> > > > > +				return -EINVAL;
> > > > > +			if (range.iova + range.size < range.iova)
> > > > > +				return -EINVAL;
> > > > > +			if (!access_ok((void __user *)range.bitmap.data,
> > > > > +				       range.bitmap.size))
> > > > > +				return -EINVAL;
> > > > > +
> > > > > +			pgshift = __ffs(range.bitmap.pgsize);
> > > > > +			ret = verify_bitmap_size(range.size >> pgshift,
> > > > > +						 range.bitmap.size);
> > > > > +			if (ret)
> > > > > +				return ret;
> > > > > +
> > > > > +			mutex_lock(&iommu->lock);
> > > > > +			if (iommu->dirty_page_tracking)
> > > > > +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> > > > > +					 range.size, range.bitmap.pgsize,
> > > > > +				    (unsigned char __user *)range.bitmap.data);
> > > > > +			else
> > > > > +				ret = -EINVAL;
> > > > > +			mutex_unlock(&iommu->lock);
> > > > > +
> > > > > +			return ret;
> > > > > +		}
> > > > >  	}
> > > > >  
> > > > >  	return -ENOTTY;
> > > > > @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
> > > > >  
> > > > >  	vaddr = dma->vaddr + offset;
> > > > >  
> > > > > -	if (write)
> > > > > +	if (write) {
> > > > >  		*copied = __copy_to_user((void __user *)vaddr, data,
> > > > >  					 count) ? 0 : count;
> > > > > -	else
> > > > > +		if (*copied && iommu->dirty_page_tracking) {
> > > > > +			unsigned long pgshift =
> > > > > +				__ffs(vfio_pgsize_bitmap(iommu));
> > > > > +
> > > > > +			bitmap_set(dma->bitmap, offset >> pgshift,
> > > > > +				   *copied >> pgshift);
> > > > > +		}
> > > > > +	} else
> > > > >  		*copied = __copy_from_user(data, (void __user *)vaddr,
> > > > >  					   count) ? 0 : count;
> > > > >  	if (kthread)
> > > > > -- 
> > > > > 2.7.0
> > > > >     
> > > >   
> > >   
> > 
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-18 19:41 ` [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
  2020-03-19  3:45   ` Alex Williamson
@ 2020-03-19  6:24   ` Yan Zhao
  2020-03-20 19:41     ` Alex Williamson
  1 sibling, 1 reply; 47+ messages in thread
From: Yan Zhao @ 2020-03-19  6:24 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: alex.williamson, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, Mar 19, 2020 at 03:41:14AM +0800, Kirti Wankhede wrote:
> Added a check such that only singleton IOMMU groups can pin pages.
> From the point when vendor driver pins any pages, consider IOMMU group
> dirty page scope to be limited to pinned pages.
> 
> To optimize to avoid walking list often, added flag
> pinned_page_dirty_scope to indicate if all of the vfio_groups for each
> vfio_domain in the domain_list dirty page scope is limited to pinned
> pages. This flag is updated on first pinned pages request for that IOMMU
> group and on attaching/detaching group.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio.c             | 13 +++++--
>  drivers/vfio/vfio_iommu_type1.c | 77 +++++++++++++++++++++++++++++++++++++++--
>  include/linux/vfio.h            |  4 ++-
>  3 files changed, 87 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index 210fcf426643..311b5e4e111e 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -85,6 +85,7 @@ struct vfio_group {
>  	atomic_t			opened;
>  	wait_queue_head_t		container_q;
>  	bool				noiommu;
> +	unsigned int			dev_counter;
>  	struct kvm			*kvm;
>  	struct blocking_notifier_head	notifier;
>  };
> @@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
>  
>  	mutex_lock(&group->device_lock);
>  	list_add(&device->group_next, &group->device_list);
> +	group->dev_counter++;
>  	mutex_unlock(&group->device_lock);
>  
>  	return device;
> @@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
>  	struct vfio_group *group = device->group;
>  
>  	list_del(&device->group_next);
> +	group->dev_counter--;
>  	mutex_unlock(&group->device_lock);
>  
>  	dev_set_drvdata(device->dev, NULL);
> @@ -1933,6 +1936,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>  	if (!group)
>  		return -ENODEV;
>  
> +	if (group->dev_counter > 1)
> +		return -EINVAL;
> +
>  	ret = vfio_group_add_container_user(group);
>  	if (ret)
>  		goto err_pin_pages;
> @@ -1940,7 +1946,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>  	container = group->container;
>  	driver = container->iommu_driver;
>  	if (likely(driver && driver->ops->pin_pages))
> -		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
> +		ret = driver->ops->pin_pages(container->iommu_data,
> +					     group->iommu_group, user_pfn,
>  					     npage, prot, phys_pfn);
>  	else
>  		ret = -ENOTTY;
> @@ -2038,8 +2045,8 @@ int vfio_group_pin_pages(struct vfio_group *group,
>  	driver = container->iommu_driver;
>  	if (likely(driver && driver->ops->pin_pages))
>  		ret = driver->ops->pin_pages(container->iommu_data,
> -					     user_iova_pfn, npage,
> -					     prot, phys_pfn);
> +					     group->iommu_group, user_iova_pfn,
> +					     npage, prot, phys_pfn);
>  	else
>  		ret = -ENOTTY;
>  
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 912629320719..deec09f4b0f6 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -72,6 +72,7 @@ struct vfio_iommu {
>  	bool			v2;
>  	bool			nesting;
>  	bool			dirty_page_tracking;
> +	bool			pinned_page_dirty_scope;
>  };
>  
>  struct vfio_domain {
> @@ -99,6 +100,7 @@ struct vfio_group {
>  	struct iommu_group	*iommu_group;
>  	struct list_head	next;
>  	bool			mdev_group;	/* An mdev group */
> +	bool			pinned_page_dirty_scope;
>  };
>  
>  struct vfio_iova {
> @@ -132,6 +134,10 @@ struct vfio_regions {
>  static int put_pfn(unsigned long pfn, int prot);
>  static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>  
> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> +					       struct iommu_group *iommu_group);
> +
> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
>  /*
>   * This code handles mapping and unmapping of user data buffers
>   * into DMA'ble space using the IOMMU
> @@ -556,11 +562,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
>  }
>  
>  static int vfio_iommu_type1_pin_pages(void *iommu_data,
> +				      struct iommu_group *iommu_group,
>  				      unsigned long *user_pfn,
>  				      int npage, int prot,
>  				      unsigned long *phys_pfn)
>  {
>  	struct vfio_iommu *iommu = iommu_data;
> +	struct vfio_group *group;
>  	int i, j, ret;
>  	unsigned long remote_vaddr;
>  	struct vfio_dma *dma;
> @@ -630,8 +638,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  				   (vpfn->iova - dma->iova) >> pgshift, 1);
>  		}
>  	}

Could you provide an interface lightweight than vfio_pin_pages for pass-through
devices? e.g. vfio_mark_iova_dirty()

Or at least allowing phys_pfn to be empty for pass-through devices.

This is really inefficient:
bitmap_set(dma->bitmap, (vpfn->iova - dma->iova) / pgsize, 1));
i.e.
in order to mark an iova dirty, it has to go through iova ---> pfn --> iova
while acquiring pfn is not necessary for pass-through devices.


> -
>  	ret = i;
> +
> +	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
> +	if (!group->pinned_page_dirty_scope) {
> +		group->pinned_page_dirty_scope = true;
> +		update_pinned_page_dirty_scope(iommu);
> +	}
> +
>  	goto pin_done;
>  
>  pin_unwind:
> @@ -913,8 +927,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
>  	npages = dma->size >> pgshift;
>  	bitmap_size = DIRTY_BITMAP_BYTES(npages);
>  
> -	/* mark all pages dirty if all pages are pinned and mapped. */
> -	if (dma->iommu_mapped)
> +	/*
> +	 * mark all pages dirty if any IOMMU capable device is not able
> +	 * to report dirty pages and all pages are pinned and mapped.
> +	 */
> +	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
>  		bitmap_set(dma->bitmap, 0, npages);
>  
>  	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> @@ -1393,6 +1410,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
>  	return NULL;
>  }
>  
> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> +					       struct iommu_group *iommu_group)
> +{
> +	struct vfio_domain *domain;
> +	struct vfio_group *group = NULL;
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> +		group = find_iommu_group(domain, iommu_group);
> +		if (group)
> +			return group;
> +	}
> +
> +	if (iommu->external_domain)
> +		group = find_iommu_group(iommu->external_domain, iommu_group);
> +
> +	return group;
> +}
> +
> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
> +{
> +	struct vfio_domain *domain;
> +	struct vfio_group *group;
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> +		list_for_each_entry(group, &domain->group_list, next) {
> +			if (!group->pinned_page_dirty_scope) {
> +				iommu->pinned_page_dirty_scope = false;
> +				return;
> +			}
> +		}
> +	}
> +
> +	if (iommu->external_domain) {
> +		domain = iommu->external_domain;
> +		list_for_each_entry(group, &domain->group_list, next) {
> +			if (!group->pinned_page_dirty_scope) {
> +				iommu->pinned_page_dirty_scope = false;
> +				return;
> +			}
> +		}
> +	}
> +
> +	iommu->pinned_page_dirty_scope = true;
> +}
> +
>  static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
>  				  phys_addr_t *base)
>  {
> @@ -1799,6 +1861,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  
>  			list_add(&group->next,
>  				 &iommu->external_domain->group_list);
> +			group->pinned_page_dirty_scope = true;
> +			if (!iommu->pinned_page_dirty_scope)
> +				update_pinned_page_dirty_scope(iommu);
>  			mutex_unlock(&iommu->lock);
>  
>  			return 0;
> @@ -1921,6 +1986,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  done:
>  	/* Delete the old one and insert new iova list */
>  	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
> +	iommu->pinned_page_dirty_scope = false;
>  	mutex_unlock(&iommu->lock);
>  	vfio_iommu_resv_free(&group_resv_regions);
>  
> @@ -2073,6 +2139,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  	struct vfio_iommu *iommu = iommu_data;
>  	struct vfio_domain *domain;
>  	struct vfio_group *group;
> +	bool update_dirty_scope = false;
>  	LIST_HEAD(iova_copy);
>  
>  	mutex_lock(&iommu->lock);
> @@ -2080,6 +2147,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  	if (iommu->external_domain) {
>  		group = find_iommu_group(iommu->external_domain, iommu_group);
>  		if (group) {
> +			update_dirty_scope = !group->pinned_page_dirty_scope;
>  			list_del(&group->next);
>  			kfree(group);
>  
> @@ -2109,6 +2177,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  			continue;
>  
>  		vfio_iommu_detach_group(domain, group);
> +		update_dirty_scope = !group->pinned_page_dirty_scope;
>  		list_del(&group->next);
>  		kfree(group);
>  		/*
> @@ -2139,6 +2208,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  		vfio_iommu_iova_free(&iova_copy);
>  
>  detach_group_done:
> +	if (update_dirty_scope)
> +		update_pinned_page_dirty_scope(iommu);
>  	mutex_unlock(&iommu->lock);
>  }
>  
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index be2bd358b952..702e1d7b6e8b 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
>  					struct iommu_group *group);
>  	void		(*detach_group)(void *iommu_data,
>  					struct iommu_group *group);
> -	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
> +	int		(*pin_pages)(void *iommu_data,
> +				     struct iommu_group *group,
> +				     unsigned long *user_pfn,
>  				     int npage, int prot,
>  				     unsigned long *phys_pfn);
>  	int		(*unpin_pages)(void *iommu_data,
> -- 
> 2.7.0
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19  6:15           ` Yan Zhao
@ 2020-03-19 13:06             ` Alex Williamson
  2020-03-19 16:57               ` Kirti Wankhede
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-19 13:06 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, 19 Mar 2020 02:15:34 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Thu, Mar 19, 2020 at 12:40:53PM +0800, Alex Williamson wrote:
> > On Thu, 19 Mar 2020 00:15:33 -0400
> > Yan Zhao <yan.y.zhao@intel.com> wrote:
> >   
> > > On Thu, Mar 19, 2020 at 12:01:00PM +0800, Alex Williamson wrote:  
> > > > On Wed, 18 Mar 2020 23:06:39 -0400
> > > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > >     
> > > > > On Thu, Mar 19, 2020 at 03:41:11AM +0800, Kirti Wankhede wrote:    
> > > > > > VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> > > > > > - Start dirty pages tracking while migration is active
> > > > > > - Stop dirty pages tracking.
> > > > > > - Get dirty pages bitmap. Its user space application's responsibility to
> > > > > >   copy content of dirty pages from source to destination during migration.
> > > > > > 
> > > > > > To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> > > > > > structure. Bitmap size is calculated considering smallest supported page
> > > > > > size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
> > > > > > 
> > > > > > Bitmap is populated for already pinned pages when bitmap is allocated for
> > > > > > a vfio_dma with the smallest supported page size. Update bitmap from
> > > > > > pinning functions when tracking is enabled. When user application queries
> > > > > > bitmap, check if requested page size is same as page size used to
> > > > > > populated bitmap. If it is equal, copy bitmap, but if not equal, return
> > > > > > error.
> > > > > > 
> > > > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > > > ---
> > > > > >  drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
> > > > > >  1 file changed, 203 insertions(+), 2 deletions(-)
> > > > > > 
> > > > > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > > > > index 70aeab921d0f..d6417fb02174 100644
> > > > > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > > > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > > > > @@ -71,6 +71,7 @@ struct vfio_iommu {
> > > > > >  	unsigned int		dma_avail;
> > > > > >  	bool			v2;
> > > > > >  	bool			nesting;
> > > > > > +	bool			dirty_page_tracking;
> > > > > >  };
> > > > > >  
> > > > > >  struct vfio_domain {
> > > > > > @@ -91,6 +92,7 @@ struct vfio_dma {
> > > > > >  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
> > > > > >  	struct task_struct	*task;
> > > > > >  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> > > > > > +	unsigned long		*bitmap;
> > > > > >  };
> > > > > >  
> > > > > >  struct vfio_group {
> > > > > > @@ -125,7 +127,10 @@ struct vfio_regions {
> > > > > >  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
> > > > > >  					(!list_empty(&iommu->domain_list))
> > > > > >  
> > > > > > +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> > > > > > +
> > > > > >  static int put_pfn(unsigned long pfn, int prot);
> > > > > > +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> > > > > >  
> > > > > >  /*
> > > > > >   * This code handles mapping and unmapping of user data buffers
> > > > > > @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> > > > > >  	rb_erase(&old->node, &iommu->dma_list);
> > > > > >  }
> > > > > >  
> > > > > > +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
> > > > > > +{
> > > > > > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > > > > > +
> > > > > > +	for (; n; n = rb_next(n)) {
> > > > > > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > > > > > +		struct rb_node *p;
> > > > > > +		unsigned long npages = dma->size / pgsize;
> > > > > > +
> > > > > > +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> > > > > > +		if (!dma->bitmap) {
> > > > > > +			struct rb_node *p = rb_prev(n);
> > > > > > +
> > > > > > +			for (; p; p = rb_prev(p)) {
> > > > > > +				struct vfio_dma *dma = rb_entry(n,
> > > > > > +							struct vfio_dma, node);
> > > > > > +
> > > > > > +				kfree(dma->bitmap);
> > > > > > +				dma->bitmap = NULL;
> > > > > > +			}
> > > > > > +			return -ENOMEM;
> > > > > > +		}
> > > > > > +
> > > > > > +		if (RB_EMPTY_ROOT(&dma->pfn_list))
> > > > > > +			continue;
> > > > > > +
> > > > > > +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> > > > > > +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
> > > > > > +							 node);
> > > > > > +
> > > > > > +			bitmap_set(dma->bitmap,
> > > > > > +					(vpfn->iova - dma->iova) / pgsize, 1);
> > > > > > +		}
> > > > > > +	}
> > > > > > +	return 0;
> > > > > > +}
> > > > > > +
> > > > > > +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
> > > > > > +{
> > > > > > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > > > > > +
> > > > > > +	for (; n; n = rb_next(n)) {
> > > > > > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > > > > > +
> > > > > > +		kfree(dma->bitmap);
> > > > > > +		dma->bitmap = NULL;
> > > > > > +	}
> > > > > > +}
> > > > > > +
> > > > > >  /*
> > > > > >   * Helper Functions for host iova-pfn list
> > > > > >   */
> > > > > > @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> > > > > >  			vfio_unpin_page_external(dma, iova, do_accounting);
> > > > > >  			goto pin_unwind;
> > > > > >  		}
> > > > > > +
> > > > > > +		if (iommu->dirty_page_tracking) {
> > > > > > +			unsigned long pgshift =
> > > > > > +					 __ffs(vfio_pgsize_bitmap(iommu));
> > > > > > +
> > > > > > +			bitmap_set(dma->bitmap,
> > > > > > +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> > > > > > +		}
> > > > > >  	}
> > > > > >  
> > > > > >  	ret = i;
> > > > > > @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> > > > > >  	vfio_unmap_unpin(iommu, dma, true);
> > > > > >  	vfio_unlink_dma(iommu, dma);
> > > > > >  	put_task_struct(dma->task);
> > > > > > +	kfree(dma->bitmap);
> > > > > >  	kfree(dma);
> > > > > >  	iommu->dma_avail++;
> > > > > >  }
> > > > > > @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> > > > > >  	return bitmap;
> > > > > >  }
> > > > > >  
> > > > > > +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> > > > > > +				  size_t size, uint64_t pgsize,
> > > > > > +				  unsigned char __user *bitmap)
> > > > > > +{
> > > > > > +	struct vfio_dma *dma;
> > > > > > +	unsigned long pgshift = __ffs(pgsize);
> > > > > > +	unsigned int npages, bitmap_size;
> > > > > > +
> > > > > > +	dma = vfio_find_dma(iommu, iova, 1);
> > > > > > +
> > > > > > +	if (!dma)
> > > > > > +		return -EINVAL;
> > > > > > +
> > > > > > +	if (dma->iova != iova || dma->size != size)
> > > > > > +		return -EINVAL;
> > > > > > +      
> > > > > looks this size is passed from user. how can it ensure size always
> > > > > equals to dma->size ?
> > > > > 
> > > > > shouldn't we iterate dma tree to look for dirty for whole range if a
> > > > > single dma cannot meet them all?    
> > > > 
> > > > Please see the discussion on v12[1], the problem is with the alignment
> > > > of DMA mapped regions versus the bitmap.  A DMA mapping only requires
> > > > page alignment, so for example imagine a user requests the bitmap from
> > > > page zero to 4GB, but we have a DMA mapping starting at 4KB.  We can't
> > > > efficiently copy the bitmap tracked by the vfio_dma structure to the
> > > > user buffer when it's shifted by 1 bit.  Adjacent mappings can also
> > > > make for a very complicated implementation.  In the discussion linked
> > > > we decided to compromise on a more simple implementation that requires
> > > > the user to ask for a bitmap which exactly matches a single DMA
> > > > mapping, which Kirti indicates is what we require to support QEMU.
> > > > Later in the series, the unmap operation also makes this requirement
> > > > when used with the flags to retrieve the dirty bitmap.  Thanks,
> > > >    
> > > 
> > > so, what about for vIOMMU enabling case?
> > > if IOVAs are mapped per page, then there's a log_sync in qemu,
> > > it's supposed for range from 0-U64MAX, qemu has to find out which
> > > ones are mapped and cut them into pages before calling this IOCTL?
> > > And what if those IOVAs are mapped for len more than one page?  
> > 
> > Good question.  Kirti?
> >   
> > > > [1] https://lore.kernel.org/kvm/20200218215330.5bc8fc6a@w520.home/
> > > >      
> > > > > > +	npages = dma->size >> pgshift;
> > > > > > +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
> > > > > > +
> > > > > > +	/* mark all pages dirty if all pages are pinned and mapped. */
> > > > > > +	if (dma->iommu_mapped)
> > > > > > +		bitmap_set(dma->bitmap, 0, npages);
> > > > > > +
> > > > > > +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> > > > > > +		return -EFAULT;
> > > > > > +  
> Here, dma->bitmap needs to be cleared. right?

Ah, I missed re-checking this in my review.  v13 did clear it, but I
noted that we need to re-populate any currently pinned pages.  This
neither clears nor repopulates.  That's wrong.  Thanks,

Alex
 
> > > > > > +	return 0;
> > > > > > +}
> > > > > > +
> > > > > > +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> > > > > > +{
> > > > > > +	uint64_t bsize;
> > > > > > +
> > > > > > +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
> > > > > > +		return -EINVAL;
> > > > > > +
> > > > > > +	bsize = DIRTY_BITMAP_BYTES(npages);
> > > > > > +
> > > > > > +	if (bitmap_size < bsize)
> > > > > > +		return -EINVAL;
> > > > > > +
> > > > > > +	return 0;
> > > > > > +}
> > > > > > +
> > > > > >  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > > > > >  			     struct vfio_iommu_type1_dma_unmap *unmap)
> > > > > >  {
> > > > > > @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> > > > > >  
> > > > > >  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> > > > > >  			-EFAULT : 0;
> > > > > > +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> > > > > > +		struct vfio_iommu_type1_dirty_bitmap dirty;
> > > > > > +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> > > > > > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> > > > > > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> > > > > > +		int ret = 0;
> > > > > > +
> > > > > > +		if (!iommu->v2)
> > > > > > +			return -EACCES;
> > > > > > +
> > > > > > +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> > > > > > +				    flags);
> > > > > > +
> > > > > > +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> > > > > > +			return -EFAULT;
> > > > > > +
> > > > > > +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> > > > > > +			return -EINVAL;
> > > > > > +
> > > > > > +		/* only one flag should be set at a time */
> > > > > > +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> > > > > > +			return -EINVAL;
> > > > > > +
> > > > > > +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> > > > > > +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > > > > > +
> > > > > > +			mutex_lock(&iommu->lock);
> > > > > > +			if (!iommu->dirty_page_tracking) {
> > > > > > +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
> > > > > > +				if (!ret)
> > > > > > +					iommu->dirty_page_tracking = true;
> > > > > > +			}
> > > > > > +			mutex_unlock(&iommu->lock);
> > > > > > +			return ret;
> > > > > > +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> > > > > > +			mutex_lock(&iommu->lock);
> > > > > > +			if (iommu->dirty_page_tracking) {
> > > > > > +				iommu->dirty_page_tracking = false;
> > > > > > +				vfio_dma_bitmap_free(iommu);
> > > > > > +			}
> > > > > > +			mutex_unlock(&iommu->lock);
> > > > > > +			return 0;
> > > > > > +		} else if (dirty.flags &
> > > > > > +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> > > > > > +			struct vfio_iommu_type1_dirty_bitmap_get range;
> > > > > > +			unsigned long pgshift;
> > > > > > +			size_t data_size = dirty.argsz - minsz;
> > > > > > +			uint64_t iommu_pgsize =
> > > > > > +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > > > > > +
> > > > > > +			if (!data_size || data_size < sizeof(range))
> > > > > > +				return -EINVAL;
> > > > > > +
> > > > > > +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> > > > > > +					   sizeof(range)))
> > > > > > +				return -EFAULT;
> > > > > > +
> > > > > > +			/* allow only min supported pgsize */
> > > > > > +			if (range.bitmap.pgsize != iommu_pgsize)
> > > > > > +				return -EINVAL;
> > > > > > +			if (range.iova & (iommu_pgsize - 1))
> > > > > > +				return -EINVAL;
> > > > > > +			if (!range.size || range.size & (iommu_pgsize - 1))
> > > > > > +				return -EINVAL;
> > > > > > +			if (range.iova + range.size < range.iova)
> > > > > > +				return -EINVAL;
> > > > > > +			if (!access_ok((void __user *)range.bitmap.data,
> > > > > > +				       range.bitmap.size))
> > > > > > +				return -EINVAL;
> > > > > > +
> > > > > > +			pgshift = __ffs(range.bitmap.pgsize);
> > > > > > +			ret = verify_bitmap_size(range.size >> pgshift,
> > > > > > +						 range.bitmap.size);
> > > > > > +			if (ret)
> > > > > > +				return ret;
> > > > > > +
> > > > > > +			mutex_lock(&iommu->lock);
> > > > > > +			if (iommu->dirty_page_tracking)
> > > > > > +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> > > > > > +					 range.size, range.bitmap.pgsize,
> > > > > > +				    (unsigned char __user *)range.bitmap.data);
> > > > > > +			else
> > > > > > +				ret = -EINVAL;
> > > > > > +			mutex_unlock(&iommu->lock);
> > > > > > +
> > > > > > +			return ret;
> > > > > > +		}
> > > > > >  	}
> > > > > >  
> > > > > >  	return -ENOTTY;
> > > > > > @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
> > > > > >  
> > > > > >  	vaddr = dma->vaddr + offset;
> > > > > >  
> > > > > > -	if (write)
> > > > > > +	if (write) {
> > > > > >  		*copied = __copy_to_user((void __user *)vaddr, data,
> > > > > >  					 count) ? 0 : count;
> > > > > > -	else
> > > > > > +		if (*copied && iommu->dirty_page_tracking) {
> > > > > > +			unsigned long pgshift =
> > > > > > +				__ffs(vfio_pgsize_bitmap(iommu));
> > > > > > +
> > > > > > +			bitmap_set(dma->bitmap, offset >> pgshift,
> > > > > > +				   *copied >> pgshift);
> > > > > > +		}
> > > > > > +	} else
> > > > > >  		*copied = __copy_from_user(data, (void __user *)vaddr,
> > > > > >  					   count) ? 0 : count;
> > > > > >  	if (kthread)
> > > > > > -- 
> > > > > > 2.7.0
> > > > > >       
> > > > >     
> > > >     
> > >   
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-19  5:05       ` Yan Zhao
@ 2020-03-19 13:09         ` Alex Williamson
  2020-03-20  1:30           ` Yan Zhao
  2020-03-23 14:45           ` Auger Eric
  0 siblings, 2 replies; 47+ messages in thread
From: Alex Williamson @ 2020-03-19 13:09 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, 19 Mar 2020 01:05:54 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Thu, Mar 19, 2020 at 11:49:26AM +0800, Alex Williamson wrote:
> > On Wed, 18 Mar 2020 21:17:03 -0400
> > Yan Zhao <yan.y.zhao@intel.com> wrote:
> >   
> > > On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:  
> > > > - Defined MIGRATION region type and sub-type.
> > > > 
> > > > - Defined vfio_device_migration_info structure which will be placed at the
> > > >   0th offset of migration region to get/set VFIO device related
> > > >   information. Defined members of structure and usage on read/write access.
> > > > 
> > > > - Defined device states and state transition details.
> > > > 
> > > > - Defined sequence to be followed while saving and resuming VFIO device.
> > > > 
> > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > ---
> > > >  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 227 insertions(+)
> > > > 
> > > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > > index 9e843a147ead..d0021467af53 100644
> > > > --- a/include/uapi/linux/vfio.h
> > > > +++ b/include/uapi/linux/vfio.h
> > > > @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
> > > >  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
> > > >  #define VFIO_REGION_TYPE_GFX                    (1)
> > > >  #define VFIO_REGION_TYPE_CCW			(2)
> > > > +#define VFIO_REGION_TYPE_MIGRATION              (3)
> > > >  
> > > >  /* sub-types for VFIO_REGION_TYPE_PCI_* */
> > > >  
> > > > @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
> > > >  /* sub-types for VFIO_REGION_TYPE_CCW */
> > > >  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
> > > >  
> > > > +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> > > > +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> > > > +
> > > > +/*
> > > > + * The structure vfio_device_migration_info is placed at the 0th offset of
> > > > + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> > > > + * migration information. Field accesses from this structure are only supported
> > > > + * at their native width and alignment. Otherwise, the result is undefined and
> > > > + * vendor drivers should return an error.
> > > > + *
> > > > + * device_state: (read/write)
> > > > + *      - The user application writes to this field to inform the vendor driver
> > > > + *        about the device state to be transitioned to.
> > > > + *      - The vendor driver should take the necessary actions to change the
> > > > + *        device state. After successful transition to a given state, the
> > > > + *        vendor driver should return success on write(device_state, state)
> > > > + *        system call. If the device state transition fails, the vendor driver
> > > > + *        should return an appropriate -errno for the fault condition.
> > > > + *      - On the user application side, if the device state transition fails,
> > > > + *	  that is, if write(device_state, state) returns an error, read
> > > > + *	  device_state again to determine the current state of the device from
> > > > + *	  the vendor driver.
> > > > + *      - The vendor driver should return previous state of the device unless
> > > > + *        the vendor driver has encountered an internal error, in which case
> > > > + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> > > > + *      - The user application must use the device reset ioctl to recover the
> > > > + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> > > > + *        indicated to be in a valid device state by reading device_state, the
> > > > + *        user application may attempt to transition the device to any valid
> > > > + *        state reachable from the current state or terminate itself.
> > > > + *
> > > > + *      device_state consists of 3 bits:
> > > > + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> > > > + *        it indicates the _STOP state. When the device state is changed to
> > > > + *        _STOP, driver should stop the device before write() returns.
> > > > + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> > > > + *        driver should start gathering device state information that will be
> > > > + *        provided to the VFIO user application to save the device's state.
> > > > + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> > > > + *        the driver should prepare to resume the device. Data provided through
> > > > + *        the migration region should be used to resume the device.
> > > > + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> > > > + *      application should perform a read-modify-write operation on this
> > > > + *      field when modifying the specified bits.
> > > > + *
> > > > + *  +------- _RESUMING
> > > > + *  |+------ _SAVING
> > > > + *  ||+----- _RUNNING
> > > > + *  |||
> > > > + *  000b => Device Stopped, not saving or resuming
> > > > + *  001b => Device running, which is the default state
> > > > + *  010b => Stop the device & save the device state, stop-and-copy state
> > > > + *  011b => Device running and save the device state, pre-copy state
> > > > + *  100b => Device stopped and the device state is resuming
> > > > + *  101b => Invalid state
> > > > + *  110b => Error state
> > > > + *  111b => Invalid state
> > > > + *
> > > > + * State transitions:
> > > > + *
> > > > + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> > > > + *                (100b)     (001b)     (011b)        (010b)       (000b)
> > > > + * 0. Running or default state
> > > > + *                             |
> > > > + *
> > > > + * 1. Normal Shutdown (optional)
> > > > + *                             |------------------------------------->|
> > > > + *
> > > > + * 2. Save the state or suspend
> > > > + *                             |------------------------->|---------->|
> > > > + *
> > > > + * 3. Save the state during live migration
> > > > + *                             |----------->|------------>|---------->|
> > > > + *
> > > > + * 4. Resuming
> > > > + *                  |<---------|
> > > > + *
> > > > + * 5. Resumed
> > > > + *                  |--------->|
> > > > + *
> > > > + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> > > > + * 1. During normal shutdown of the user application, the user application may
> > > > + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> > > > + *    transition is optional. The vendor driver must support this transition but
> > > > + *    must not require it.
> > > > + * 2. When the user application saves state or suspends the application, the
> > > > + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> > > > + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> > > > + *    device, save the device state and send it to the application through the
> > > > + *    migration region. The sequence to be followed for such transition is given
> > > > + *    below.
> > > > + * 3. In live migration of user application, the state transitions from _RUNNING
> > > > + *    to pre-copy, to stop-and-copy, and to _STOP.
> > > > + *    On state transition from _RUNNING to pre-copy, the driver should start
> > > > + *    gathering the device state while the application is still running and send
> > > > + *    the device state data to application through the migration region.
> > > > + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> > > > + *    the device, save the device state and send it to the user application
> > > > + *    through the migration region.
> > > > + *    Vendor drivers must support the pre-copy state even for implementations
> > > > + *    where no data is provided to the user before the stop-and-copy state. The
> > > > + *    user must not be required to consume all migration data before the device
> > > > + *    transitions to a new state, including the stop-and-copy state.
> > > > + *    The sequence to be followed for above two transitions is given below.
> > > > + * 4. To start the resuming phase, the device state should be transitioned from
> > > > + *    the _RUNNING to the _RESUMING state.
> > > > + *    In the _RESUMING state, the driver should use the device state data
> > > > + *    received through the migration region to resume the device.
> > > > + * 5. After providing saved device data to the driver, the application should
> > > > + *    change the state from _RESUMING to _RUNNING.
> > > > + *
> > > > + * reserved:
> > > > + *      Reads on this field return zero and writes are ignored.
> > > > + *
> > > > + * pending_bytes: (read only)
> > > > + *      The number of pending bytes still to be migrated from the vendor driver.
> > > > + *
> > > > + * data_offset: (read only)
> > > > + *      The user application should read data_offset in the migration region
> > > > + *      from where the user application should read the device data during the
> > > > + *      _SAVING state or write the device data during the _RESUMING state. See
> > > > + *      below for details of sequence to be followed.
> > > > + *
> > > > + * data_size: (read/write)
> > > > + *      The user application should read data_size to get the size in bytes of
> > > > + *      the data copied in the migration region during the _SAVING state and
> > > > + *      write the size in bytes of the data copied in the migration region
> > > > + *      during the _RESUMING state.
> > > > + *
> > > > + * The format of the migration region is as follows:
> > > > + *  ------------------------------------------------------------------
> > > > + * |vfio_device_migration_info|    data section                      |
> > > > + * |                          |     ///////////////////////////////  |
> > > > + * ------------------------------------------------------------------
> > > > + *   ^                              ^
> > > > + *  offset 0-trapped part        data_offset
> > > > + *
> > > > + * The structure vfio_device_migration_info is always followed by the data
> > > > + * section in the region, so data_offset will always be nonzero. The offset
> > > > + * from where the data is copied is decided by the kernel driver. The data
> > > > + * section can be trapped, mapped, or partitioned, depending on how the kernel
> > > > + * driver defines the data section. The data section partition can be defined
> > > > + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> > > > + * page aligned, whereas initial section which contains the
> > > > + * vfio_device_migration_info structure, might not end at the offset, which is
> > > > + * page aligned. The user is not required to access through mmap regardless
> > > > + * of the capabilities of the region mmap.
> > > > + * The vendor driver should determine whether and how to partition the data
> > > > + * section. The vendor driver should return data_offset accordingly.
> > > > + *
> > > > + * The sequence to be followed for the _SAVING|_RUNNING device state or
> > > > + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
> > > > + * follows:
> > > > + * a. Read pending_bytes, indicating the start of a new iteration to get device
> > > > + *    data. Repeated read on pending_bytes at this stage should have no side
> > > > + *    effects.
> > > > + *    If pending_bytes == 0, the user application should not iterate to get data
> > > > + *    for that device.
> > > > + *    If pending_bytes > 0, perform the following steps.
> > > > + * b. Read data_offset, indicating that the vendor driver should make data
> > > > + *    available through the data section. The vendor driver should return this
> > > > + *    read operation only after data is available from (region + data_offset)
> > > > + *    to (region + data_offset + data_size).
> > > > + * c. Read data_size, which is the amount of data in bytes available through
> > > > + *    the migration region.
> > > > + *    Read on data_offset and data_size should return the offset and size of
> > > > + *    the current buffer if the user application reads data_offset and
> > > > + *    data_size more than once here.    
> > > If data region is mmaped, merely reading data_offset and data_size
> > > cannot let kernel know what are correct values to return.
> > > Consider to add a read operation which is trapped into kernel to let
> > > kernel exactly know it needs to move to the next offset and update data_size
> > > ?  
> > 
> > Both operations b. and c. above are to trapped registers, operation d.
> > below may potentially be to an mmap'd area, which is why we have step
> > f. which indicates to the vendor driver that the data has been
> > consumed.  Does that address your concern?  Thanks,
> >  
> No. :)
> the problem is about semantics of data_offset, data_size, and
> pending_bytes.
> b and c do not tell kernel that the data is read by user.
> so, without knowing step d happen, kernel cannot update pending_bytes to
> be returned in step f.

Sorry, I'm still not understanding, I see step f. as the indicator
you're looking for.  The user reads pending_bytes to indicate the data
in the migration area has been consumed.  The vendor driver updates its
internal state on that read and returns the updated value for
pending_bytes.  Thanks,

Alex
 
> > > > + * d. Read data_size bytes of data from (region + data_offset) from the
> > > > + *    migration region.
> > > > + * e. Process the data.
> > > > + * f. Read pending_bytes, which indicates that the data from the previous
> > > > + *    iteration has been read. If pending_bytes > 0, go to step b.
> > > > + *
> > > > + * If an error occurs during the above sequence, the vendor driver can return
> > > > + * an error code for next read() or write() operation, which will terminate the
> > > > + * loop. The user application should then take the next necessary action, for
> > > > + * example, failing migration or terminating the user application.
> > > > + *
> > > > + * The user application can transition from the _SAVING|_RUNNING
> > > > + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> > > > + * number of pending bytes. The user application should iterate in _SAVING
> > > > + * (stop-and-copy) until pending_bytes is 0.
> > > > + *
> > > > + * The sequence to be followed while _RESUMING device state is as follows:
> > > > + * While data for this device is available, repeat the following steps:
> > > > + * a. Read data_offset from where the user application should write data.
> > > > + * b. Write migration data starting at the migration region + data_offset for
> > > > + *    the length determined by data_size from the migration source.
> > > > + * c. Write data_size, which indicates to the vendor driver that data is
> > > > + *    written in the migration region. Vendor driver should apply the
> > > > + *    user-provided migration region data to the device resume state.
> > > > + *
> > > > + * For the user application, data is opaque. The user application should write
> > > > + * data in the same order as the data is received and the data should be of
> > > > + * same transaction size at the source.
> > > > + */
> > > > +
> > > > +struct vfio_device_migration_info {
> > > > +	__u32 device_state;         /* VFIO device state */
> > > > +#define VFIO_DEVICE_STATE_STOP      (0)
> > > > +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> > > > +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> > > > +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> > > > +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> > > > +				     VFIO_DEVICE_STATE_SAVING |  \
> > > > +				     VFIO_DEVICE_STATE_RESUMING)
> > > > +
> > > > +#define VFIO_DEVICE_STATE_VALID(state) \
> > > > +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> > > > +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> > > > +
> > > > +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> > > > +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> > > > +					      VFIO_DEVICE_STATE_RESUMING))
> > > > +
> > > > +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> > > > +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> > > > +					     VFIO_DEVICE_STATE_RESUMING)
> > > > +
> > > > +	__u32 reserved;
> > > > +	__u64 pending_bytes;
> > > > +	__u64 data_offset;
> > > > +	__u64 data_size;
> > > > +} __attribute__((packed));
> > > > +
> > > >  /*
> > > >   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> > > >   * which allows direct access to non-MSIX registers which happened to be within
> > > > -- 
> > > > 2.7.0
> > > >     
> > >   
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19  3:45   ` Alex Williamson
@ 2020-03-19 14:52     ` Kirti Wankhede
  2020-03-19 16:22       ` Alex Williamson
  2020-03-19 18:57     ` Kirti Wankhede
  1 sibling, 1 reply; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-19 14:52 UTC (permalink / raw)
  To: Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm



On 3/19/2020 9:15 AM, Alex Williamson wrote:
> On Thu, 19 Mar 2020 01:11:11 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
>> - Start dirty pages tracking while migration is active
>> - Stop dirty pages tracking.
>> - Get dirty pages bitmap. Its user space application's responsibility to
>>    copy content of dirty pages from source to destination during migration.
>>
>> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
>> structure. Bitmap size is calculated considering smallest supported page
>> size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
>>
>> Bitmap is populated for already pinned pages when bitmap is allocated for
>> a vfio_dma with the smallest supported page size. Update bitmap from
>> pinning functions when tracking is enabled. When user application queries
>> bitmap, check if requested page size is same as page size used to
>> populated bitmap. If it is equal, copy bitmap, but if not equal, return
>> error.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 203 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 70aeab921d0f..d6417fb02174 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -71,6 +71,7 @@ struct vfio_iommu {
>>   	unsigned int		dma_avail;
>>   	bool			v2;
>>   	bool			nesting;
>> +	bool			dirty_page_tracking;
>>   };
>>   
>>   struct vfio_domain {
>> @@ -91,6 +92,7 @@ struct vfio_dma {
>>   	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>>   	struct task_struct	*task;
>>   	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
>> +	unsigned long		*bitmap;
> 
> We've made the bitmap a width invariant u64 else, should be here as
> well.
> 
>>   };
>>   
>>   struct vfio_group {
>> @@ -125,7 +127,10 @@ struct vfio_regions {
>>   #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
>>   					(!list_empty(&iommu->domain_list))
>>   
>> +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
>> +
>>   static int put_pfn(unsigned long pfn, int prot);
>> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>>   
>>   /*
>>    * This code handles mapping and unmapping of user data buffers
>> @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>>   	rb_erase(&old->node, &iommu->dma_list);
>>   }
>>   
>> +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
>> +{
>> +	struct rb_node *n = rb_first(&iommu->dma_list);
>> +
>> +	for (; n; n = rb_next(n)) {
>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> +		struct rb_node *p;
>> +		unsigned long npages = dma->size / pgsize;
>> +
>> +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
>> +		if (!dma->bitmap) {
>> +			struct rb_node *p = rb_prev(n);
>> +
>> +			for (; p; p = rb_prev(p)) {
>> +				struct vfio_dma *dma = rb_entry(n,
>> +							struct vfio_dma, node);
>> +
>> +				kfree(dma->bitmap);
>> +				dma->bitmap = NULL;
>> +			}
>> +			return -ENOMEM;
>> +		}
>> +
>> +		if (RB_EMPTY_ROOT(&dma->pfn_list))
>> +			continue;
>> +
>> +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
>> +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
>> +							 node);
>> +
>> +			bitmap_set(dma->bitmap,
>> +					(vpfn->iova - dma->iova) / pgsize, 1);
>> +		}
>> +	}
>> +	return 0;
>> +}
>> +
>> +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
>> +{
>> +	struct rb_node *n = rb_first(&iommu->dma_list);
>> +
>> +	for (; n; n = rb_next(n)) {
>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> +
>> +		kfree(dma->bitmap);
>> +		dma->bitmap = NULL;
>> +	}
>> +}
>> +
>>   /*
>>    * Helper Functions for host iova-pfn list
>>    */
>> @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>>   			vfio_unpin_page_external(dma, iova, do_accounting);
>>   			goto pin_unwind;
>>   		}
>> +
>> +		if (iommu->dirty_page_tracking) {
>> +			unsigned long pgshift =
>> +					 __ffs(vfio_pgsize_bitmap(iommu));
>> +
>> +			bitmap_set(dma->bitmap,
>> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
>> +		}
>>   	}
>>   
>>   	ret = i;
>> @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>>   	vfio_unmap_unpin(iommu, dma, true);
>>   	vfio_unlink_dma(iommu, dma);
>>   	put_task_struct(dma->task);
>> +	kfree(dma->bitmap);
>>   	kfree(dma);
>>   	iommu->dma_avail++;
>>   }
>> @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>>   	return bitmap;
>>   }
>>   
>> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
>> +				  size_t size, uint64_t pgsize,
>> +				  unsigned char __user *bitmap)
> 
> And here, why do callers cast to an unsigned char pointer when we're
> going to cast to a void pointer anyway?  Should be a u64 __user pointer.
> 
>> +{
>> +	struct vfio_dma *dma;
>> +	unsigned long pgshift = __ffs(pgsize);
>> +	unsigned int npages, bitmap_size;
>> +
>> +	dma = vfio_find_dma(iommu, iova, 1);
>> +
>> +	if (!dma)
>> +		return -EINVAL;
>> +
>> +	if (dma->iova != iova || dma->size != size)
>> +		return -EINVAL;
>> +
>> +	npages = dma->size >> pgshift;
>> +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
>> +
>> +	/* mark all pages dirty if all pages are pinned and mapped. */
>> +	if (dma->iommu_mapped)
>> +		bitmap_set(dma->bitmap, 0, npages);
>> +
>> +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
>> +		return -EFAULT;
>> +
>> +	return 0;
>> +}
>> +
>> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
>> +{
>> +	uint64_t bsize;
>> +
>> +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
> 
> As commented previously, how do we derive this UINT_MAX limitation?
> 

Sorry, I missed that earlier

 > UINT_MAX seems arbitrary, is this specified in our API?  The size of a
 > vfio_dma is limited to what the user is able to pin, and therefore
 > their locked memory limit, but do we have an explicit limit elsewhere
 > that results in this limit here.  I think a 4GB bitmap would track
 > something like 2^47 bytes of memory, that's pretty excessive, but still
 > an arbitrary limit.

There has to be some upper limit check. In core KVM, in
virt/kvm/kvm_main.c there is max number of pages check:

if (new.npages > KVM_MEM_MAX_NR_PAGES)

Where
/*
  * Some of the bitops functions do not support too long bitmaps.
  * This number must be determined not to exceed such limits.
  */
#define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)

Though I don't know which bitops functions do not support long bitmaps.

Something similar as above can be done or same as you also mentioned of 
4GB bitmap limit? that is U32_MAX instead of UINT_MAX?

>> +		return -EINVAL;
>> +
>> +	bsize = DIRTY_BITMAP_BYTES(npages);
>> +
>> +	if (bitmap_size < bsize)
>> +		return -EINVAL;
>> +
>> +	return 0;
>> +}
>> +
>>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>   			     struct vfio_iommu_type1_dma_unmap *unmap)
>>   {
> 
> We didn't address that vfio_dma_do_map() needs to kvzalloc a bitmap for
> any new vfio_dma created while iommu->dirty_page_tracking = true.
> 

Good point. Adding it.

Thanks,
Kirti

>> @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>>   
>>   		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>>   			-EFAULT : 0;
>> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
>> +		struct vfio_iommu_type1_dirty_bitmap dirty;
>> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
>> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
>> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
>> +		int ret = 0;
>> +
>> +		if (!iommu->v2)
>> +			return -EACCES;
>> +
>> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
>> +				    flags);
>> +
>> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
>> +			return -EFAULT;
>> +
>> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
>> +			return -EINVAL;
>> +
>> +		/* only one flag should be set at a time */
>> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
>> +			return -EINVAL;
>> +
>> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
>> +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
>> +
>> +			mutex_lock(&iommu->lock);
>> +			if (!iommu->dirty_page_tracking) {
>> +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
>> +				if (!ret)
>> +					iommu->dirty_page_tracking = true;
>> +			}
>> +			mutex_unlock(&iommu->lock);
>> +			return ret;
>> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
>> +			mutex_lock(&iommu->lock);
>> +			if (iommu->dirty_page_tracking) {
>> +				iommu->dirty_page_tracking = false;
>> +				vfio_dma_bitmap_free(iommu);
>> +			}
>> +			mutex_unlock(&iommu->lock);
>> +			return 0;
>> +		} else if (dirty.flags &
>> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
>> +			struct vfio_iommu_type1_dirty_bitmap_get range;
>> +			unsigned long pgshift;
>> +			size_t data_size = dirty.argsz - minsz;
>> +			uint64_t iommu_pgsize =
>> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
>> +
>> +			if (!data_size || data_size < sizeof(range))
>> +				return -EINVAL;
>> +
>> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
>> +					   sizeof(range)))
>> +				return -EFAULT;
>> +
>> +			/* allow only min supported pgsize */
>> +			if (range.bitmap.pgsize != iommu_pgsize)
>> +				return -EINVAL;
>> +			if (range.iova & (iommu_pgsize - 1))
>> +				return -EINVAL;
>> +			if (!range.size || range.size & (iommu_pgsize - 1))
>> +				return -EINVAL;
>> +			if (range.iova + range.size < range.iova)
>> +				return -EINVAL;
>> +			if (!access_ok((void __user *)range.bitmap.data,
>> +				       range.bitmap.size))
>> +				return -EINVAL;
>> +
>> +			pgshift = __ffs(range.bitmap.pgsize);
>> +			ret = verify_bitmap_size(range.size >> pgshift,
>> +						 range.bitmap.size);
>> +			if (ret)
>> +				return ret;
>> +
>> +			mutex_lock(&iommu->lock);
>> +			if (iommu->dirty_page_tracking)
>> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
>> +					 range.size, range.bitmap.pgsize,
>> +				    (unsigned char __user *)range.bitmap.data);
>> +			else
>> +				ret = -EINVAL;
>> +			mutex_unlock(&iommu->lock);
>> +
>> +			return ret;
>> +		}
>>   	}
>>   
>>   	return -ENOTTY;
>> @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
>>   
>>   	vaddr = dma->vaddr + offset;
>>   
>> -	if (write)
>> +	if (write) {
>>   		*copied = __copy_to_user((void __user *)vaddr, data,
>>   					 count) ? 0 : count;
>> -	else
>> +		if (*copied && iommu->dirty_page_tracking) {
>> +			unsigned long pgshift =
>> +				__ffs(vfio_pgsize_bitmap(iommu));
>> +
>> +			bitmap_set(dma->bitmap, offset >> pgshift,
>> +				   *copied >> pgshift);
>> +		}
>> +	} else
> 
> Great, thanks for adding this!
> 
>>   		*copied = __copy_from_user(data, (void __user *)vaddr,
>>   					   count) ? 0 : count;
>>   	if (kthread)
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19 14:52     ` Kirti Wankhede
@ 2020-03-19 16:22       ` Alex Williamson
  2020-03-19 20:25         ` Kirti Wankhede
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-19 16:22 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Thu, 19 Mar 2020 20:22:41 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 3/19/2020 9:15 AM, Alex Williamson wrote:
> > On Thu, 19 Mar 2020 01:11:11 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >   
> >> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> >> - Start dirty pages tracking while migration is active
> >> - Stop dirty pages tracking.
> >> - Get dirty pages bitmap. Its user space application's responsibility to
> >>    copy content of dirty pages from source to destination during migration.
> >>
> >> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> >> structure. Bitmap size is calculated considering smallest supported page
> >> size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
> >>
> >> Bitmap is populated for already pinned pages when bitmap is allocated for
> >> a vfio_dma with the smallest supported page size. Update bitmap from
> >> pinning functions when tracking is enabled. When user application queries
> >> bitmap, check if requested page size is same as page size used to
> >> populated bitmap. If it is equal, copy bitmap, but if not equal, return
> >> error.
> >>
> >> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> >> Reviewed-by: Neo Jia <cjia@nvidia.com>
> >> ---
> >>   drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
> >>   1 file changed, 203 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index 70aeab921d0f..d6417fb02174 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -71,6 +71,7 @@ struct vfio_iommu {
> >>   	unsigned int		dma_avail;
> >>   	bool			v2;
> >>   	bool			nesting;
> >> +	bool			dirty_page_tracking;
> >>   };
> >>   
> >>   struct vfio_domain {
> >> @@ -91,6 +92,7 @@ struct vfio_dma {
> >>   	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
> >>   	struct task_struct	*task;
> >>   	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> >> +	unsigned long		*bitmap;  
> > 
> > We've made the bitmap a width invariant u64 else, should be here as
> > well.
> >   
> >>   };
> >>   
> >>   struct vfio_group {
> >> @@ -125,7 +127,10 @@ struct vfio_regions {
> >>   #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
> >>   					(!list_empty(&iommu->domain_list))
> >>   
> >> +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> >> +
> >>   static int put_pfn(unsigned long pfn, int prot);
> >> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> >>   
> >>   /*
> >>    * This code handles mapping and unmapping of user data buffers
> >> @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> >>   	rb_erase(&old->node, &iommu->dma_list);
> >>   }
> >>   
> >> +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
> >> +{
> >> +	struct rb_node *n = rb_first(&iommu->dma_list);
> >> +
> >> +	for (; n; n = rb_next(n)) {
> >> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >> +		struct rb_node *p;
> >> +		unsigned long npages = dma->size / pgsize;
> >> +
> >> +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> >> +		if (!dma->bitmap) {
> >> +			struct rb_node *p = rb_prev(n);
> >> +
> >> +			for (; p; p = rb_prev(p)) {
> >> +				struct vfio_dma *dma = rb_entry(n,
> >> +							struct vfio_dma, node);
> >> +
> >> +				kfree(dma->bitmap);
> >> +				dma->bitmap = NULL;
> >> +			}
> >> +			return -ENOMEM;
> >> +		}
> >> +
> >> +		if (RB_EMPTY_ROOT(&dma->pfn_list))
> >> +			continue;
> >> +
> >> +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> >> +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
> >> +							 node);
> >> +
> >> +			bitmap_set(dma->bitmap,
> >> +					(vpfn->iova - dma->iova) / pgsize, 1);
> >> +		}
> >> +	}
> >> +	return 0;
> >> +}
> >> +
> >> +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
> >> +{
> >> +	struct rb_node *n = rb_first(&iommu->dma_list);
> >> +
> >> +	for (; n; n = rb_next(n)) {
> >> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >> +
> >> +		kfree(dma->bitmap);
> >> +		dma->bitmap = NULL;
> >> +	}
> >> +}
> >> +
> >>   /*
> >>    * Helper Functions for host iova-pfn list
> >>    */
> >> @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >>   			vfio_unpin_page_external(dma, iova, do_accounting);
> >>   			goto pin_unwind;
> >>   		}
> >> +
> >> +		if (iommu->dirty_page_tracking) {
> >> +			unsigned long pgshift =
> >> +					 __ffs(vfio_pgsize_bitmap(iommu));
> >> +
> >> +			bitmap_set(dma->bitmap,
> >> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> >> +		}
> >>   	}
> >>   
> >>   	ret = i;
> >> @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> >>   	vfio_unmap_unpin(iommu, dma, true);
> >>   	vfio_unlink_dma(iommu, dma);
> >>   	put_task_struct(dma->task);
> >> +	kfree(dma->bitmap);
> >>   	kfree(dma);
> >>   	iommu->dma_avail++;
> >>   }
> >> @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> >>   	return bitmap;
> >>   }
> >>   
> >> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> >> +				  size_t size, uint64_t pgsize,
> >> +				  unsigned char __user *bitmap)  
> > 
> > And here, why do callers cast to an unsigned char pointer when we're
> > going to cast to a void pointer anyway?  Should be a u64 __user pointer.
> >   
> >> +{
> >> +	struct vfio_dma *dma;
> >> +	unsigned long pgshift = __ffs(pgsize);
> >> +	unsigned int npages, bitmap_size;
> >> +
> >> +	dma = vfio_find_dma(iommu, iova, 1);
> >> +
> >> +	if (!dma)
> >> +		return -EINVAL;
> >> +
> >> +	if (dma->iova != iova || dma->size != size)
> >> +		return -EINVAL;
> >> +
> >> +	npages = dma->size >> pgshift;
> >> +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
> >> +
> >> +	/* mark all pages dirty if all pages are pinned and mapped. */
> >> +	if (dma->iommu_mapped)
> >> +		bitmap_set(dma->bitmap, 0, npages);
> >> +
> >> +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> >> +		return -EFAULT;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> >> +{
> >> +	uint64_t bsize;
> >> +
> >> +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)  
> > 
> > As commented previously, how do we derive this UINT_MAX limitation?
> >   
> 
> Sorry, I missed that earlier
> 
>  > UINT_MAX seems arbitrary, is this specified in our API?  The size of a
>  > vfio_dma is limited to what the user is able to pin, and therefore
>  > their locked memory limit, but do we have an explicit limit elsewhere
>  > that results in this limit here.  I think a 4GB bitmap would track
>  > something like 2^47 bytes of memory, that's pretty excessive, but still
>  > an arbitrary limit.  
> 
> There has to be some upper limit check. In core KVM, in
> virt/kvm/kvm_main.c there is max number of pages check:
> 
> if (new.npages > KVM_MEM_MAX_NR_PAGES)
> 
> Where
> /*
>   * Some of the bitops functions do not support too long bitmaps.
>   * This number must be determined not to exceed such limits.
>   */
> #define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
> 
> Though I don't know which bitops functions do not support long bitmaps.
> 
> Something similar as above can be done or same as you also mentioned of 
> 4GB bitmap limit? that is U32_MAX instead of UINT_MAX?

Let's see, we use bitmap_set():

void bitmap_set(unsigned long *map, unsigned int start, unsigned int nbits)

So we're limited to an unsigned int number of bits, but for an
unaligned, multi-bit operation this will call __bitmap_set():

void __bitmap_set(unsigned long *map, unsigned int start, int len)

So we're down to a signed int number of bits (seems like an API bug in
bitops there), so it makes sense that KVM is testing against MAX_INT
number of pages, ie. number of bits.  But that still suggests a bitmap
size of MAX_UINT is off by a factor of 16.  So we can have 2^31 bits
divided by 2^3 bits/byte yields a maximum bitmap size of 2^28 (ie.
256MB), which maps 2^31 * 2^12 = 2^43 (8TB) on a 4K system.

Let's fix the limit check and put a nice comment explaining it.  Thanks,

Alex

> >> +		return -EINVAL;
> >> +
> >> +	bsize = DIRTY_BITMAP_BYTES(npages);
> >> +
> >> +	if (bitmap_size < bsize)
> >> +		return -EINVAL;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >>   			     struct vfio_iommu_type1_dma_unmap *unmap)
> >>   {  
> > 
> > We didn't address that vfio_dma_do_map() needs to kvzalloc a bitmap for
> > any new vfio_dma created while iommu->dirty_page_tracking = true.
> >   
> 
> Good point. Adding it.
> 
> Thanks,
> Kirti
> 
> >> @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> >>   
> >>   		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> >>   			-EFAULT : 0;
> >> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> >> +		struct vfio_iommu_type1_dirty_bitmap dirty;
> >> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> >> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> >> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> >> +		int ret = 0;
> >> +
> >> +		if (!iommu->v2)
> >> +			return -EACCES;
> >> +
> >> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> >> +				    flags);
> >> +
> >> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> >> +			return -EFAULT;
> >> +
> >> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> >> +			return -EINVAL;
> >> +
> >> +		/* only one flag should be set at a time */
> >> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> >> +			return -EINVAL;
> >> +
> >> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> >> +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
> >> +
> >> +			mutex_lock(&iommu->lock);
> >> +			if (!iommu->dirty_page_tracking) {
> >> +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
> >> +				if (!ret)
> >> +					iommu->dirty_page_tracking = true;
> >> +			}
> >> +			mutex_unlock(&iommu->lock);
> >> +			return ret;
> >> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> >> +			mutex_lock(&iommu->lock);
> >> +			if (iommu->dirty_page_tracking) {
> >> +				iommu->dirty_page_tracking = false;
> >> +				vfio_dma_bitmap_free(iommu);
> >> +			}
> >> +			mutex_unlock(&iommu->lock);
> >> +			return 0;
> >> +		} else if (dirty.flags &
> >> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> >> +			struct vfio_iommu_type1_dirty_bitmap_get range;
> >> +			unsigned long pgshift;
> >> +			size_t data_size = dirty.argsz - minsz;
> >> +			uint64_t iommu_pgsize =
> >> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> >> +
> >> +			if (!data_size || data_size < sizeof(range))
> >> +				return -EINVAL;
> >> +
> >> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> >> +					   sizeof(range)))
> >> +				return -EFAULT;
> >> +
> >> +			/* allow only min supported pgsize */
> >> +			if (range.bitmap.pgsize != iommu_pgsize)
> >> +				return -EINVAL;
> >> +			if (range.iova & (iommu_pgsize - 1))
> >> +				return -EINVAL;
> >> +			if (!range.size || range.size & (iommu_pgsize - 1))
> >> +				return -EINVAL;
> >> +			if (range.iova + range.size < range.iova)
> >> +				return -EINVAL;
> >> +			if (!access_ok((void __user *)range.bitmap.data,
> >> +				       range.bitmap.size))
> >> +				return -EINVAL;
> >> +
> >> +			pgshift = __ffs(range.bitmap.pgsize);
> >> +			ret = verify_bitmap_size(range.size >> pgshift,
> >> +						 range.bitmap.size);
> >> +			if (ret)
> >> +				return ret;
> >> +
> >> +			mutex_lock(&iommu->lock);
> >> +			if (iommu->dirty_page_tracking)
> >> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> >> +					 range.size, range.bitmap.pgsize,
> >> +				    (unsigned char __user *)range.bitmap.data);
> >> +			else
> >> +				ret = -EINVAL;
> >> +			mutex_unlock(&iommu->lock);
> >> +
> >> +			return ret;
> >> +		}
> >>   	}
> >>   
> >>   	return -ENOTTY;
> >> @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
> >>   
> >>   	vaddr = dma->vaddr + offset;
> >>   
> >> -	if (write)
> >> +	if (write) {
> >>   		*copied = __copy_to_user((void __user *)vaddr, data,
> >>   					 count) ? 0 : count;
> >> -	else
> >> +		if (*copied && iommu->dirty_page_tracking) {
> >> +			unsigned long pgshift =
> >> +				__ffs(vfio_pgsize_bitmap(iommu));
> >> +
> >> +			bitmap_set(dma->bitmap, offset >> pgshift,
> >> +				   *copied >> pgshift);
> >> +		}
> >> +	} else  
> > 
> > Great, thanks for adding this!
> >   
> >>   		*copied = __copy_from_user(data, (void __user *)vaddr,
> >>   					   count) ? 0 : count;
> >>   	if (kthread)  
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19 13:06             ` Alex Williamson
@ 2020-03-19 16:57               ` Kirti Wankhede
  2020-03-20  0:51                 ` Yan Zhao
  0 siblings, 1 reply; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-19 16:57 UTC (permalink / raw)
  To: Alex Williamson, Yan Zhao
  Cc: cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng, Liu, Yi L,
	mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, Wang,
	Zhi A, qemu-devel, kvm



On 3/19/2020 6:36 PM, Alex Williamson wrote:
> On Thu, 19 Mar 2020 02:15:34 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
>> On Thu, Mar 19, 2020 at 12:40:53PM +0800, Alex Williamson wrote:
>>> On Thu, 19 Mar 2020 00:15:33 -0400
>>> Yan Zhao <yan.y.zhao@intel.com> wrote:
>>>    
>>>> On Thu, Mar 19, 2020 at 12:01:00PM +0800, Alex Williamson wrote:
>>>>> On Wed, 18 Mar 2020 23:06:39 -0400
>>>>> Yan Zhao <yan.y.zhao@intel.com> wrote:
>>>>>      
>>>>>> On Thu, Mar 19, 2020 at 03:41:11AM +0800, Kirti Wankhede wrote:
>>>>>>> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
>>>>>>> - Start dirty pages tracking while migration is active
>>>>>>> - Stop dirty pages tracking.
>>>>>>> - Get dirty pages bitmap. Its user space application's responsibility to
>>>>>>>    copy content of dirty pages from source to destination during migration.
>>>>>>>
>>>>>>> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
>>>>>>> structure. Bitmap size is calculated considering smallest supported page
>>>>>>> size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
>>>>>>>
>>>>>>> Bitmap is populated for already pinned pages when bitmap is allocated for
>>>>>>> a vfio_dma with the smallest supported page size. Update bitmap from
>>>>>>> pinning functions when tracking is enabled. When user application queries
>>>>>>> bitmap, check if requested page size is same as page size used to
>>>>>>> populated bitmap. If it is equal, copy bitmap, but if not equal, return
>>>>>>> error.
>>>>>>>
>>>>>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>>>>>>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>>>>>>> ---
>>>>>>>   drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
>>>>>>>   1 file changed, 203 insertions(+), 2 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>>>>>> index 70aeab921d0f..d6417fb02174 100644
>>>>>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>>>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>>>>>> @@ -71,6 +71,7 @@ struct vfio_iommu {
>>>>>>>   	unsigned int		dma_avail;
>>>>>>>   	bool			v2;
>>>>>>>   	bool			nesting;
>>>>>>> +	bool			dirty_page_tracking;
>>>>>>>   };
>>>>>>>   
>>>>>>>   struct vfio_domain {
>>>>>>> @@ -91,6 +92,7 @@ struct vfio_dma {
>>>>>>>   	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>>>>>>>   	struct task_struct	*task;
>>>>>>>   	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
>>>>>>> +	unsigned long		*bitmap;
>>>>>>>   };
>>>>>>>   
>>>>>>>   struct vfio_group {
>>>>>>> @@ -125,7 +127,10 @@ struct vfio_regions {
>>>>>>>   #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
>>>>>>>   					(!list_empty(&iommu->domain_list))
>>>>>>>   
>>>>>>> +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
>>>>>>> +
>>>>>>>   static int put_pfn(unsigned long pfn, int prot);
>>>>>>> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>>>>>>>   
>>>>>>>   /*
>>>>>>>    * This code handles mapping and unmapping of user data buffers
>>>>>>> @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>>>>>>>   	rb_erase(&old->node, &iommu->dma_list);
>>>>>>>   }
>>>>>>>   
>>>>>>> +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
>>>>>>> +{
>>>>>>> +	struct rb_node *n = rb_first(&iommu->dma_list);
>>>>>>> +
>>>>>>> +	for (; n; n = rb_next(n)) {
>>>>>>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>>>>>>> +		struct rb_node *p;
>>>>>>> +		unsigned long npages = dma->size / pgsize;
>>>>>>> +
>>>>>>> +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
>>>>>>> +		if (!dma->bitmap) {
>>>>>>> +			struct rb_node *p = rb_prev(n);
>>>>>>> +
>>>>>>> +			for (; p; p = rb_prev(p)) {
>>>>>>> +				struct vfio_dma *dma = rb_entry(n,
>>>>>>> +							struct vfio_dma, node);
>>>>>>> +
>>>>>>> +				kfree(dma->bitmap);
>>>>>>> +				dma->bitmap = NULL;
>>>>>>> +			}
>>>>>>> +			return -ENOMEM;
>>>>>>> +		}
>>>>>>> +
>>>>>>> +		if (RB_EMPTY_ROOT(&dma->pfn_list))
>>>>>>> +			continue;
>>>>>>> +
>>>>>>> +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
>>>>>>> +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
>>>>>>> +							 node);
>>>>>>> +
>>>>>>> +			bitmap_set(dma->bitmap,
>>>>>>> +					(vpfn->iova - dma->iova) / pgsize, 1);
>>>>>>> +		}
>>>>>>> +	}
>>>>>>> +	return 0;
>>>>>>> +}
>>>>>>> +
>>>>>>> +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
>>>>>>> +{
>>>>>>> +	struct rb_node *n = rb_first(&iommu->dma_list);
>>>>>>> +
>>>>>>> +	for (; n; n = rb_next(n)) {
>>>>>>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>>>>>>> +
>>>>>>> +		kfree(dma->bitmap);
>>>>>>> +		dma->bitmap = NULL;
>>>>>>> +	}
>>>>>>> +}
>>>>>>> +
>>>>>>>   /*
>>>>>>>    * Helper Functions for host iova-pfn list
>>>>>>>    */
>>>>>>> @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>>>>>>>   			vfio_unpin_page_external(dma, iova, do_accounting);
>>>>>>>   			goto pin_unwind;
>>>>>>>   		}
>>>>>>> +
>>>>>>> +		if (iommu->dirty_page_tracking) {
>>>>>>> +			unsigned long pgshift =
>>>>>>> +					 __ffs(vfio_pgsize_bitmap(iommu));
>>>>>>> +
>>>>>>> +			bitmap_set(dma->bitmap,
>>>>>>> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
>>>>>>> +		}
>>>>>>>   	}
>>>>>>>   
>>>>>>>   	ret = i;
>>>>>>> @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>>>>>>>   	vfio_unmap_unpin(iommu, dma, true);
>>>>>>>   	vfio_unlink_dma(iommu, dma);
>>>>>>>   	put_task_struct(dma->task);
>>>>>>> +	kfree(dma->bitmap);
>>>>>>>   	kfree(dma);
>>>>>>>   	iommu->dma_avail++;
>>>>>>>   }
>>>>>>> @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>>>>>>>   	return bitmap;
>>>>>>>   }
>>>>>>>   
>>>>>>> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
>>>>>>> +				  size_t size, uint64_t pgsize,
>>>>>>> +				  unsigned char __user *bitmap)
>>>>>>> +{
>>>>>>> +	struct vfio_dma *dma;
>>>>>>> +	unsigned long pgshift = __ffs(pgsize);
>>>>>>> +	unsigned int npages, bitmap_size;
>>>>>>> +
>>>>>>> +	dma = vfio_find_dma(iommu, iova, 1);
>>>>>>> +
>>>>>>> +	if (!dma)
>>>>>>> +		return -EINVAL;
>>>>>>> +
>>>>>>> +	if (dma->iova != iova || dma->size != size)
>>>>>>> +		return -EINVAL;
>>>>>>> +
>>>>>> looks this size is passed from user. how can it ensure size always
>>>>>> equals to dma->size ?
>>>>>>
>>>>>> shouldn't we iterate dma tree to look for dirty for whole range if a
>>>>>> single dma cannot meet them all?
>>>>>
>>>>> Please see the discussion on v12[1], the problem is with the alignment
>>>>> of DMA mapped regions versus the bitmap.  A DMA mapping only requires
>>>>> page alignment, so for example imagine a user requests the bitmap from
>>>>> page zero to 4GB, but we have a DMA mapping starting at 4KB.  We can't
>>>>> efficiently copy the bitmap tracked by the vfio_dma structure to the
>>>>> user buffer when it's shifted by 1 bit.  Adjacent mappings can also
>>>>> make for a very complicated implementation.  In the discussion linked
>>>>> we decided to compromise on a more simple implementation that requires
>>>>> the user to ask for a bitmap which exactly matches a single DMA
>>>>> mapping, which Kirti indicates is what we require to support QEMU.
>>>>> Later in the series, the unmap operation also makes this requirement
>>>>> when used with the flags to retrieve the dirty bitmap.  Thanks,
>>>>>     
>>>>
>>>> so, what about for vIOMMU enabling case?
>>>> if IOVAs are mapped per page, then there's a log_sync in qemu,
>>>> it's supposed for range from 0-U64MAX, qemu has to find out which
>>>> ones are mapped and cut them into pages before calling this IOCTL?
>>>> And what if those IOVAs are mapped for len more than one page?
>>>
>>> Good question.  Kirti?
>>>

In log_sync with vIOMMU, loop for range such that:

- find iotlb entry for iova, get iova_xlat
- size = iotlb.addr_mask + 1; This is same caculation as when mapping 
are created from vfio_iommu_map_notify()
- use the <iova_xlat, size> for VFIO_IOMMU_DIRTY_PAGES ioctl
- increment iova: iova += size
- iterate above steps till end of range.

>>>>> [1] https://lore.kernel.org/kvm/20200218215330.5bc8fc6a@w520.home/
>>>>>       
>>>>>>> +	npages = dma->size >> pgshift;
>>>>>>> +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
>>>>>>> +
>>>>>>> +	/* mark all pages dirty if all pages are pinned and mapped. */
>>>>>>> +	if (dma->iommu_mapped)
>>>>>>> +		bitmap_set(dma->bitmap, 0, npages);
>>>>>>> +
>>>>>>> +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
>>>>>>> +		return -EFAULT;
>>>>>>> +
>> Here, dma->bitmap needs to be cleared. right?
> 
> Ah, I missed re-checking this in my review.  v13 did clear it, but I
> noted that we need to re-populate any currently pinned pages.  This
> neither clears nor repopulates.  That's wrong.  Thanks,
> 

Why re-populate when there will be no change since 
vfio_iova_dirty_bitmap() is called holding iommu->lock? If there is any 
pin request while vfio_iova_dirty_bitmap() is still working, it will 
wait till iommu->lock is released. Bitmap will be populated when page is 
pinned.

Thanks,
Kirti

> Alex
>   
>>>>>>> +	return 0;
>>>>>>> +}
>>>>>>> +
>>>>>>> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
>>>>>>> +{
>>>>>>> +	uint64_t bsize;
>>>>>>> +
>>>>>>> +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
>>>>>>> +		return -EINVAL;
>>>>>>> +
>>>>>>> +	bsize = DIRTY_BITMAP_BYTES(npages);
>>>>>>> +
>>>>>>> +	if (bitmap_size < bsize)
>>>>>>> +		return -EINVAL;
>>>>>>> +
>>>>>>> +	return 0;
>>>>>>> +}
>>>>>>> +
>>>>>>>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>>>>>>   			     struct vfio_iommu_type1_dma_unmap *unmap)
>>>>>>>   {
>>>>>>> @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>>>>>>>   
>>>>>>>   		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>>>>>>>   			-EFAULT : 0;
>>>>>>> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
>>>>>>> +		struct vfio_iommu_type1_dirty_bitmap dirty;
>>>>>>> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
>>>>>>> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
>>>>>>> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
>>>>>>> +		int ret = 0;
>>>>>>> +
>>>>>>> +		if (!iommu->v2)
>>>>>>> +			return -EACCES;
>>>>>>> +
>>>>>>> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
>>>>>>> +				    flags);
>>>>>>> +
>>>>>>> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
>>>>>>> +			return -EFAULT;
>>>>>>> +
>>>>>>> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
>>>>>>> +			return -EINVAL;
>>>>>>> +
>>>>>>> +		/* only one flag should be set at a time */
>>>>>>> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
>>>>>>> +			return -EINVAL;
>>>>>>> +
>>>>>>> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
>>>>>>> +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
>>>>>>> +
>>>>>>> +			mutex_lock(&iommu->lock);
>>>>>>> +			if (!iommu->dirty_page_tracking) {
>>>>>>> +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
>>>>>>> +				if (!ret)
>>>>>>> +					iommu->dirty_page_tracking = true;
>>>>>>> +			}
>>>>>>> +			mutex_unlock(&iommu->lock);
>>>>>>> +			return ret;
>>>>>>> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
>>>>>>> +			mutex_lock(&iommu->lock);
>>>>>>> +			if (iommu->dirty_page_tracking) {
>>>>>>> +				iommu->dirty_page_tracking = false;
>>>>>>> +				vfio_dma_bitmap_free(iommu);
>>>>>>> +			}
>>>>>>> +			mutex_unlock(&iommu->lock);
>>>>>>> +			return 0;
>>>>>>> +		} else if (dirty.flags &
>>>>>>> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
>>>>>>> +			struct vfio_iommu_type1_dirty_bitmap_get range;
>>>>>>> +			unsigned long pgshift;
>>>>>>> +			size_t data_size = dirty.argsz - minsz;
>>>>>>> +			uint64_t iommu_pgsize =
>>>>>>> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
>>>>>>> +
>>>>>>> +			if (!data_size || data_size < sizeof(range))
>>>>>>> +				return -EINVAL;
>>>>>>> +
>>>>>>> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
>>>>>>> +					   sizeof(range)))
>>>>>>> +				return -EFAULT;
>>>>>>> +
>>>>>>> +			/* allow only min supported pgsize */
>>>>>>> +			if (range.bitmap.pgsize != iommu_pgsize)
>>>>>>> +				return -EINVAL;
>>>>>>> +			if (range.iova & (iommu_pgsize - 1))
>>>>>>> +				return -EINVAL;
>>>>>>> +			if (!range.size || range.size & (iommu_pgsize - 1))
>>>>>>> +				return -EINVAL;
>>>>>>> +			if (range.iova + range.size < range.iova)
>>>>>>> +				return -EINVAL;
>>>>>>> +			if (!access_ok((void __user *)range.bitmap.data,
>>>>>>> +				       range.bitmap.size))
>>>>>>> +				return -EINVAL;
>>>>>>> +
>>>>>>> +			pgshift = __ffs(range.bitmap.pgsize);
>>>>>>> +			ret = verify_bitmap_size(range.size >> pgshift,
>>>>>>> +						 range.bitmap.size);
>>>>>>> +			if (ret)
>>>>>>> +				return ret;
>>>>>>> +
>>>>>>> +			mutex_lock(&iommu->lock);
>>>>>>> +			if (iommu->dirty_page_tracking)
>>>>>>> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
>>>>>>> +					 range.size, range.bitmap.pgsize,
>>>>>>> +				    (unsigned char __user *)range.bitmap.data);
>>>>>>> +			else
>>>>>>> +				ret = -EINVAL;
>>>>>>> +			mutex_unlock(&iommu->lock);
>>>>>>> +
>>>>>>> +			return ret;
>>>>>>> +		}
>>>>>>>   	}
>>>>>>>   
>>>>>>>   	return -ENOTTY;
>>>>>>> @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
>>>>>>>   
>>>>>>>   	vaddr = dma->vaddr + offset;
>>>>>>>   
>>>>>>> -	if (write)
>>>>>>> +	if (write) {
>>>>>>>   		*copied = __copy_to_user((void __user *)vaddr, data,
>>>>>>>   					 count) ? 0 : count;
>>>>>>> -	else
>>>>>>> +		if (*copied && iommu->dirty_page_tracking) {
>>>>>>> +			unsigned long pgshift =
>>>>>>> +				__ffs(vfio_pgsize_bitmap(iommu));
>>>>>>> +
>>>>>>> +			bitmap_set(dma->bitmap, offset >> pgshift,
>>>>>>> +				   *copied >> pgshift);
>>>>>>> +		}
>>>>>>> +	} else
>>>>>>>   		*copied = __copy_from_user(data, (void __user *)vaddr,
>>>>>>>   					   count) ? 0 : count;
>>>>>>>   	if (kthread)
>>>>>>> -- 
>>>>>>> 2.7.0
>>>>>>>        
>>>>>>      
>>>>>      
>>>>    
>>>    
>>
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19  3:45   ` Alex Williamson
  2020-03-19 14:52     ` Kirti Wankhede
@ 2020-03-19 18:57     ` Kirti Wankhede
  1 sibling, 0 replies; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-19 18:57 UTC (permalink / raw)
  To: Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm



On 3/19/2020 9:15 AM, Alex Williamson wrote:
> On Thu, 19 Mar 2020 01:11:11 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
>> - Start dirty pages tracking while migration is active
>> - Stop dirty pages tracking.
>> - Get dirty pages bitmap. Its user space application's responsibility to
>>    copy content of dirty pages from source to destination during migration.
>>
>> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
>> structure. Bitmap size is calculated considering smallest supported page
>> size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
>>
>> Bitmap is populated for already pinned pages when bitmap is allocated for
>> a vfio_dma with the smallest supported page size. Update bitmap from
>> pinning functions when tracking is enabled. When user application queries
>> bitmap, check if requested page size is same as page size used to
>> populated bitmap. If it is equal, copy bitmap, but if not equal, return
>> error.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 203 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 70aeab921d0f..d6417fb02174 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -71,6 +71,7 @@ struct vfio_iommu {
>>   	unsigned int		dma_avail;
>>   	bool			v2;
>>   	bool			nesting;
>> +	bool			dirty_page_tracking;
>>   };
>>   
>>   struct vfio_domain {
>> @@ -91,6 +92,7 @@ struct vfio_dma {
>>   	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>>   	struct task_struct	*task;
>>   	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
>> +	unsigned long		*bitmap;
> 
> We've made the bitmap a width invariant u64 else, should be here as
> well.
> 

Changing to u64 causes compile time warnings as below. Keeping 'unsigned 
long *'

drivers/vfio/vfio_iommu_type1.c: In function ‘vfio_dma_bitmap_alloc_all’:
drivers/vfio/vfio_iommu_type1.c:232:8: warning: passing argument 1 of 
‘bitmap_set’ from incompatible pointer type [enabled by default]
         (vpfn->iova - dma->iova) / pgsize, 1);
         ^
In file included from ./include/linux/cpumask.h:12:0,
                  from ./arch/x86/include/asm/cpumask.h:5,
                  from ./arch/x86/include/asm/msr.h:11,
                  from ./arch/x86/include/asm/processor.h:22,
                  from ./arch/x86/include/asm/cpufeature.h:5,
                  from ./arch/x86/include/asm/thread_info.h:53,
                  from ./include/linux/thread_info.h:38,
                  from ./arch/x86/include/asm/preempt.h:7,
                  from ./include/linux/preempt.h:78,
                  from ./include/linux/spinlock.h:51,
                  from ./include/linux/seqlock.h:36,
                  from ./include/linux/time.h:6,
                  from ./include/linux/compat.h:10,
                  from drivers/vfio/vfio_iommu_type1.c:24:
./include/linux/bitmap.h:405:29: note: expected ‘long unsigned int *’ 
but argument is of type ‘u64 *’
  static __always_inline void bitmap_set(unsigned long *map, unsigned 
int start,

Thanks,
Kirti

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19 16:22       ` Alex Williamson
@ 2020-03-19 20:25         ` Kirti Wankhede
  2020-03-19 20:54           ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-19 20:25 UTC (permalink / raw)
  To: Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm



On 3/19/2020 9:52 PM, Alex Williamson wrote:
> On Thu, 19 Mar 2020 20:22:41 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> On 3/19/2020 9:15 AM, Alex Williamson wrote:
>>> On Thu, 19 Mar 2020 01:11:11 +0530
>>> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>>>    

<snip>

>>>> +
>>>> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
>>>> +{
>>>> +	uint64_t bsize;
>>>> +
>>>> +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
>>>
>>> As commented previously, how do we derive this UINT_MAX limitation?
>>>    
>>
>> Sorry, I missed that earlier
>>
>>   > UINT_MAX seems arbitrary, is this specified in our API?  The size of a
>>   > vfio_dma is limited to what the user is able to pin, and therefore
>>   > their locked memory limit, but do we have an explicit limit elsewhere
>>   > that results in this limit here.  I think a 4GB bitmap would track
>>   > something like 2^47 bytes of memory, that's pretty excessive, but still
>>   > an arbitrary limit.
>>
>> There has to be some upper limit check. In core KVM, in
>> virt/kvm/kvm_main.c there is max number of pages check:
>>
>> if (new.npages > KVM_MEM_MAX_NR_PAGES)
>>
>> Where
>> /*
>>    * Some of the bitops functions do not support too long bitmaps.
>>    * This number must be determined not to exceed such limits.
>>    */
>> #define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
>>
>> Though I don't know which bitops functions do not support long bitmaps.
>>
>> Something similar as above can be done or same as you also mentioned of
>> 4GB bitmap limit? that is U32_MAX instead of UINT_MAX?
> 
> Let's see, we use bitmap_set():
> 
> void bitmap_set(unsigned long *map, unsigned int start, unsigned int nbits)
> 
> So we're limited to an unsigned int number of bits, but for an
> unaligned, multi-bit operation this will call __bitmap_set():
> 
> void __bitmap_set(unsigned long *map, unsigned int start, int len)
> 
> So we're down to a signed int number of bits (seems like an API bug in
> bitops there), so it makes sense that KVM is testing against MAX_INT
> number of pages, ie. number of bits.  But that still suggests a bitmap
> size of MAX_UINT is off by a factor of 16.  So we can have 2^31 bits
> divided by 2^3 bits/byte yields a maximum bitmap size of 2^28 (ie.
> 256MB), which maps 2^31 * 2^12 = 2^43 (8TB) on a 4K system.
> 
> Let's fix the limit check and put a nice comment explaining it.  Thanks,
> 

Agreed. Adding DIRTY_BITMAP_SIZE_MAX macro and comment as below.

/*
  * Input argument of number of bits to bitmap_set() is unsigned 
integer, which
  * further casts to signed integer for unaligned multi-bit operation,
  * __bitmap_set().
  * Then maximum bitmap size supported is 2^31 bits divided by 2^3 
bits/byte,
  * that is 2^28 (256 MB) which maps to 2^31 * 2^12 = 2^43 (8TB) on 4K page
  * system.
  */
#define DIRTY_BITMAP_PAGES_MAX  ((1UL << 31) - 1)
#define DIRTY_BITMAP_SIZE_MAX 	\
			DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX)


Thanks,
Kirti

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19 20:25         ` Kirti Wankhede
@ 2020-03-19 20:54           ` Alex Williamson
  0 siblings, 0 replies; 47+ messages in thread
From: Alex Williamson @ 2020-03-19 20:54 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Fri, 20 Mar 2020 01:55:10 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 3/19/2020 9:52 PM, Alex Williamson wrote:
> > On Thu, 19 Mar 2020 20:22:41 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >   
> >> On 3/19/2020 9:15 AM, Alex Williamson wrote:  
> >>> On Thu, 19 Mar 2020 01:11:11 +0530
> >>> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >>>      
> 
> <snip>
> 
> >>>> +
> >>>> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> >>>> +{
> >>>> +	uint64_t bsize;
> >>>> +
> >>>> +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)  
> >>>
> >>> As commented previously, how do we derive this UINT_MAX limitation?
> >>>      
> >>
> >> Sorry, I missed that earlier
> >>  
> >>   > UINT_MAX seems arbitrary, is this specified in our API?  The size of a
> >>   > vfio_dma is limited to what the user is able to pin, and therefore
> >>   > their locked memory limit, but do we have an explicit limit elsewhere
> >>   > that results in this limit here.  I think a 4GB bitmap would track
> >>   > something like 2^47 bytes of memory, that's pretty excessive, but still
> >>   > an arbitrary limit.  
> >>
> >> There has to be some upper limit check. In core KVM, in
> >> virt/kvm/kvm_main.c there is max number of pages check:
> >>
> >> if (new.npages > KVM_MEM_MAX_NR_PAGES)
> >>
> >> Where
> >> /*
> >>    * Some of the bitops functions do not support too long bitmaps.
> >>    * This number must be determined not to exceed such limits.
> >>    */
> >> #define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
> >>
> >> Though I don't know which bitops functions do not support long bitmaps.
> >>
> >> Something similar as above can be done or same as you also mentioned of
> >> 4GB bitmap limit? that is U32_MAX instead of UINT_MAX?  
> > 
> > Let's see, we use bitmap_set():
> > 
> > void bitmap_set(unsigned long *map, unsigned int start, unsigned int nbits)
> > 
> > So we're limited to an unsigned int number of bits, but for an
> > unaligned, multi-bit operation this will call __bitmap_set():
> > 
> > void __bitmap_set(unsigned long *map, unsigned int start, int len)
> > 
> > So we're down to a signed int number of bits (seems like an API bug in
> > bitops there), so it makes sense that KVM is testing against MAX_INT
> > number of pages, ie. number of bits.  But that still suggests a bitmap
> > size of MAX_UINT is off by a factor of 16.  So we can have 2^31 bits
> > divided by 2^3 bits/byte yields a maximum bitmap size of 2^28 (ie.
> > 256MB), which maps 2^31 * 2^12 = 2^43 (8TB) on a 4K system.
> > 
> > Let's fix the limit check and put a nice comment explaining it.  Thanks,
> >   
> 
> Agreed. Adding DIRTY_BITMAP_SIZE_MAX macro and comment as below.
> 
> /*
>   * Input argument of number of bits to bitmap_set() is unsigned 
> integer, which
>   * further casts to signed integer for unaligned multi-bit operation,
>   * __bitmap_set().
>   * Then maximum bitmap size supported is 2^31 bits divided by 2^3 
> bits/byte,
>   * that is 2^28 (256 MB) which maps to 2^31 * 2^12 = 2^43 (8TB) on 4K page
>   * system.
>   */
> #define DIRTY_BITMAP_PAGES_MAX  ((1UL << 31) - 1)

nit, can we just use INT_MAX here?

> #define DIRTY_BITMAP_SIZE_MAX 	\
> 			DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX)
> 
> 
> Thanks,
> Kirti
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl for dirty pages tracking.
  2020-03-19 16:57               ` Kirti Wankhede
@ 2020-03-20  0:51                 ` Yan Zhao
  0 siblings, 0 replies; 47+ messages in thread
From: Yan Zhao @ 2020-03-20  0:51 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: Alex Williamson, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Fri, Mar 20, 2020 at 12:57:30AM +0800, Kirti Wankhede wrote:
> 
> 
> On 3/19/2020 6:36 PM, Alex Williamson wrote:
> > On Thu, 19 Mar 2020 02:15:34 -0400
> > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > 
> >> On Thu, Mar 19, 2020 at 12:40:53PM +0800, Alex Williamson wrote:
> >>> On Thu, 19 Mar 2020 00:15:33 -0400
> >>> Yan Zhao <yan.y.zhao@intel.com> wrote:
> >>>    
> >>>> On Thu, Mar 19, 2020 at 12:01:00PM +0800, Alex Williamson wrote:
> >>>>> On Wed, 18 Mar 2020 23:06:39 -0400
> >>>>> Yan Zhao <yan.y.zhao@intel.com> wrote:
> >>>>>      
> >>>>>> On Thu, Mar 19, 2020 at 03:41:11AM +0800, Kirti Wankhede wrote:
> >>>>>>> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> >>>>>>> - Start dirty pages tracking while migration is active
> >>>>>>> - Stop dirty pages tracking.
> >>>>>>> - Get dirty pages bitmap. Its user space application's responsibility to
> >>>>>>>    copy content of dirty pages from source to destination during migration.
> >>>>>>>
> >>>>>>> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> >>>>>>> structure. Bitmap size is calculated considering smallest supported page
> >>>>>>> size. Bitmap is allocated for all vfio_dmas when dirty logging is enabled
> >>>>>>>
> >>>>>>> Bitmap is populated for already pinned pages when bitmap is allocated for
> >>>>>>> a vfio_dma with the smallest supported page size. Update bitmap from
> >>>>>>> pinning functions when tracking is enabled. When user application queries
> >>>>>>> bitmap, check if requested page size is same as page size used to
> >>>>>>> populated bitmap. If it is equal, copy bitmap, but if not equal, return
> >>>>>>> error.
> >>>>>>>
> >>>>>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> >>>>>>> Reviewed-by: Neo Jia <cjia@nvidia.com>
> >>>>>>> ---
> >>>>>>>   drivers/vfio/vfio_iommu_type1.c | 205 +++++++++++++++++++++++++++++++++++++++-
> >>>>>>>   1 file changed, 203 insertions(+), 2 deletions(-)
> >>>>>>>
> >>>>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >>>>>>> index 70aeab921d0f..d6417fb02174 100644
> >>>>>>> --- a/drivers/vfio/vfio_iommu_type1.c
> >>>>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
> >>>>>>> @@ -71,6 +71,7 @@ struct vfio_iommu {
> >>>>>>>   	unsigned int		dma_avail;
> >>>>>>>   	bool			v2;
> >>>>>>>   	bool			nesting;
> >>>>>>> +	bool			dirty_page_tracking;
> >>>>>>>   };
> >>>>>>>   
> >>>>>>>   struct vfio_domain {
> >>>>>>> @@ -91,6 +92,7 @@ struct vfio_dma {
> >>>>>>>   	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
> >>>>>>>   	struct task_struct	*task;
> >>>>>>>   	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> >>>>>>> +	unsigned long		*bitmap;
> >>>>>>>   };
> >>>>>>>   
> >>>>>>>   struct vfio_group {
> >>>>>>> @@ -125,7 +127,10 @@ struct vfio_regions {
> >>>>>>>   #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
> >>>>>>>   					(!list_empty(&iommu->domain_list))
> >>>>>>>   
> >>>>>>> +#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> >>>>>>> +
> >>>>>>>   static int put_pfn(unsigned long pfn, int prot);
> >>>>>>> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> >>>>>>>   
> >>>>>>>   /*
> >>>>>>>    * This code handles mapping and unmapping of user data buffers
> >>>>>>> @@ -175,6 +180,55 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> >>>>>>>   	rb_erase(&old->node, &iommu->dma_list);
> >>>>>>>   }
> >>>>>>>   
> >>>>>>> +static int vfio_dma_bitmap_alloc(struct vfio_iommu *iommu, uint64_t pgsize)
> >>>>>>> +{
> >>>>>>> +	struct rb_node *n = rb_first(&iommu->dma_list);
> >>>>>>> +
> >>>>>>> +	for (; n; n = rb_next(n)) {
> >>>>>>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >>>>>>> +		struct rb_node *p;
> >>>>>>> +		unsigned long npages = dma->size / pgsize;
> >>>>>>> +
> >>>>>>> +		dma->bitmap = kvzalloc(DIRTY_BITMAP_BYTES(npages), GFP_KERNEL);
> >>>>>>> +		if (!dma->bitmap) {
> >>>>>>> +			struct rb_node *p = rb_prev(n);
> >>>>>>> +
> >>>>>>> +			for (; p; p = rb_prev(p)) {
> >>>>>>> +				struct vfio_dma *dma = rb_entry(n,
> >>>>>>> +							struct vfio_dma, node);
> >>>>>>> +
> >>>>>>> +				kfree(dma->bitmap);
> >>>>>>> +				dma->bitmap = NULL;
> >>>>>>> +			}
> >>>>>>> +			return -ENOMEM;
> >>>>>>> +		}
> >>>>>>> +
> >>>>>>> +		if (RB_EMPTY_ROOT(&dma->pfn_list))
> >>>>>>> +			continue;
> >>>>>>> +
> >>>>>>> +		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
> >>>>>>> +			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
> >>>>>>> +							 node);
> >>>>>>> +
> >>>>>>> +			bitmap_set(dma->bitmap,
> >>>>>>> +					(vpfn->iova - dma->iova) / pgsize, 1);
> >>>>>>> +		}
> >>>>>>> +	}
> >>>>>>> +	return 0;
> >>>>>>> +}
> >>>>>>> +
> >>>>>>> +static void vfio_dma_bitmap_free(struct vfio_iommu *iommu)
> >>>>>>> +{
> >>>>>>> +	struct rb_node *n = rb_first(&iommu->dma_list);
> >>>>>>> +
> >>>>>>> +	for (; n; n = rb_next(n)) {
> >>>>>>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >>>>>>> +
> >>>>>>> +		kfree(dma->bitmap);
> >>>>>>> +		dma->bitmap = NULL;
> >>>>>>> +	}
> >>>>>>> +}
> >>>>>>> +
> >>>>>>>   /*
> >>>>>>>    * Helper Functions for host iova-pfn list
> >>>>>>>    */
> >>>>>>> @@ -567,6 +621,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >>>>>>>   			vfio_unpin_page_external(dma, iova, do_accounting);
> >>>>>>>   			goto pin_unwind;
> >>>>>>>   		}
> >>>>>>> +
> >>>>>>> +		if (iommu->dirty_page_tracking) {
> >>>>>>> +			unsigned long pgshift =
> >>>>>>> +					 __ffs(vfio_pgsize_bitmap(iommu));
> >>>>>>> +
> >>>>>>> +			bitmap_set(dma->bitmap,
> >>>>>>> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> >>>>>>> +		}
> >>>>>>>   	}
> >>>>>>>   
> >>>>>>>   	ret = i;
> >>>>>>> @@ -801,6 +863,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> >>>>>>>   	vfio_unmap_unpin(iommu, dma, true);
> >>>>>>>   	vfio_unlink_dma(iommu, dma);
> >>>>>>>   	put_task_struct(dma->task);
> >>>>>>> +	kfree(dma->bitmap);
> >>>>>>>   	kfree(dma);
> >>>>>>>   	iommu->dma_avail++;
> >>>>>>>   }
> >>>>>>> @@ -831,6 +894,50 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> >>>>>>>   	return bitmap;
> >>>>>>>   }
> >>>>>>>   
> >>>>>>> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> >>>>>>> +				  size_t size, uint64_t pgsize,
> >>>>>>> +				  unsigned char __user *bitmap)
> >>>>>>> +{
> >>>>>>> +	struct vfio_dma *dma;
> >>>>>>> +	unsigned long pgshift = __ffs(pgsize);
> >>>>>>> +	unsigned int npages, bitmap_size;
> >>>>>>> +
> >>>>>>> +	dma = vfio_find_dma(iommu, iova, 1);
> >>>>>>> +
> >>>>>>> +	if (!dma)
> >>>>>>> +		return -EINVAL;
> >>>>>>> +
> >>>>>>> +	if (dma->iova != iova || dma->size != size)
> >>>>>>> +		return -EINVAL;
> >>>>>>> +
> >>>>>> looks this size is passed from user. how can it ensure size always
> >>>>>> equals to dma->size ?
> >>>>>>
> >>>>>> shouldn't we iterate dma tree to look for dirty for whole range if a
> >>>>>> single dma cannot meet them all?
> >>>>>
> >>>>> Please see the discussion on v12[1], the problem is with the alignment
> >>>>> of DMA mapped regions versus the bitmap.  A DMA mapping only requires
> >>>>> page alignment, so for example imagine a user requests the bitmap from
> >>>>> page zero to 4GB, but we have a DMA mapping starting at 4KB.  We can't
> >>>>> efficiently copy the bitmap tracked by the vfio_dma structure to the
> >>>>> user buffer when it's shifted by 1 bit.  Adjacent mappings can also
> >>>>> make for a very complicated implementation.  In the discussion linked
> >>>>> we decided to compromise on a more simple implementation that requires
> >>>>> the user to ask for a bitmap which exactly matches a single DMA
> >>>>> mapping, which Kirti indicates is what we require to support QEMU.
> >>>>> Later in the series, the unmap operation also makes this requirement
> >>>>> when used with the flags to retrieve the dirty bitmap.  Thanks,
> >>>>>     
> >>>>
> >>>> so, what about for vIOMMU enabling case?
> >>>> if IOVAs are mapped per page, then there's a log_sync in qemu,
> >>>> it's supposed for range from 0-U64MAX, qemu has to find out which
> >>>> ones are mapped and cut them into pages before calling this IOCTL?
> >>>> And what if those IOVAs are mapped for len more than one page?
> >>>
> >>> Good question.  Kirti?
> >>>
> 
> In log_sync with vIOMMU, loop for range such that:
> 
> - find iotlb entry for iova, get iova_xlat
> - size = iotlb.addr_mask + 1; This is same caculation as when mapping 
> are created from vfio_iommu_map_notify()
> - use the <iova_xlat, size> for VFIO_IOMMU_DIRTY_PAGES ioctl
> - increment iova: iova += size
> - iterate above steps till end of range.
>
Ok. It makes sense, though not efficient :)
think about when there's no iotlb found for an iova, page by page
incremental of iova is required. right?

> >>>>> [1] https://lore.kernel.org/kvm/20200218215330.5bc8fc6a@w520.home/
> >>>>>       
> >>>>>>> +	npages = dma->size >> pgshift;
> >>>>>>> +	bitmap_size = DIRTY_BITMAP_BYTES(npages);
> >>>>>>> +
> >>>>>>> +	/* mark all pages dirty if all pages are pinned and mapped. */
> >>>>>>> +	if (dma->iommu_mapped)
> >>>>>>> +		bitmap_set(dma->bitmap, 0, npages);
> >>>>>>> +
> >>>>>>> +	if (copy_to_user((void __user *)bitmap, dma->bitmap, bitmap_size))
> >>>>>>> +		return -EFAULT;
> >>>>>>> +
> >> Here, dma->bitmap needs to be cleared. right?
> > 
> > Ah, I missed re-checking this in my review.  v13 did clear it, but I
> > noted that we need to re-populate any currently pinned pages.  This
> > neither clears nor repopulates.  That's wrong.  Thanks,
> > 
> 
> Why re-populate when there will be no change since 
> vfio_iova_dirty_bitmap() is called holding iommu->lock? If there is any 
> pin request while vfio_iova_dirty_bitmap() is still working, it will 
> wait till iommu->lock is released. Bitmap will be populated when page is 
> pinned.
> 
> Thanks,
> Kirti
> 
> > Alex
> >   
> >>>>>>> +	return 0;
> >>>>>>> +}
> >>>>>>> +
> >>>>>>> +static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> >>>>>>> +{
> >>>>>>> +	uint64_t bsize;
> >>>>>>> +
> >>>>>>> +	if (!npages || !bitmap_size || bitmap_size > UINT_MAX)
> >>>>>>> +		return -EINVAL;
> >>>>>>> +
> >>>>>>> +	bsize = DIRTY_BITMAP_BYTES(npages);
> >>>>>>> +
> >>>>>>> +	if (bitmap_size < bsize)
> >>>>>>> +		return -EINVAL;
> >>>>>>> +
> >>>>>>> +	return 0;
> >>>>>>> +}
> >>>>>>> +
> >>>>>>>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >>>>>>>   			     struct vfio_iommu_type1_dma_unmap *unmap)
> >>>>>>>   {
> >>>>>>> @@ -2278,6 +2385,93 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> >>>>>>>   
> >>>>>>>   		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> >>>>>>>   			-EFAULT : 0;
> >>>>>>> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> >>>>>>> +		struct vfio_iommu_type1_dirty_bitmap dirty;
> >>>>>>> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> >>>>>>> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> >>>>>>> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> >>>>>>> +		int ret = 0;
> >>>>>>> +
> >>>>>>> +		if (!iommu->v2)
> >>>>>>> +			return -EACCES;
> >>>>>>> +
> >>>>>>> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> >>>>>>> +				    flags);
> >>>>>>> +
> >>>>>>> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> >>>>>>> +			return -EFAULT;
> >>>>>>> +
> >>>>>>> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> >>>>>>> +			return -EINVAL;
> >>>>>>> +
> >>>>>>> +		/* only one flag should be set at a time */
> >>>>>>> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> >>>>>>> +			return -EINVAL;
> >>>>>>> +
> >>>>>>> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> >>>>>>> +			uint64_t pgsize = 1 << __ffs(vfio_pgsize_bitmap(iommu));
> >>>>>>> +
> >>>>>>> +			mutex_lock(&iommu->lock);
> >>>>>>> +			if (!iommu->dirty_page_tracking) {
> >>>>>>> +				ret = vfio_dma_bitmap_alloc(iommu, pgsize);
> >>>>>>> +				if (!ret)
> >>>>>>> +					iommu->dirty_page_tracking = true;
> >>>>>>> +			}
> >>>>>>> +			mutex_unlock(&iommu->lock);
> >>>>>>> +			return ret;
> >>>>>>> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> >>>>>>> +			mutex_lock(&iommu->lock);
> >>>>>>> +			if (iommu->dirty_page_tracking) {
> >>>>>>> +				iommu->dirty_page_tracking = false;
> >>>>>>> +				vfio_dma_bitmap_free(iommu);
> >>>>>>> +			}
> >>>>>>> +			mutex_unlock(&iommu->lock);
> >>>>>>> +			return 0;
> >>>>>>> +		} else if (dirty.flags &
> >>>>>>> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> >>>>>>> +			struct vfio_iommu_type1_dirty_bitmap_get range;
> >>>>>>> +			unsigned long pgshift;
> >>>>>>> +			size_t data_size = dirty.argsz - minsz;
> >>>>>>> +			uint64_t iommu_pgsize =
> >>>>>>> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> >>>>>>> +
> >>>>>>> +			if (!data_size || data_size < sizeof(range))
> >>>>>>> +				return -EINVAL;
> >>>>>>> +
> >>>>>>> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> >>>>>>> +					   sizeof(range)))
> >>>>>>> +				return -EFAULT;
> >>>>>>> +
> >>>>>>> +			/* allow only min supported pgsize */
> >>>>>>> +			if (range.bitmap.pgsize != iommu_pgsize)
> >>>>>>> +				return -EINVAL;
> >>>>>>> +			if (range.iova & (iommu_pgsize - 1))
> >>>>>>> +				return -EINVAL;
> >>>>>>> +			if (!range.size || range.size & (iommu_pgsize - 1))
> >>>>>>> +				return -EINVAL;
> >>>>>>> +			if (range.iova + range.size < range.iova)
> >>>>>>> +				return -EINVAL;
> >>>>>>> +			if (!access_ok((void __user *)range.bitmap.data,
> >>>>>>> +				       range.bitmap.size))
> >>>>>>> +				return -EINVAL;
> >>>>>>> +
> >>>>>>> +			pgshift = __ffs(range.bitmap.pgsize);
> >>>>>>> +			ret = verify_bitmap_size(range.size >> pgshift,
> >>>>>>> +						 range.bitmap.size);
> >>>>>>> +			if (ret)
> >>>>>>> +				return ret;
> >>>>>>> +
> >>>>>>> +			mutex_lock(&iommu->lock);
> >>>>>>> +			if (iommu->dirty_page_tracking)
> >>>>>>> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> >>>>>>> +					 range.size, range.bitmap.pgsize,
> >>>>>>> +				    (unsigned char __user *)range.bitmap.data);
> >>>>>>> +			else
> >>>>>>> +				ret = -EINVAL;
> >>>>>>> +			mutex_unlock(&iommu->lock);
> >>>>>>> +
> >>>>>>> +			return ret;
> >>>>>>> +		}
> >>>>>>>   	}
> >>>>>>>   
> >>>>>>>   	return -ENOTTY;
> >>>>>>> @@ -2345,10 +2539,17 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
> >>>>>>>   
> >>>>>>>   	vaddr = dma->vaddr + offset;
> >>>>>>>   
> >>>>>>> -	if (write)
> >>>>>>> +	if (write) {
> >>>>>>>   		*copied = __copy_to_user((void __user *)vaddr, data,
> >>>>>>>   					 count) ? 0 : count;
> >>>>>>> -	else
> >>>>>>> +		if (*copied && iommu->dirty_page_tracking) {
> >>>>>>> +			unsigned long pgshift =
> >>>>>>> +				__ffs(vfio_pgsize_bitmap(iommu));
> >>>>>>> +
> >>>>>>> +			bitmap_set(dma->bitmap, offset >> pgshift,
> >>>>>>> +				   *copied >> pgshift);
> >>>>>>> +		}
> >>>>>>> +	} else
> >>>>>>>   		*copied = __copy_from_user(data, (void __user *)vaddr,
> >>>>>>>   					   count) ? 0 : count;
> >>>>>>>   	if (kthread)
> >>>>>>> -- 
> >>>>>>> 2.7.0
> >>>>>>>        
> >>>>>>      
> >>>>>      
> >>>>    
> >>>    
> >>
> > 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-19 13:09         ` Alex Williamson
@ 2020-03-20  1:30           ` Yan Zhao
  2020-03-20  2:34             ` Alex Williamson
  2020-03-23 14:45           ` Auger Eric
  1 sibling, 1 reply; 47+ messages in thread
From: Yan Zhao @ 2020-03-20  1:30 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, Mar 19, 2020 at 09:09:21PM +0800, Alex Williamson wrote:
> On Thu, 19 Mar 2020 01:05:54 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> > On Thu, Mar 19, 2020 at 11:49:26AM +0800, Alex Williamson wrote:
> > > On Wed, 18 Mar 2020 21:17:03 -0400
> > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > >   
> > > > On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:  
> > > > > - Defined MIGRATION region type and sub-type.
> > > > > 
> > > > > - Defined vfio_device_migration_info structure which will be placed at the
> > > > >   0th offset of migration region to get/set VFIO device related
> > > > >   information. Defined members of structure and usage on read/write access.
> > > > > 
> > > > > - Defined device states and state transition details.
> > > > > 
> > > > > - Defined sequence to be followed while saving and resuming VFIO device.
> > > > > 
> > > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > > ---
> > > > >  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
> > > > >  1 file changed, 227 insertions(+)
> > > > > 
> > > > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > > > index 9e843a147ead..d0021467af53 100644
> > > > > --- a/include/uapi/linux/vfio.h
> > > > > +++ b/include/uapi/linux/vfio.h
> > > > > @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
> > > > >  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
> > > > >  #define VFIO_REGION_TYPE_GFX                    (1)
> > > > >  #define VFIO_REGION_TYPE_CCW			(2)
> > > > > +#define VFIO_REGION_TYPE_MIGRATION              (3)
> > > > >  
> > > > >  /* sub-types for VFIO_REGION_TYPE_PCI_* */
> > > > >  
> > > > > @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
> > > > >  /* sub-types for VFIO_REGION_TYPE_CCW */
> > > > >  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
> > > > >  
> > > > > +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> > > > > +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> > > > > +
> > > > > +/*
> > > > > + * The structure vfio_device_migration_info is placed at the 0th offset of
> > > > > + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> > > > > + * migration information. Field accesses from this structure are only supported
> > > > > + * at their native width and alignment. Otherwise, the result is undefined and
> > > > > + * vendor drivers should return an error.
> > > > > + *
> > > > > + * device_state: (read/write)
> > > > > + *      - The user application writes to this field to inform the vendor driver
> > > > > + *        about the device state to be transitioned to.
> > > > > + *      - The vendor driver should take the necessary actions to change the
> > > > > + *        device state. After successful transition to a given state, the
> > > > > + *        vendor driver should return success on write(device_state, state)
> > > > > + *        system call. If the device state transition fails, the vendor driver
> > > > > + *        should return an appropriate -errno for the fault condition.
> > > > > + *      - On the user application side, if the device state transition fails,
> > > > > + *	  that is, if write(device_state, state) returns an error, read
> > > > > + *	  device_state again to determine the current state of the device from
> > > > > + *	  the vendor driver.
> > > > > + *      - The vendor driver should return previous state of the device unless
> > > > > + *        the vendor driver has encountered an internal error, in which case
> > > > > + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> > > > > + *      - The user application must use the device reset ioctl to recover the
> > > > > + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> > > > > + *        indicated to be in a valid device state by reading device_state, the
> > > > > + *        user application may attempt to transition the device to any valid
> > > > > + *        state reachable from the current state or terminate itself.
> > > > > + *
> > > > > + *      device_state consists of 3 bits:
> > > > > + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> > > > > + *        it indicates the _STOP state. When the device state is changed to
> > > > > + *        _STOP, driver should stop the device before write() returns.
> > > > > + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> > > > > + *        driver should start gathering device state information that will be
> > > > > + *        provided to the VFIO user application to save the device's state.
> > > > > + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> > > > > + *        the driver should prepare to resume the device. Data provided through
> > > > > + *        the migration region should be used to resume the device.
> > > > > + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> > > > > + *      application should perform a read-modify-write operation on this
> > > > > + *      field when modifying the specified bits.
> > > > > + *
> > > > > + *  +------- _RESUMING
> > > > > + *  |+------ _SAVING
> > > > > + *  ||+----- _RUNNING
> > > > > + *  |||
> > > > > + *  000b => Device Stopped, not saving or resuming
> > > > > + *  001b => Device running, which is the default state
> > > > > + *  010b => Stop the device & save the device state, stop-and-copy state
> > > > > + *  011b => Device running and save the device state, pre-copy state
> > > > > + *  100b => Device stopped and the device state is resuming
> > > > > + *  101b => Invalid state
> > > > > + *  110b => Error state
> > > > > + *  111b => Invalid state
> > > > > + *
> > > > > + * State transitions:
> > > > > + *
> > > > > + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> > > > > + *                (100b)     (001b)     (011b)        (010b)       (000b)
> > > > > + * 0. Running or default state
> > > > > + *                             |
> > > > > + *
> > > > > + * 1. Normal Shutdown (optional)
> > > > > + *                             |------------------------------------->|
> > > > > + *
> > > > > + * 2. Save the state or suspend
> > > > > + *                             |------------------------->|---------->|
> > > > > + *
> > > > > + * 3. Save the state during live migration
> > > > > + *                             |----------->|------------>|---------->|
> > > > > + *
> > > > > + * 4. Resuming
> > > > > + *                  |<---------|
> > > > > + *
> > > > > + * 5. Resumed
> > > > > + *                  |--------->|
> > > > > + *
> > > > > + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> > > > > + * 1. During normal shutdown of the user application, the user application may
> > > > > + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> > > > > + *    transition is optional. The vendor driver must support this transition but
> > > > > + *    must not require it.
> > > > > + * 2. When the user application saves state or suspends the application, the
> > > > > + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> > > > > + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> > > > > + *    device, save the device state and send it to the application through the
> > > > > + *    migration region. The sequence to be followed for such transition is given
> > > > > + *    below.
> > > > > + * 3. In live migration of user application, the state transitions from _RUNNING
> > > > > + *    to pre-copy, to stop-and-copy, and to _STOP.
> > > > > + *    On state transition from _RUNNING to pre-copy, the driver should start
> > > > > + *    gathering the device state while the application is still running and send
> > > > > + *    the device state data to application through the migration region.
> > > > > + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> > > > > + *    the device, save the device state and send it to the user application
> > > > > + *    through the migration region.
> > > > > + *    Vendor drivers must support the pre-copy state even for implementations
> > > > > + *    where no data is provided to the user before the stop-and-copy state. The
> > > > > + *    user must not be required to consume all migration data before the device
> > > > > + *    transitions to a new state, including the stop-and-copy state.
> > > > > + *    The sequence to be followed for above two transitions is given below.
> > > > > + * 4. To start the resuming phase, the device state should be transitioned from
> > > > > + *    the _RUNNING to the _RESUMING state.
> > > > > + *    In the _RESUMING state, the driver should use the device state data
> > > > > + *    received through the migration region to resume the device.
> > > > > + * 5. After providing saved device data to the driver, the application should
> > > > > + *    change the state from _RESUMING to _RUNNING.
> > > > > + *
> > > > > + * reserved:
> > > > > + *      Reads on this field return zero and writes are ignored.
> > > > > + *
> > > > > + * pending_bytes: (read only)
> > > > > + *      The number of pending bytes still to be migrated from the vendor driver.
> > > > > + *
> > > > > + * data_offset: (read only)
> > > > > + *      The user application should read data_offset in the migration region
> > > > > + *      from where the user application should read the device data during the
> > > > > + *      _SAVING state or write the device data during the _RESUMING state. See
> > > > > + *      below for details of sequence to be followed.
> > > > > + *
> > > > > + * data_size: (read/write)
> > > > > + *      The user application should read data_size to get the size in bytes of
> > > > > + *      the data copied in the migration region during the _SAVING state and
> > > > > + *      write the size in bytes of the data copied in the migration region
> > > > > + *      during the _RESUMING state.
> > > > > + *
> > > > > + * The format of the migration region is as follows:
> > > > > + *  ------------------------------------------------------------------
> > > > > + * |vfio_device_migration_info|    data section                      |
> > > > > + * |                          |     ///////////////////////////////  |
> > > > > + * ------------------------------------------------------------------
> > > > > + *   ^                              ^
> > > > > + *  offset 0-trapped part        data_offset
> > > > > + *
> > > > > + * The structure vfio_device_migration_info is always followed by the data
> > > > > + * section in the region, so data_offset will always be nonzero. The offset
> > > > > + * from where the data is copied is decided by the kernel driver. The data
> > > > > + * section can be trapped, mapped, or partitioned, depending on how the kernel
> > > > > + * driver defines the data section. The data section partition can be defined
> > > > > + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> > > > > + * page aligned, whereas initial section which contains the
> > > > > + * vfio_device_migration_info structure, might not end at the offset, which is
> > > > > + * page aligned. The user is not required to access through mmap regardless
> > > > > + * of the capabilities of the region mmap.
> > > > > + * The vendor driver should determine whether and how to partition the data
> > > > > + * section. The vendor driver should return data_offset accordingly.
> > > > > + *
> > > > > + * The sequence to be followed for the _SAVING|_RUNNING device state or
> > > > > + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
> > > > > + * follows:
> > > > > + * a. Read pending_bytes, indicating the start of a new iteration to get device
> > > > > + *    data. Repeated read on pending_bytes at this stage should have no side
> > > > > + *    effects.
> > > > > + *    If pending_bytes == 0, the user application should not iterate to get data
> > > > > + *    for that device.
> > > > > + *    If pending_bytes > 0, perform the following steps.
> > > > > + * b. Read data_offset, indicating that the vendor driver should make data
> > > > > + *    available through the data section. The vendor driver should return this
> > > > > + *    read operation only after data is available from (region + data_offset)
> > > > > + *    to (region + data_offset + data_size).
> > > > > + * c. Read data_size, which is the amount of data in bytes available through
> > > > > + *    the migration region.
> > > > > + *    Read on data_offset and data_size should return the offset and size of
> > > > > + *    the current buffer if the user application reads data_offset and
> > > > > + *    data_size more than once here.    
> > > > If data region is mmaped, merely reading data_offset and data_size
> > > > cannot let kernel know what are correct values to return.
> > > > Consider to add a read operation which is trapped into kernel to let
> > > > kernel exactly know it needs to move to the next offset and update data_size
> > > > ?  
> > > 
> > > Both operations b. and c. above are to trapped registers, operation d.
> > > below may potentially be to an mmap'd area, which is why we have step
> > > f. which indicates to the vendor driver that the data has been
> > > consumed.  Does that address your concern?  Thanks,
> > >  
> > No. :)
> > the problem is about semantics of data_offset, data_size, and
> > pending_bytes.
> > b and c do not tell kernel that the data is read by user.
> > so, without knowing step d happen, kernel cannot update pending_bytes to
> > be returned in step f.
> 
> Sorry, I'm still not understanding, I see step f. as the indicator
> you're looking for.  The user reads pending_bytes to indicate the data
> in the migration area has been consumed.  The vendor driver updates its
> internal state on that read and returns the updated value for
> pending_bytes.  Thanks,
> 
we could not regard reading of pending_bytes as an indicator of
migration data consumed.

for 1, in migration thread, read of pending_bytes is called every
iteration, but reads of data_size & data_offset are not (they are
skippable). so it's possible that the sequence is like
(1) reading of pending_bytes
(2) reading of pending_bytes
(3) reading of pending_bytes
(4) reading of data_offset & data_size
(5) reading of pending_bytes

for 2, it's not right to force kernel to understand qemu's sequence and
decide that only a read of pending_bytes after reads of data_offset & data_size
indicates data has been consumed.

Agree?

Thanks
Yan

>  
> > > > > + * d. Read data_size bytes of data from (region + data_offset) from the
> > > > > + *    migration region.
> > > > > + * e. Process the data.
> > > > > + * f. Read pending_bytes, which indicates that the data from the previous
> > > > > + *    iteration has been read. If pending_bytes > 0, go to step b.
> > > > > + *
> > > > > + * If an error occurs during the above sequence, the vendor driver can return
> > > > > + * an error code for next read() or write() operation, which will terminate the
> > > > > + * loop. The user application should then take the next necessary action, for
> > > > > + * example, failing migration or terminating the user application.
> > > > > + *
> > > > > + * The user application can transition from the _SAVING|_RUNNING
> > > > > + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> > > > > + * number of pending bytes. The user application should iterate in _SAVING
> > > > > + * (stop-and-copy) until pending_bytes is 0.
> > > > > + *
> > > > > + * The sequence to be followed while _RESUMING device state is as follows:
> > > > > + * While data for this device is available, repeat the following steps:
> > > > > + * a. Read data_offset from where the user application should write data.
> > > > > + * b. Write migration data starting at the migration region + data_offset for
> > > > > + *    the length determined by data_size from the migration source.
> > > > > + * c. Write data_size, which indicates to the vendor driver that data is
> > > > > + *    written in the migration region. Vendor driver should apply the
> > > > > + *    user-provided migration region data to the device resume state.
> > > > > + *
> > > > > + * For the user application, data is opaque. The user application should write
> > > > > + * data in the same order as the data is received and the data should be of
> > > > > + * same transaction size at the source.
> > > > > + */
> > > > > +
> > > > > +struct vfio_device_migration_info {
> > > > > +	__u32 device_state;         /* VFIO device state */
> > > > > +#define VFIO_DEVICE_STATE_STOP      (0)
> > > > > +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> > > > > +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> > > > > +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> > > > > +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> > > > > +				     VFIO_DEVICE_STATE_SAVING |  \
> > > > > +				     VFIO_DEVICE_STATE_RESUMING)
> > > > > +
> > > > > +#define VFIO_DEVICE_STATE_VALID(state) \
> > > > > +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> > > > > +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> > > > > +
> > > > > +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> > > > > +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> > > > > +					      VFIO_DEVICE_STATE_RESUMING))
> > > > > +
> > > > > +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> > > > > +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> > > > > +					     VFIO_DEVICE_STATE_RESUMING)
> > > > > +
> > > > > +	__u32 reserved;
> > > > > +	__u64 pending_bytes;
> > > > > +	__u64 data_offset;
> > > > > +	__u64 data_size;
> > > > > +} __attribute__((packed));
> > > > > +
> > > > >  /*
> > > > >   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> > > > >   * which allows direct access to non-MSIX registers which happened to be within
> > > > > -- 
> > > > > 2.7.0
> > > > >     
> > > >   
> > >   
> > 
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-20  1:30           ` Yan Zhao
@ 2020-03-20  2:34             ` Alex Williamson
  2020-03-20  3:06               ` Yan Zhao
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-20  2:34 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, 19 Mar 2020 21:30:39 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Thu, Mar 19, 2020 at 09:09:21PM +0800, Alex Williamson wrote:
> > On Thu, 19 Mar 2020 01:05:54 -0400
> > Yan Zhao <yan.y.zhao@intel.com> wrote:
> >   
> > > On Thu, Mar 19, 2020 at 11:49:26AM +0800, Alex Williamson wrote:  
> > > > On Wed, 18 Mar 2020 21:17:03 -0400
> > > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > >     
> > > > > On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:    
> > > > > > - Defined MIGRATION region type and sub-type.
> > > > > > 
> > > > > > - Defined vfio_device_migration_info structure which will be placed at the
> > > > > >   0th offset of migration region to get/set VFIO device related
> > > > > >   information. Defined members of structure and usage on read/write access.
> > > > > > 
> > > > > > - Defined device states and state transition details.
> > > > > > 
> > > > > > - Defined sequence to be followed while saving and resuming VFIO device.
> > > > > > 
> > > > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > > > ---
> > > > > >  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
> > > > > >  1 file changed, 227 insertions(+)
> > > > > > 
> > > > > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > > > > index 9e843a147ead..d0021467af53 100644
> > > > > > --- a/include/uapi/linux/vfio.h
> > > > > > +++ b/include/uapi/linux/vfio.h
> > > > > > @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
> > > > > >  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
> > > > > >  #define VFIO_REGION_TYPE_GFX                    (1)
> > > > > >  #define VFIO_REGION_TYPE_CCW			(2)
> > > > > > +#define VFIO_REGION_TYPE_MIGRATION              (3)
> > > > > >  
> > > > > >  /* sub-types for VFIO_REGION_TYPE_PCI_* */
> > > > > >  
> > > > > > @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
> > > > > >  /* sub-types for VFIO_REGION_TYPE_CCW */
> > > > > >  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
> > > > > >  
> > > > > > +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> > > > > > +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> > > > > > +
> > > > > > +/*
> > > > > > + * The structure vfio_device_migration_info is placed at the 0th offset of
> > > > > > + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> > > > > > + * migration information. Field accesses from this structure are only supported
> > > > > > + * at their native width and alignment. Otherwise, the result is undefined and
> > > > > > + * vendor drivers should return an error.
> > > > > > + *
> > > > > > + * device_state: (read/write)
> > > > > > + *      - The user application writes to this field to inform the vendor driver
> > > > > > + *        about the device state to be transitioned to.
> > > > > > + *      - The vendor driver should take the necessary actions to change the
> > > > > > + *        device state. After successful transition to a given state, the
> > > > > > + *        vendor driver should return success on write(device_state, state)
> > > > > > + *        system call. If the device state transition fails, the vendor driver
> > > > > > + *        should return an appropriate -errno for the fault condition.
> > > > > > + *      - On the user application side, if the device state transition fails,
> > > > > > + *	  that is, if write(device_state, state) returns an error, read
> > > > > > + *	  device_state again to determine the current state of the device from
> > > > > > + *	  the vendor driver.
> > > > > > + *      - The vendor driver should return previous state of the device unless
> > > > > > + *        the vendor driver has encountered an internal error, in which case
> > > > > > + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> > > > > > + *      - The user application must use the device reset ioctl to recover the
> > > > > > + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> > > > > > + *        indicated to be in a valid device state by reading device_state, the
> > > > > > + *        user application may attempt to transition the device to any valid
> > > > > > + *        state reachable from the current state or terminate itself.
> > > > > > + *
> > > > > > + *      device_state consists of 3 bits:
> > > > > > + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> > > > > > + *        it indicates the _STOP state. When the device state is changed to
> > > > > > + *        _STOP, driver should stop the device before write() returns.
> > > > > > + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> > > > > > + *        driver should start gathering device state information that will be
> > > > > > + *        provided to the VFIO user application to save the device's state.
> > > > > > + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> > > > > > + *        the driver should prepare to resume the device. Data provided through
> > > > > > + *        the migration region should be used to resume the device.
> > > > > > + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> > > > > > + *      application should perform a read-modify-write operation on this
> > > > > > + *      field when modifying the specified bits.
> > > > > > + *
> > > > > > + *  +------- _RESUMING
> > > > > > + *  |+------ _SAVING
> > > > > > + *  ||+----- _RUNNING
> > > > > > + *  |||
> > > > > > + *  000b => Device Stopped, not saving or resuming
> > > > > > + *  001b => Device running, which is the default state
> > > > > > + *  010b => Stop the device & save the device state, stop-and-copy state
> > > > > > + *  011b => Device running and save the device state, pre-copy state
> > > > > > + *  100b => Device stopped and the device state is resuming
> > > > > > + *  101b => Invalid state
> > > > > > + *  110b => Error state
> > > > > > + *  111b => Invalid state
> > > > > > + *
> > > > > > + * State transitions:
> > > > > > + *
> > > > > > + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> > > > > > + *                (100b)     (001b)     (011b)        (010b)       (000b)
> > > > > > + * 0. Running or default state
> > > > > > + *                             |
> > > > > > + *
> > > > > > + * 1. Normal Shutdown (optional)
> > > > > > + *                             |------------------------------------->|
> > > > > > + *
> > > > > > + * 2. Save the state or suspend
> > > > > > + *                             |------------------------->|---------->|
> > > > > > + *
> > > > > > + * 3. Save the state during live migration
> > > > > > + *                             |----------->|------------>|---------->|
> > > > > > + *
> > > > > > + * 4. Resuming
> > > > > > + *                  |<---------|
> > > > > > + *
> > > > > > + * 5. Resumed
> > > > > > + *                  |--------->|
> > > > > > + *
> > > > > > + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> > > > > > + * 1. During normal shutdown of the user application, the user application may
> > > > > > + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> > > > > > + *    transition is optional. The vendor driver must support this transition but
> > > > > > + *    must not require it.
> > > > > > + * 2. When the user application saves state or suspends the application, the
> > > > > > + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> > > > > > + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> > > > > > + *    device, save the device state and send it to the application through the
> > > > > > + *    migration region. The sequence to be followed for such transition is given
> > > > > > + *    below.
> > > > > > + * 3. In live migration of user application, the state transitions from _RUNNING
> > > > > > + *    to pre-copy, to stop-and-copy, and to _STOP.
> > > > > > + *    On state transition from _RUNNING to pre-copy, the driver should start
> > > > > > + *    gathering the device state while the application is still running and send
> > > > > > + *    the device state data to application through the migration region.
> > > > > > + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> > > > > > + *    the device, save the device state and send it to the user application
> > > > > > + *    through the migration region.
> > > > > > + *    Vendor drivers must support the pre-copy state even for implementations
> > > > > > + *    where no data is provided to the user before the stop-and-copy state. The
> > > > > > + *    user must not be required to consume all migration data before the device
> > > > > > + *    transitions to a new state, including the stop-and-copy state.
> > > > > > + *    The sequence to be followed for above two transitions is given below.
> > > > > > + * 4. To start the resuming phase, the device state should be transitioned from
> > > > > > + *    the _RUNNING to the _RESUMING state.
> > > > > > + *    In the _RESUMING state, the driver should use the device state data
> > > > > > + *    received through the migration region to resume the device.
> > > > > > + * 5. After providing saved device data to the driver, the application should
> > > > > > + *    change the state from _RESUMING to _RUNNING.
> > > > > > + *
> > > > > > + * reserved:
> > > > > > + *      Reads on this field return zero and writes are ignored.
> > > > > > + *
> > > > > > + * pending_bytes: (read only)
> > > > > > + *      The number of pending bytes still to be migrated from the vendor driver.
> > > > > > + *
> > > > > > + * data_offset: (read only)
> > > > > > + *      The user application should read data_offset in the migration region
> > > > > > + *      from where the user application should read the device data during the
> > > > > > + *      _SAVING state or write the device data during the _RESUMING state. See
> > > > > > + *      below for details of sequence to be followed.
> > > > > > + *
> > > > > > + * data_size: (read/write)
> > > > > > + *      The user application should read data_size to get the size in bytes of
> > > > > > + *      the data copied in the migration region during the _SAVING state and
> > > > > > + *      write the size in bytes of the data copied in the migration region
> > > > > > + *      during the _RESUMING state.
> > > > > > + *
> > > > > > + * The format of the migration region is as follows:
> > > > > > + *  ------------------------------------------------------------------
> > > > > > + * |vfio_device_migration_info|    data section                      |
> > > > > > + * |                          |     ///////////////////////////////  |
> > > > > > + * ------------------------------------------------------------------
> > > > > > + *   ^                              ^
> > > > > > + *  offset 0-trapped part        data_offset
> > > > > > + *
> > > > > > + * The structure vfio_device_migration_info is always followed by the data
> > > > > > + * section in the region, so data_offset will always be nonzero. The offset
> > > > > > + * from where the data is copied is decided by the kernel driver. The data
> > > > > > + * section can be trapped, mapped, or partitioned, depending on how the kernel
> > > > > > + * driver defines the data section. The data section partition can be defined
> > > > > > + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> > > > > > + * page aligned, whereas initial section which contains the
> > > > > > + * vfio_device_migration_info structure, might not end at the offset, which is
> > > > > > + * page aligned. The user is not required to access through mmap regardless
> > > > > > + * of the capabilities of the region mmap.
> > > > > > + * The vendor driver should determine whether and how to partition the data
> > > > > > + * section. The vendor driver should return data_offset accordingly.
> > > > > > + *
> > > > > > + * The sequence to be followed for the _SAVING|_RUNNING device state or
> > > > > > + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
> > > > > > + * follows:
> > > > > > + * a. Read pending_bytes, indicating the start of a new iteration to get device
> > > > > > + *    data. Repeated read on pending_bytes at this stage should have no side
> > > > > > + *    effects.
> > > > > > + *    If pending_bytes == 0, the user application should not iterate to get data
> > > > > > + *    for that device.
> > > > > > + *    If pending_bytes > 0, perform the following steps.
> > > > > > + * b. Read data_offset, indicating that the vendor driver should make data
> > > > > > + *    available through the data section. The vendor driver should return this
> > > > > > + *    read operation only after data is available from (region + data_offset)
> > > > > > + *    to (region + data_offset + data_size).
> > > > > > + * c. Read data_size, which is the amount of data in bytes available through
> > > > > > + *    the migration region.
> > > > > > + *    Read on data_offset and data_size should return the offset and size of
> > > > > > + *    the current buffer if the user application reads data_offset and
> > > > > > + *    data_size more than once here.      
> > > > > If data region is mmaped, merely reading data_offset and data_size
> > > > > cannot let kernel know what are correct values to return.
> > > > > Consider to add a read operation which is trapped into kernel to let
> > > > > kernel exactly know it needs to move to the next offset and update data_size
> > > > > ?    
> > > > 
> > > > Both operations b. and c. above are to trapped registers, operation d.
> > > > below may potentially be to an mmap'd area, which is why we have step
> > > > f. which indicates to the vendor driver that the data has been
> > > > consumed.  Does that address your concern?  Thanks,
> > > >    
> > > No. :)
> > > the problem is about semantics of data_offset, data_size, and
> > > pending_bytes.
> > > b and c do not tell kernel that the data is read by user.
> > > so, without knowing step d happen, kernel cannot update pending_bytes to
> > > be returned in step f.  
> > 
> > Sorry, I'm still not understanding, I see step f. as the indicator
> > you're looking for.  The user reads pending_bytes to indicate the data
> > in the migration area has been consumed.  The vendor driver updates its
> > internal state on that read and returns the updated value for
> > pending_bytes.  Thanks,
> >   
> we could not regard reading of pending_bytes as an indicator of
> migration data consumed.
> 
> for 1, in migration thread, read of pending_bytes is called every
> iteration, but reads of data_size & data_offset are not (they are
> skippable). so it's possible that the sequence is like
> (1) reading of pending_bytes
> (2) reading of pending_bytes
> (3) reading of pending_bytes
> (4) reading of data_offset & data_size
> (5) reading of pending_bytes
> 
> for 2, it's not right to force kernel to understand qemu's sequence and
> decide that only a read of pending_bytes after reads of data_offset & data_size
> indicates data has been consumed.
> 
> Agree?

No, not really.  We're defining an API that enables the above sequence,
but doesn't require the kernel to understand QEMU's sequence.
Specifically, pending_bytes may be read without side-effects except for
when data is queued to read through the data area of the region.  The
user queues data to read by reading data_offset.  The user then reads
data_size to determine the currently available data chunk size.  This
is followed by consuming the data from the region offset + data_offset.
Only after reading data_offset does the read of pending_bytes signal to
the vendor driver that the user has consumed the data.

If the user were to re-read pending_bytes before consuming the data,
then the data_offset and data_size they may have read is invalid and
they've violated the defined protocol.  We do not, nor do I think we
could, make this a fool proof interface.  The user must adhere to the
protocol, but I believe the specific sequence you've identified is
fully enabled here.  Please confirm.  Thanks,

Alex

> > > > > > + * d. Read data_size bytes of data from (region + data_offset) from the
> > > > > > + *    migration region.
> > > > > > + * e. Process the data.
> > > > > > + * f. Read pending_bytes, which indicates that the data from the previous
> > > > > > + *    iteration has been read. If pending_bytes > 0, go to step b.
> > > > > > + *
> > > > > > + * If an error occurs during the above sequence, the vendor driver can return
> > > > > > + * an error code for next read() or write() operation, which will terminate the
> > > > > > + * loop. The user application should then take the next necessary action, for
> > > > > > + * example, failing migration or terminating the user application.
> > > > > > + *
> > > > > > + * The user application can transition from the _SAVING|_RUNNING
> > > > > > + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> > > > > > + * number of pending bytes. The user application should iterate in _SAVING
> > > > > > + * (stop-and-copy) until pending_bytes is 0.
> > > > > > + *
> > > > > > + * The sequence to be followed while _RESUMING device state is as follows:
> > > > > > + * While data for this device is available, repeat the following steps:
> > > > > > + * a. Read data_offset from where the user application should write data.
> > > > > > + * b. Write migration data starting at the migration region + data_offset for
> > > > > > + *    the length determined by data_size from the migration source.
> > > > > > + * c. Write data_size, which indicates to the vendor driver that data is
> > > > > > + *    written in the migration region. Vendor driver should apply the
> > > > > > + *    user-provided migration region data to the device resume state.
> > > > > > + *
> > > > > > + * For the user application, data is opaque. The user application should write
> > > > > > + * data in the same order as the data is received and the data should be of
> > > > > > + * same transaction size at the source.
> > > > > > + */
> > > > > > +
> > > > > > +struct vfio_device_migration_info {
> > > > > > +	__u32 device_state;         /* VFIO device state */
> > > > > > +#define VFIO_DEVICE_STATE_STOP      (0)
> > > > > > +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> > > > > > +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> > > > > > +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> > > > > > +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> > > > > > +				     VFIO_DEVICE_STATE_SAVING |  \
> > > > > > +				     VFIO_DEVICE_STATE_RESUMING)
> > > > > > +
> > > > > > +#define VFIO_DEVICE_STATE_VALID(state) \
> > > > > > +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> > > > > > +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> > > > > > +
> > > > > > +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> > > > > > +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> > > > > > +					      VFIO_DEVICE_STATE_RESUMING))
> > > > > > +
> > > > > > +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> > > > > > +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> > > > > > +					     VFIO_DEVICE_STATE_RESUMING)
> > > > > > +
> > > > > > +	__u32 reserved;
> > > > > > +	__u64 pending_bytes;
> > > > > > +	__u64 data_offset;
> > > > > > +	__u64 data_size;
> > > > > > +} __attribute__((packed));
> > > > > > +
> > > > > >  /*
> > > > > >   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> > > > > >   * which allows direct access to non-MSIX registers which happened to be within
> > > > > > -- 
> > > > > > 2.7.0
> > > > > >       
> > > > >     
> > > >     
> > >   
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-20  2:34             ` Alex Williamson
@ 2020-03-20  3:06               ` Yan Zhao
  2020-03-20  4:09                 ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Yan Zhao @ 2020-03-20  3:06 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Fri, Mar 20, 2020 at 10:34:40AM +0800, Alex Williamson wrote:
> On Thu, 19 Mar 2020 21:30:39 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> > On Thu, Mar 19, 2020 at 09:09:21PM +0800, Alex Williamson wrote:
> > > On Thu, 19 Mar 2020 01:05:54 -0400
> > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > >   
> > > > On Thu, Mar 19, 2020 at 11:49:26AM +0800, Alex Williamson wrote:  
> > > > > On Wed, 18 Mar 2020 21:17:03 -0400
> > > > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > > >     
> > > > > > On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:    
> > > > > > > - Defined MIGRATION region type and sub-type.
> > > > > > > 
> > > > > > > - Defined vfio_device_migration_info structure which will be placed at the
> > > > > > >   0th offset of migration region to get/set VFIO device related
> > > > > > >   information. Defined members of structure and usage on read/write access.
> > > > > > > 
> > > > > > > - Defined device states and state transition details.
> > > > > > > 
> > > > > > > - Defined sequence to be followed while saving and resuming VFIO device.
> > > > > > > 
> > > > > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > > > > ---
> > > > > > >  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
> > > > > > >  1 file changed, 227 insertions(+)
> > > > > > > 
> > > > > > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > > > > > index 9e843a147ead..d0021467af53 100644
> > > > > > > --- a/include/uapi/linux/vfio.h
> > > > > > > +++ b/include/uapi/linux/vfio.h
> > > > > > > @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
> > > > > > >  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
> > > > > > >  #define VFIO_REGION_TYPE_GFX                    (1)
> > > > > > >  #define VFIO_REGION_TYPE_CCW			(2)
> > > > > > > +#define VFIO_REGION_TYPE_MIGRATION              (3)
> > > > > > >  
> > > > > > >  /* sub-types for VFIO_REGION_TYPE_PCI_* */
> > > > > > >  
> > > > > > > @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
> > > > > > >  /* sub-types for VFIO_REGION_TYPE_CCW */
> > > > > > >  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
> > > > > > >  
> > > > > > > +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> > > > > > > +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * The structure vfio_device_migration_info is placed at the 0th offset of
> > > > > > > + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> > > > > > > + * migration information. Field accesses from this structure are only supported
> > > > > > > + * at their native width and alignment. Otherwise, the result is undefined and
> > > > > > > + * vendor drivers should return an error.
> > > > > > > + *
> > > > > > > + * device_state: (read/write)
> > > > > > > + *      - The user application writes to this field to inform the vendor driver
> > > > > > > + *        about the device state to be transitioned to.
> > > > > > > + *      - The vendor driver should take the necessary actions to change the
> > > > > > > + *        device state. After successful transition to a given state, the
> > > > > > > + *        vendor driver should return success on write(device_state, state)
> > > > > > > + *        system call. If the device state transition fails, the vendor driver
> > > > > > > + *        should return an appropriate -errno for the fault condition.
> > > > > > > + *      - On the user application side, if the device state transition fails,
> > > > > > > + *	  that is, if write(device_state, state) returns an error, read
> > > > > > > + *	  device_state again to determine the current state of the device from
> > > > > > > + *	  the vendor driver.
> > > > > > > + *      - The vendor driver should return previous state of the device unless
> > > > > > > + *        the vendor driver has encountered an internal error, in which case
> > > > > > > + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> > > > > > > + *      - The user application must use the device reset ioctl to recover the
> > > > > > > + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> > > > > > > + *        indicated to be in a valid device state by reading device_state, the
> > > > > > > + *        user application may attempt to transition the device to any valid
> > > > > > > + *        state reachable from the current state or terminate itself.
> > > > > > > + *
> > > > > > > + *      device_state consists of 3 bits:
> > > > > > > + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> > > > > > > + *        it indicates the _STOP state. When the device state is changed to
> > > > > > > + *        _STOP, driver should stop the device before write() returns.
> > > > > > > + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> > > > > > > + *        driver should start gathering device state information that will be
> > > > > > > + *        provided to the VFIO user application to save the device's state.
> > > > > > > + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> > > > > > > + *        the driver should prepare to resume the device. Data provided through
> > > > > > > + *        the migration region should be used to resume the device.
> > > > > > > + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> > > > > > > + *      application should perform a read-modify-write operation on this
> > > > > > > + *      field when modifying the specified bits.
> > > > > > > + *
> > > > > > > + *  +------- _RESUMING
> > > > > > > + *  |+------ _SAVING
> > > > > > > + *  ||+----- _RUNNING
> > > > > > > + *  |||
> > > > > > > + *  000b => Device Stopped, not saving or resuming
> > > > > > > + *  001b => Device running, which is the default state
> > > > > > > + *  010b => Stop the device & save the device state, stop-and-copy state
> > > > > > > + *  011b => Device running and save the device state, pre-copy state
> > > > > > > + *  100b => Device stopped and the device state is resuming
> > > > > > > + *  101b => Invalid state
> > > > > > > + *  110b => Error state
> > > > > > > + *  111b => Invalid state
> > > > > > > + *
> > > > > > > + * State transitions:
> > > > > > > + *
> > > > > > > + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> > > > > > > + *                (100b)     (001b)     (011b)        (010b)       (000b)
> > > > > > > + * 0. Running or default state
> > > > > > > + *                             |
> > > > > > > + *
> > > > > > > + * 1. Normal Shutdown (optional)
> > > > > > > + *                             |------------------------------------->|
> > > > > > > + *
> > > > > > > + * 2. Save the state or suspend
> > > > > > > + *                             |------------------------->|---------->|
> > > > > > > + *
> > > > > > > + * 3. Save the state during live migration
> > > > > > > + *                             |----------->|------------>|---------->|
> > > > > > > + *
> > > > > > > + * 4. Resuming
> > > > > > > + *                  |<---------|
> > > > > > > + *
> > > > > > > + * 5. Resumed
> > > > > > > + *                  |--------->|
> > > > > > > + *
> > > > > > > + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> > > > > > > + * 1. During normal shutdown of the user application, the user application may
> > > > > > > + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> > > > > > > + *    transition is optional. The vendor driver must support this transition but
> > > > > > > + *    must not require it.
> > > > > > > + * 2. When the user application saves state or suspends the application, the
> > > > > > > + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> > > > > > > + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> > > > > > > + *    device, save the device state and send it to the application through the
> > > > > > > + *    migration region. The sequence to be followed for such transition is given
> > > > > > > + *    below.
> > > > > > > + * 3. In live migration of user application, the state transitions from _RUNNING
> > > > > > > + *    to pre-copy, to stop-and-copy, and to _STOP.
> > > > > > > + *    On state transition from _RUNNING to pre-copy, the driver should start
> > > > > > > + *    gathering the device state while the application is still running and send
> > > > > > > + *    the device state data to application through the migration region.
> > > > > > > + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> > > > > > > + *    the device, save the device state and send it to the user application
> > > > > > > + *    through the migration region.
> > > > > > > + *    Vendor drivers must support the pre-copy state even for implementations
> > > > > > > + *    where no data is provided to the user before the stop-and-copy state. The
> > > > > > > + *    user must not be required to consume all migration data before the device
> > > > > > > + *    transitions to a new state, including the stop-and-copy state.
> > > > > > > + *    The sequence to be followed for above two transitions is given below.
> > > > > > > + * 4. To start the resuming phase, the device state should be transitioned from
> > > > > > > + *    the _RUNNING to the _RESUMING state.
> > > > > > > + *    In the _RESUMING state, the driver should use the device state data
> > > > > > > + *    received through the migration region to resume the device.
> > > > > > > + * 5. After providing saved device data to the driver, the application should
> > > > > > > + *    change the state from _RESUMING to _RUNNING.
> > > > > > > + *
> > > > > > > + * reserved:
> > > > > > > + *      Reads on this field return zero and writes are ignored.
> > > > > > > + *
> > > > > > > + * pending_bytes: (read only)
> > > > > > > + *      The number of pending bytes still to be migrated from the vendor driver.
> > > > > > > + *
> > > > > > > + * data_offset: (read only)
> > > > > > > + *      The user application should read data_offset in the migration region
> > > > > > > + *      from where the user application should read the device data during the
> > > > > > > + *      _SAVING state or write the device data during the _RESUMING state. See
> > > > > > > + *      below for details of sequence to be followed.
> > > > > > > + *
> > > > > > > + * data_size: (read/write)
> > > > > > > + *      The user application should read data_size to get the size in bytes of
> > > > > > > + *      the data copied in the migration region during the _SAVING state and
> > > > > > > + *      write the size in bytes of the data copied in the migration region
> > > > > > > + *      during the _RESUMING state.
> > > > > > > + *
> > > > > > > + * The format of the migration region is as follows:
> > > > > > > + *  ------------------------------------------------------------------
> > > > > > > + * |vfio_device_migration_info|    data section                      |
> > > > > > > + * |                          |     ///////////////////////////////  |
> > > > > > > + * ------------------------------------------------------------------
> > > > > > > + *   ^                              ^
> > > > > > > + *  offset 0-trapped part        data_offset
> > > > > > > + *
> > > > > > > + * The structure vfio_device_migration_info is always followed by the data
> > > > > > > + * section in the region, so data_offset will always be nonzero. The offset
> > > > > > > + * from where the data is copied is decided by the kernel driver. The data
> > > > > > > + * section can be trapped, mapped, or partitioned, depending on how the kernel
> > > > > > > + * driver defines the data section. The data section partition can be defined
> > > > > > > + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> > > > > > > + * page aligned, whereas initial section which contains the
> > > > > > > + * vfio_device_migration_info structure, might not end at the offset, which is
> > > > > > > + * page aligned. The user is not required to access through mmap regardless
> > > > > > > + * of the capabilities of the region mmap.
> > > > > > > + * The vendor driver should determine whether and how to partition the data
> > > > > > > + * section. The vendor driver should return data_offset accordingly.
> > > > > > > + *
> > > > > > > + * The sequence to be followed for the _SAVING|_RUNNING device state or
> > > > > > > + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
> > > > > > > + * follows:
> > > > > > > + * a. Read pending_bytes, indicating the start of a new iteration to get device
> > > > > > > + *    data. Repeated read on pending_bytes at this stage should have no side
> > > > > > > + *    effects.
> > > > > > > + *    If pending_bytes == 0, the user application should not iterate to get data
> > > > > > > + *    for that device.
> > > > > > > + *    If pending_bytes > 0, perform the following steps.
> > > > > > > + * b. Read data_offset, indicating that the vendor driver should make data
> > > > > > > + *    available through the data section. The vendor driver should return this
> > > > > > > + *    read operation only after data is available from (region + data_offset)
> > > > > > > + *    to (region + data_offset + data_size).
> > > > > > > + * c. Read data_size, which is the amount of data in bytes available through
> > > > > > > + *    the migration region.
> > > > > > > + *    Read on data_offset and data_size should return the offset and size of
> > > > > > > + *    the current buffer if the user application reads data_offset and
> > > > > > > + *    data_size more than once here.      
> > > > > > If data region is mmaped, merely reading data_offset and data_size
> > > > > > cannot let kernel know what are correct values to return.
> > > > > > Consider to add a read operation which is trapped into kernel to let
> > > > > > kernel exactly know it needs to move to the next offset and update data_size
> > > > > > ?    
> > > > > 
> > > > > Both operations b. and c. above are to trapped registers, operation d.
> > > > > below may potentially be to an mmap'd area, which is why we have step
> > > > > f. which indicates to the vendor driver that the data has been
> > > > > consumed.  Does that address your concern?  Thanks,
> > > > >    
> > > > No. :)
> > > > the problem is about semantics of data_offset, data_size, and
> > > > pending_bytes.
> > > > b and c do not tell kernel that the data is read by user.
> > > > so, without knowing step d happen, kernel cannot update pending_bytes to
> > > > be returned in step f.  
> > > 
> > > Sorry, I'm still not understanding, I see step f. as the indicator
> > > you're looking for.  The user reads pending_bytes to indicate the data
> > > in the migration area has been consumed.  The vendor driver updates its
> > > internal state on that read and returns the updated value for
> > > pending_bytes.  Thanks,
> > >   
> > we could not regard reading of pending_bytes as an indicator of
> > migration data consumed.
> > 
> > for 1, in migration thread, read of pending_bytes is called every
> > iteration, but reads of data_size & data_offset are not (they are
> > skippable). so it's possible that the sequence is like
> > (1) reading of pending_bytes
> > (2) reading of pending_bytes
> > (3) reading of pending_bytes
> > (4) reading of data_offset & data_size
> > (5) reading of pending_bytes
> > 
> > for 2, it's not right to force kernel to understand qemu's sequence and
> > decide that only a read of pending_bytes after reads of data_offset & data_size
> > indicates data has been consumed.
> > 
> > Agree?
> 
> No, not really.  We're defining an API that enables the above sequence,
> but doesn't require the kernel to understand QEMU's sequence.
> Specifically, pending_bytes may be read without side-effects except for
> when data is queued to read through the data area of the region.  The
> user queues data to read by reading data_offset.  The user then reads
> data_size to determine the currently available data chunk size.  This
> is followed by consuming the data from the region offset + data_offset.
> Only after reading data_offset does the read of pending_bytes signal to
> the vendor driver that the user has consumed the data.
> 
> If the user were to re-read pending_bytes before consuming the data,
> then the data_offset and data_size they may have read is invalid and
> they've violated the defined protocol.  We do not, nor do I think we
> could, make this a fool proof interface.  The user must adhere to the
> protocol, but I believe the specific sequence you've identified is
> fully enabled here.  Please confirm.  Thanks,
> 
 c. Read data_size, which is the amount of data in bytes available through
  the migration region.
  Read on data_offset and data_size should return the offset and size of
  the current buffer if the user application reads data_offset and
  data_size more than once here.      

so, if the sequence is like this:
 (1) reading of pending_bytes
 (2) reading of data_offset & data_size
 (3) reading of data_offset & data_size
 (4) reading of data_offset & data_size
 (5) reading of pending_bytes
(2)-(4) should return the same values (and different values are allowed)
In step (5), pending_bytes should be the value in step (1) - data_size in
step (4).

Is this understanding right?

Thanks
Yan

> 
> > > > > > > + * d. Read data_size bytes of data from (region + data_offset) from the
> > > > > > > + *    migration region.
> > > > > > > + * e. Process the data.
> > > > > > > + * f. Read pending_bytes, which indicates that the data from the previous
> > > > > > > + *    iteration has been read. If pending_bytes > 0, go to step b.
> > > > > > > + *
> > > > > > > + * If an error occurs during the above sequence, the vendor driver can return
> > > > > > > + * an error code for next read() or write() operation, which will terminate the
> > > > > > > + * loop. The user application should then take the next necessary action, for
> > > > > > > + * example, failing migration or terminating the user application.
> > > > > > > + *
> > > > > > > + * The user application can transition from the _SAVING|_RUNNING
> > > > > > > + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> > > > > > > + * number of pending bytes. The user application should iterate in _SAVING
> > > > > > > + * (stop-and-copy) until pending_bytes is 0.
> > > > > > > + *
> > > > > > > + * The sequence to be followed while _RESUMING device state is as follows:
> > > > > > > + * While data for this device is available, repeat the following steps:
> > > > > > > + * a. Read data_offset from where the user application should write data.
> > > > > > > + * b. Write migration data starting at the migration region + data_offset for
> > > > > > > + *    the length determined by data_size from the migration source.
> > > > > > > + * c. Write data_size, which indicates to the vendor driver that data is
> > > > > > > + *    written in the migration region. Vendor driver should apply the
> > > > > > > + *    user-provided migration region data to the device resume state.
> > > > > > > + *
> > > > > > > + * For the user application, data is opaque. The user application should write
> > > > > > > + * data in the same order as the data is received and the data should be of
> > > > > > > + * same transaction size at the source.
> > > > > > > + */
> > > > > > > +
> > > > > > > +struct vfio_device_migration_info {
> > > > > > > +	__u32 device_state;         /* VFIO device state */
> > > > > > > +#define VFIO_DEVICE_STATE_STOP      (0)
> > > > > > > +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> > > > > > > +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> > > > > > > +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> > > > > > > +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> > > > > > > +				     VFIO_DEVICE_STATE_SAVING |  \
> > > > > > > +				     VFIO_DEVICE_STATE_RESUMING)
> > > > > > > +
> > > > > > > +#define VFIO_DEVICE_STATE_VALID(state) \
> > > > > > > +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> > > > > > > +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> > > > > > > +
> > > > > > > +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> > > > > > > +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> > > > > > > +					      VFIO_DEVICE_STATE_RESUMING))
> > > > > > > +
> > > > > > > +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> > > > > > > +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> > > > > > > +					     VFIO_DEVICE_STATE_RESUMING)
> > > > > > > +
> > > > > > > +	__u32 reserved;
> > > > > > > +	__u64 pending_bytes;
> > > > > > > +	__u64 data_offset;
> > > > > > > +	__u64 data_size;
> > > > > > > +} __attribute__((packed));
> > > > > > > +
> > > > > > >  /*
> > > > > > >   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> > > > > > >   * which allows direct access to non-MSIX registers which happened to be within
> > > > > > > -- 
> > > > > > > 2.7.0
> > > > > > >       
> > > > > >     
> > > > >     
> > > >   
> > >   
> > 
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-20  3:06               ` Yan Zhao
@ 2020-03-20  4:09                 ` Alex Williamson
  2020-03-20  4:20                   ` Yan Zhao
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-20  4:09 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, 19 Mar 2020 23:06:56 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Fri, Mar 20, 2020 at 10:34:40AM +0800, Alex Williamson wrote:
> > On Thu, 19 Mar 2020 21:30:39 -0400
> > Yan Zhao <yan.y.zhao@intel.com> wrote:
> >   
> > > On Thu, Mar 19, 2020 at 09:09:21PM +0800, Alex Williamson wrote:  
> > > > On Thu, 19 Mar 2020 01:05:54 -0400
> > > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > >     
> > > > > On Thu, Mar 19, 2020 at 11:49:26AM +0800, Alex Williamson wrote:    
> > > > > > On Wed, 18 Mar 2020 21:17:03 -0400
> > > > > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > > > >       
> > > > > > > On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:      
> > > > > > > > - Defined MIGRATION region type and sub-type.
> > > > > > > > 
> > > > > > > > - Defined vfio_device_migration_info structure which will be placed at the
> > > > > > > >   0th offset of migration region to get/set VFIO device related
> > > > > > > >   information. Defined members of structure and usage on read/write access.
> > > > > > > > 
> > > > > > > > - Defined device states and state transition details.
> > > > > > > > 
> > > > > > > > - Defined sequence to be followed while saving and resuming VFIO device.
> > > > > > > > 
> > > > > > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > > > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > > > > > ---
> > > > > > > >  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
> > > > > > > >  1 file changed, 227 insertions(+)
> > > > > > > > 
> > > > > > > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > > > > > > index 9e843a147ead..d0021467af53 100644
> > > > > > > > --- a/include/uapi/linux/vfio.h
> > > > > > > > +++ b/include/uapi/linux/vfio.h
> > > > > > > > @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
> > > > > > > >  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
> > > > > > > >  #define VFIO_REGION_TYPE_GFX                    (1)
> > > > > > > >  #define VFIO_REGION_TYPE_CCW			(2)
> > > > > > > > +#define VFIO_REGION_TYPE_MIGRATION              (3)
> > > > > > > >  
> > > > > > > >  /* sub-types for VFIO_REGION_TYPE_PCI_* */
> > > > > > > >  
> > > > > > > > @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
> > > > > > > >  /* sub-types for VFIO_REGION_TYPE_CCW */
> > > > > > > >  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
> > > > > > > >  
> > > > > > > > +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> > > > > > > > +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * The structure vfio_device_migration_info is placed at the 0th offset of
> > > > > > > > + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> > > > > > > > + * migration information. Field accesses from this structure are only supported
> > > > > > > > + * at their native width and alignment. Otherwise, the result is undefined and
> > > > > > > > + * vendor drivers should return an error.
> > > > > > > > + *
> > > > > > > > + * device_state: (read/write)
> > > > > > > > + *      - The user application writes to this field to inform the vendor driver
> > > > > > > > + *        about the device state to be transitioned to.
> > > > > > > > + *      - The vendor driver should take the necessary actions to change the
> > > > > > > > + *        device state. After successful transition to a given state, the
> > > > > > > > + *        vendor driver should return success on write(device_state, state)
> > > > > > > > + *        system call. If the device state transition fails, the vendor driver
> > > > > > > > + *        should return an appropriate -errno for the fault condition.
> > > > > > > > + *      - On the user application side, if the device state transition fails,
> > > > > > > > + *	  that is, if write(device_state, state) returns an error, read
> > > > > > > > + *	  device_state again to determine the current state of the device from
> > > > > > > > + *	  the vendor driver.
> > > > > > > > + *      - The vendor driver should return previous state of the device unless
> > > > > > > > + *        the vendor driver has encountered an internal error, in which case
> > > > > > > > + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> > > > > > > > + *      - The user application must use the device reset ioctl to recover the
> > > > > > > > + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> > > > > > > > + *        indicated to be in a valid device state by reading device_state, the
> > > > > > > > + *        user application may attempt to transition the device to any valid
> > > > > > > > + *        state reachable from the current state or terminate itself.
> > > > > > > > + *
> > > > > > > > + *      device_state consists of 3 bits:
> > > > > > > > + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> > > > > > > > + *        it indicates the _STOP state. When the device state is changed to
> > > > > > > > + *        _STOP, driver should stop the device before write() returns.
> > > > > > > > + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> > > > > > > > + *        driver should start gathering device state information that will be
> > > > > > > > + *        provided to the VFIO user application to save the device's state.
> > > > > > > > + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> > > > > > > > + *        the driver should prepare to resume the device. Data provided through
> > > > > > > > + *        the migration region should be used to resume the device.
> > > > > > > > + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> > > > > > > > + *      application should perform a read-modify-write operation on this
> > > > > > > > + *      field when modifying the specified bits.
> > > > > > > > + *
> > > > > > > > + *  +------- _RESUMING
> > > > > > > > + *  |+------ _SAVING
> > > > > > > > + *  ||+----- _RUNNING
> > > > > > > > + *  |||
> > > > > > > > + *  000b => Device Stopped, not saving or resuming
> > > > > > > > + *  001b => Device running, which is the default state
> > > > > > > > + *  010b => Stop the device & save the device state, stop-and-copy state
> > > > > > > > + *  011b => Device running and save the device state, pre-copy state
> > > > > > > > + *  100b => Device stopped and the device state is resuming
> > > > > > > > + *  101b => Invalid state
> > > > > > > > + *  110b => Error state
> > > > > > > > + *  111b => Invalid state
> > > > > > > > + *
> > > > > > > > + * State transitions:
> > > > > > > > + *
> > > > > > > > + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> > > > > > > > + *                (100b)     (001b)     (011b)        (010b)       (000b)
> > > > > > > > + * 0. Running or default state
> > > > > > > > + *                             |
> > > > > > > > + *
> > > > > > > > + * 1. Normal Shutdown (optional)
> > > > > > > > + *                             |------------------------------------->|
> > > > > > > > + *
> > > > > > > > + * 2. Save the state or suspend
> > > > > > > > + *                             |------------------------->|---------->|
> > > > > > > > + *
> > > > > > > > + * 3. Save the state during live migration
> > > > > > > > + *                             |----------->|------------>|---------->|
> > > > > > > > + *
> > > > > > > > + * 4. Resuming
> > > > > > > > + *                  |<---------|
> > > > > > > > + *
> > > > > > > > + * 5. Resumed
> > > > > > > > + *                  |--------->|
> > > > > > > > + *
> > > > > > > > + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> > > > > > > > + * 1. During normal shutdown of the user application, the user application may
> > > > > > > > + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> > > > > > > > + *    transition is optional. The vendor driver must support this transition but
> > > > > > > > + *    must not require it.
> > > > > > > > + * 2. When the user application saves state or suspends the application, the
> > > > > > > > + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> > > > > > > > + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> > > > > > > > + *    device, save the device state and send it to the application through the
> > > > > > > > + *    migration region. The sequence to be followed for such transition is given
> > > > > > > > + *    below.
> > > > > > > > + * 3. In live migration of user application, the state transitions from _RUNNING
> > > > > > > > + *    to pre-copy, to stop-and-copy, and to _STOP.
> > > > > > > > + *    On state transition from _RUNNING to pre-copy, the driver should start
> > > > > > > > + *    gathering the device state while the application is still running and send
> > > > > > > > + *    the device state data to application through the migration region.
> > > > > > > > + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> > > > > > > > + *    the device, save the device state and send it to the user application
> > > > > > > > + *    through the migration region.
> > > > > > > > + *    Vendor drivers must support the pre-copy state even for implementations
> > > > > > > > + *    where no data is provided to the user before the stop-and-copy state. The
> > > > > > > > + *    user must not be required to consume all migration data before the device
> > > > > > > > + *    transitions to a new state, including the stop-and-copy state.
> > > > > > > > + *    The sequence to be followed for above two transitions is given below.
> > > > > > > > + * 4. To start the resuming phase, the device state should be transitioned from
> > > > > > > > + *    the _RUNNING to the _RESUMING state.
> > > > > > > > + *    In the _RESUMING state, the driver should use the device state data
> > > > > > > > + *    received through the migration region to resume the device.
> > > > > > > > + * 5. After providing saved device data to the driver, the application should
> > > > > > > > + *    change the state from _RESUMING to _RUNNING.
> > > > > > > > + *
> > > > > > > > + * reserved:
> > > > > > > > + *      Reads on this field return zero and writes are ignored.
> > > > > > > > + *
> > > > > > > > + * pending_bytes: (read only)
> > > > > > > > + *      The number of pending bytes still to be migrated from the vendor driver.
> > > > > > > > + *
> > > > > > > > + * data_offset: (read only)
> > > > > > > > + *      The user application should read data_offset in the migration region
> > > > > > > > + *      from where the user application should read the device data during the
> > > > > > > > + *      _SAVING state or write the device data during the _RESUMING state. See
> > > > > > > > + *      below for details of sequence to be followed.
> > > > > > > > + *
> > > > > > > > + * data_size: (read/write)
> > > > > > > > + *      The user application should read data_size to get the size in bytes of
> > > > > > > > + *      the data copied in the migration region during the _SAVING state and
> > > > > > > > + *      write the size in bytes of the data copied in the migration region
> > > > > > > > + *      during the _RESUMING state.
> > > > > > > > + *
> > > > > > > > + * The format of the migration region is as follows:
> > > > > > > > + *  ------------------------------------------------------------------
> > > > > > > > + * |vfio_device_migration_info|    data section                      |
> > > > > > > > + * |                          |     ///////////////////////////////  |
> > > > > > > > + * ------------------------------------------------------------------
> > > > > > > > + *   ^                              ^
> > > > > > > > + *  offset 0-trapped part        data_offset
> > > > > > > > + *
> > > > > > > > + * The structure vfio_device_migration_info is always followed by the data
> > > > > > > > + * section in the region, so data_offset will always be nonzero. The offset
> > > > > > > > + * from where the data is copied is decided by the kernel driver. The data
> > > > > > > > + * section can be trapped, mapped, or partitioned, depending on how the kernel
> > > > > > > > + * driver defines the data section. The data section partition can be defined
> > > > > > > > + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> > > > > > > > + * page aligned, whereas initial section which contains the
> > > > > > > > + * vfio_device_migration_info structure, might not end at the offset, which is
> > > > > > > > + * page aligned. The user is not required to access through mmap regardless
> > > > > > > > + * of the capabilities of the region mmap.
> > > > > > > > + * The vendor driver should determine whether and how to partition the data
> > > > > > > > + * section. The vendor driver should return data_offset accordingly.
> > > > > > > > + *
> > > > > > > > + * The sequence to be followed for the _SAVING|_RUNNING device state or
> > > > > > > > + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
> > > > > > > > + * follows:
> > > > > > > > + * a. Read pending_bytes, indicating the start of a new iteration to get device
> > > > > > > > + *    data. Repeated read on pending_bytes at this stage should have no side
> > > > > > > > + *    effects.
> > > > > > > > + *    If pending_bytes == 0, the user application should not iterate to get data
> > > > > > > > + *    for that device.
> > > > > > > > + *    If pending_bytes > 0, perform the following steps.
> > > > > > > > + * b. Read data_offset, indicating that the vendor driver should make data
> > > > > > > > + *    available through the data section. The vendor driver should return this
> > > > > > > > + *    read operation only after data is available from (region + data_offset)
> > > > > > > > + *    to (region + data_offset + data_size).
> > > > > > > > + * c. Read data_size, which is the amount of data in bytes available through
> > > > > > > > + *    the migration region.
> > > > > > > > + *    Read on data_offset and data_size should return the offset and size of
> > > > > > > > + *    the current buffer if the user application reads data_offset and
> > > > > > > > + *    data_size more than once here.        
> > > > > > > If data region is mmaped, merely reading data_offset and data_size
> > > > > > > cannot let kernel know what are correct values to return.
> > > > > > > Consider to add a read operation which is trapped into kernel to let
> > > > > > > kernel exactly know it needs to move to the next offset and update data_size
> > > > > > > ?      
> > > > > > 
> > > > > > Both operations b. and c. above are to trapped registers, operation d.
> > > > > > below may potentially be to an mmap'd area, which is why we have step
> > > > > > f. which indicates to the vendor driver that the data has been
> > > > > > consumed.  Does that address your concern?  Thanks,
> > > > > >      
> > > > > No. :)
> > > > > the problem is about semantics of data_offset, data_size, and
> > > > > pending_bytes.
> > > > > b and c do not tell kernel that the data is read by user.
> > > > > so, without knowing step d happen, kernel cannot update pending_bytes to
> > > > > be returned in step f.    
> > > > 
> > > > Sorry, I'm still not understanding, I see step f. as the indicator
> > > > you're looking for.  The user reads pending_bytes to indicate the data
> > > > in the migration area has been consumed.  The vendor driver updates its
> > > > internal state on that read and returns the updated value for
> > > > pending_bytes.  Thanks,
> > > >     
> > > we could not regard reading of pending_bytes as an indicator of
> > > migration data consumed.
> > > 
> > > for 1, in migration thread, read of pending_bytes is called every
> > > iteration, but reads of data_size & data_offset are not (they are
> > > skippable). so it's possible that the sequence is like
> > > (1) reading of pending_bytes
> > > (2) reading of pending_bytes
> > > (3) reading of pending_bytes
> > > (4) reading of data_offset & data_size
> > > (5) reading of pending_bytes
> > > 
> > > for 2, it's not right to force kernel to understand qemu's sequence and
> > > decide that only a read of pending_bytes after reads of data_offset & data_size
> > > indicates data has been consumed.
> > > 
> > > Agree?  
> > 
> > No, not really.  We're defining an API that enables the above sequence,
> > but doesn't require the kernel to understand QEMU's sequence.
> > Specifically, pending_bytes may be read without side-effects except for
> > when data is queued to read through the data area of the region.  The
> > user queues data to read by reading data_offset.  The user then reads
> > data_size to determine the currently available data chunk size.  This
> > is followed by consuming the data from the region offset + data_offset.
> > Only after reading data_offset does the read of pending_bytes signal to
> > the vendor driver that the user has consumed the data.
> > 
> > If the user were to re-read pending_bytes before consuming the data,
> > then the data_offset and data_size they may have read is invalid and
> > they've violated the defined protocol.  We do not, nor do I think we
> > could, make this a fool proof interface.  The user must adhere to the
> > protocol, but I believe the specific sequence you've identified is
> > fully enabled here.  Please confirm.  Thanks,
> >   
>  c. Read data_size, which is the amount of data in bytes available through
>   the migration region.
>   Read on data_offset and data_size should return the offset and size of
>   the current buffer if the user application reads data_offset and
>   data_size more than once here.      
> 
> so, if the sequence is like this:
>  (1) reading of pending_bytes
>  (2) reading of data_offset & data_size
>  (3) reading of data_offset & data_size
>  (4) reading of data_offset & data_size
>  (5) reading of pending_bytes
> (2)-(4) should return the same values (and different values are allowed)
> In step (5), pending_bytes should be the value in step (1) - data_size in
> step (4).
> 
> Is this understanding right?

I believe that's correct except the user cannot presume the next value
of pending_bytes, the device might have generated more state between
steps (1) and (5).  If the device is stopped, this might be a
reasonable assumption, but the protocol is to rely on the device
reported pending_bytes rather than calculate.  The user is required to
read_pending bytes to increment to the next data chunk anyway.  Thanks,

Alex

> > > > > > > > + * d. Read data_size bytes of data from (region + data_offset) from the
> > > > > > > > + *    migration region.
> > > > > > > > + * e. Process the data.
> > > > > > > > + * f. Read pending_bytes, which indicates that the data from the previous
> > > > > > > > + *    iteration has been read. If pending_bytes > 0, go to step b.
> > > > > > > > + *
> > > > > > > > + * If an error occurs during the above sequence, the vendor driver can return
> > > > > > > > + * an error code for next read() or write() operation, which will terminate the
> > > > > > > > + * loop. The user application should then take the next necessary action, for
> > > > > > > > + * example, failing migration or terminating the user application.
> > > > > > > > + *
> > > > > > > > + * The user application can transition from the _SAVING|_RUNNING
> > > > > > > > + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> > > > > > > > + * number of pending bytes. The user application should iterate in _SAVING
> > > > > > > > + * (stop-and-copy) until pending_bytes is 0.
> > > > > > > > + *
> > > > > > > > + * The sequence to be followed while _RESUMING device state is as follows:
> > > > > > > > + * While data for this device is available, repeat the following steps:
> > > > > > > > + * a. Read data_offset from where the user application should write data.
> > > > > > > > + * b. Write migration data starting at the migration region + data_offset for
> > > > > > > > + *    the length determined by data_size from the migration source.
> > > > > > > > + * c. Write data_size, which indicates to the vendor driver that data is
> > > > > > > > + *    written in the migration region. Vendor driver should apply the
> > > > > > > > + *    user-provided migration region data to the device resume state.
> > > > > > > > + *
> > > > > > > > + * For the user application, data is opaque. The user application should write
> > > > > > > > + * data in the same order as the data is received and the data should be of
> > > > > > > > + * same transaction size at the source.
> > > > > > > > + */
> > > > > > > > +
> > > > > > > > +struct vfio_device_migration_info {
> > > > > > > > +	__u32 device_state;         /* VFIO device state */
> > > > > > > > +#define VFIO_DEVICE_STATE_STOP      (0)
> > > > > > > > +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> > > > > > > > +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> > > > > > > > +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> > > > > > > > +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> > > > > > > > +				     VFIO_DEVICE_STATE_SAVING |  \
> > > > > > > > +				     VFIO_DEVICE_STATE_RESUMING)
> > > > > > > > +
> > > > > > > > +#define VFIO_DEVICE_STATE_VALID(state) \
> > > > > > > > +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> > > > > > > > +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> > > > > > > > +
> > > > > > > > +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> > > > > > > > +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> > > > > > > > +					      VFIO_DEVICE_STATE_RESUMING))
> > > > > > > > +
> > > > > > > > +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> > > > > > > > +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> > > > > > > > +					     VFIO_DEVICE_STATE_RESUMING)
> > > > > > > > +
> > > > > > > > +	__u32 reserved;
> > > > > > > > +	__u64 pending_bytes;
> > > > > > > > +	__u64 data_offset;
> > > > > > > > +	__u64 data_size;
> > > > > > > > +} __attribute__((packed));
> > > > > > > > +
> > > > > > > >  /*
> > > > > > > >   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> > > > > > > >   * which allows direct access to non-MSIX registers which happened to be within
> > > > > > > > -- 
> > > > > > > > 2.7.0
> > > > > > > >         
> > > > > > >       
> > > > > >       
> > > > >     
> > > >     
> > >   
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-20  4:09                 ` Alex Williamson
@ 2020-03-20  4:20                   ` Yan Zhao
  0 siblings, 0 replies; 47+ messages in thread
From: Yan Zhao @ 2020-03-20  4:20 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Fri, Mar 20, 2020 at 12:09:18PM +0800, Alex Williamson wrote:
> On Thu, 19 Mar 2020 23:06:56 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> > On Fri, Mar 20, 2020 at 10:34:40AM +0800, Alex Williamson wrote:
> > > On Thu, 19 Mar 2020 21:30:39 -0400
> > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > >   
> > > > On Thu, Mar 19, 2020 at 09:09:21PM +0800, Alex Williamson wrote:  
> > > > > On Thu, 19 Mar 2020 01:05:54 -0400
> > > > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > > >     
> > > > > > On Thu, Mar 19, 2020 at 11:49:26AM +0800, Alex Williamson wrote:    
> > > > > > > On Wed, 18 Mar 2020 21:17:03 -0400
> > > > > > > Yan Zhao <yan.y.zhao@intel.com> wrote:
> > > > > > >       
> > > > > > > > On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:      
> > > > > > > > > - Defined MIGRATION region type and sub-type.
> > > > > > > > > 
> > > > > > > > > - Defined vfio_device_migration_info structure which will be placed at the
> > > > > > > > >   0th offset of migration region to get/set VFIO device related
> > > > > > > > >   information. Defined members of structure and usage on read/write access.
> > > > > > > > > 
> > > > > > > > > - Defined device states and state transition details.
> > > > > > > > > 
> > > > > > > > > - Defined sequence to be followed while saving and resuming VFIO device.
> > > > > > > > > 
> > > > > > > > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > > > > > > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > > > > > > > ---
> > > > > > > > >  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
> > > > > > > > >  1 file changed, 227 insertions(+)
> > > > > > > > > 
> > > > > > > > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > > > > > > > index 9e843a147ead..d0021467af53 100644
> > > > > > > > > --- a/include/uapi/linux/vfio.h
> > > > > > > > > +++ b/include/uapi/linux/vfio.h
> > > > > > > > > @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
> > > > > > > > >  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
> > > > > > > > >  #define VFIO_REGION_TYPE_GFX                    (1)
> > > > > > > > >  #define VFIO_REGION_TYPE_CCW			(2)
> > > > > > > > > +#define VFIO_REGION_TYPE_MIGRATION              (3)
> > > > > > > > >  
> > > > > > > > >  /* sub-types for VFIO_REGION_TYPE_PCI_* */
> > > > > > > > >  
> > > > > > > > > @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
> > > > > > > > >  /* sub-types for VFIO_REGION_TYPE_CCW */
> > > > > > > > >  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
> > > > > > > > >  
> > > > > > > > > +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> > > > > > > > > +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> > > > > > > > > +
> > > > > > > > > +/*
> > > > > > > > > + * The structure vfio_device_migration_info is placed at the 0th offset of
> > > > > > > > > + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> > > > > > > > > + * migration information. Field accesses from this structure are only supported
> > > > > > > > > + * at their native width and alignment. Otherwise, the result is undefined and
> > > > > > > > > + * vendor drivers should return an error.
> > > > > > > > > + *
> > > > > > > > > + * device_state: (read/write)
> > > > > > > > > + *      - The user application writes to this field to inform the vendor driver
> > > > > > > > > + *        about the device state to be transitioned to.
> > > > > > > > > + *      - The vendor driver should take the necessary actions to change the
> > > > > > > > > + *        device state. After successful transition to a given state, the
> > > > > > > > > + *        vendor driver should return success on write(device_state, state)
> > > > > > > > > + *        system call. If the device state transition fails, the vendor driver
> > > > > > > > > + *        should return an appropriate -errno for the fault condition.
> > > > > > > > > + *      - On the user application side, if the device state transition fails,
> > > > > > > > > + *	  that is, if write(device_state, state) returns an error, read
> > > > > > > > > + *	  device_state again to determine the current state of the device from
> > > > > > > > > + *	  the vendor driver.
> > > > > > > > > + *      - The vendor driver should return previous state of the device unless
> > > > > > > > > + *        the vendor driver has encountered an internal error, in which case
> > > > > > > > > + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> > > > > > > > > + *      - The user application must use the device reset ioctl to recover the
> > > > > > > > > + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> > > > > > > > > + *        indicated to be in a valid device state by reading device_state, the
> > > > > > > > > + *        user application may attempt to transition the device to any valid
> > > > > > > > > + *        state reachable from the current state or terminate itself.
> > > > > > > > > + *
> > > > > > > > > + *      device_state consists of 3 bits:
> > > > > > > > > + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> > > > > > > > > + *        it indicates the _STOP state. When the device state is changed to
> > > > > > > > > + *        _STOP, driver should stop the device before write() returns.
> > > > > > > > > + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> > > > > > > > > + *        driver should start gathering device state information that will be
> > > > > > > > > + *        provided to the VFIO user application to save the device's state.
> > > > > > > > > + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> > > > > > > > > + *        the driver should prepare to resume the device. Data provided through
> > > > > > > > > + *        the migration region should be used to resume the device.
> > > > > > > > > + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> > > > > > > > > + *      application should perform a read-modify-write operation on this
> > > > > > > > > + *      field when modifying the specified bits.
> > > > > > > > > + *
> > > > > > > > > + *  +------- _RESUMING
> > > > > > > > > + *  |+------ _SAVING
> > > > > > > > > + *  ||+----- _RUNNING
> > > > > > > > > + *  |||
> > > > > > > > > + *  000b => Device Stopped, not saving or resuming
> > > > > > > > > + *  001b => Device running, which is the default state
> > > > > > > > > + *  010b => Stop the device & save the device state, stop-and-copy state
> > > > > > > > > + *  011b => Device running and save the device state, pre-copy state
> > > > > > > > > + *  100b => Device stopped and the device state is resuming
> > > > > > > > > + *  101b => Invalid state
> > > > > > > > > + *  110b => Error state
> > > > > > > > > + *  111b => Invalid state
> > > > > > > > > + *
> > > > > > > > > + * State transitions:
> > > > > > > > > + *
> > > > > > > > > + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> > > > > > > > > + *                (100b)     (001b)     (011b)        (010b)       (000b)
> > > > > > > > > + * 0. Running or default state
> > > > > > > > > + *                             |
> > > > > > > > > + *
> > > > > > > > > + * 1. Normal Shutdown (optional)
> > > > > > > > > + *                             |------------------------------------->|
> > > > > > > > > + *
> > > > > > > > > + * 2. Save the state or suspend
> > > > > > > > > + *                             |------------------------->|---------->|
> > > > > > > > > + *
> > > > > > > > > + * 3. Save the state during live migration
> > > > > > > > > + *                             |----------->|------------>|---------->|
> > > > > > > > > + *
> > > > > > > > > + * 4. Resuming
> > > > > > > > > + *                  |<---------|
> > > > > > > > > + *
> > > > > > > > > + * 5. Resumed
> > > > > > > > > + *                  |--------->|
> > > > > > > > > + *
> > > > > > > > > + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> > > > > > > > > + * 1. During normal shutdown of the user application, the user application may
> > > > > > > > > + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> > > > > > > > > + *    transition is optional. The vendor driver must support this transition but
> > > > > > > > > + *    must not require it.
> > > > > > > > > + * 2. When the user application saves state or suspends the application, the
> > > > > > > > > + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> > > > > > > > > + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> > > > > > > > > + *    device, save the device state and send it to the application through the
> > > > > > > > > + *    migration region. The sequence to be followed for such transition is given
> > > > > > > > > + *    below.
> > > > > > > > > + * 3. In live migration of user application, the state transitions from _RUNNING
> > > > > > > > > + *    to pre-copy, to stop-and-copy, and to _STOP.
> > > > > > > > > + *    On state transition from _RUNNING to pre-copy, the driver should start
> > > > > > > > > + *    gathering the device state while the application is still running and send
> > > > > > > > > + *    the device state data to application through the migration region.
> > > > > > > > > + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> > > > > > > > > + *    the device, save the device state and send it to the user application
> > > > > > > > > + *    through the migration region.
> > > > > > > > > + *    Vendor drivers must support the pre-copy state even for implementations
> > > > > > > > > + *    where no data is provided to the user before the stop-and-copy state. The
> > > > > > > > > + *    user must not be required to consume all migration data before the device
> > > > > > > > > + *    transitions to a new state, including the stop-and-copy state.
> > > > > > > > > + *    The sequence to be followed for above two transitions is given below.
> > > > > > > > > + * 4. To start the resuming phase, the device state should be transitioned from
> > > > > > > > > + *    the _RUNNING to the _RESUMING state.
> > > > > > > > > + *    In the _RESUMING state, the driver should use the device state data
> > > > > > > > > + *    received through the migration region to resume the device.
> > > > > > > > > + * 5. After providing saved device data to the driver, the application should
> > > > > > > > > + *    change the state from _RESUMING to _RUNNING.
> > > > > > > > > + *
> > > > > > > > > + * reserved:
> > > > > > > > > + *      Reads on this field return zero and writes are ignored.
> > > > > > > > > + *
> > > > > > > > > + * pending_bytes: (read only)
> > > > > > > > > + *      The number of pending bytes still to be migrated from the vendor driver.
> > > > > > > > > + *
> > > > > > > > > + * data_offset: (read only)
> > > > > > > > > + *      The user application should read data_offset in the migration region
> > > > > > > > > + *      from where the user application should read the device data during the
> > > > > > > > > + *      _SAVING state or write the device data during the _RESUMING state. See
> > > > > > > > > + *      below for details of sequence to be followed.
> > > > > > > > > + *
> > > > > > > > > + * data_size: (read/write)
> > > > > > > > > + *      The user application should read data_size to get the size in bytes of
> > > > > > > > > + *      the data copied in the migration region during the _SAVING state and
> > > > > > > > > + *      write the size in bytes of the data copied in the migration region
> > > > > > > > > + *      during the _RESUMING state.
> > > > > > > > > + *
> > > > > > > > > + * The format of the migration region is as follows:
> > > > > > > > > + *  ------------------------------------------------------------------
> > > > > > > > > + * |vfio_device_migration_info|    data section                      |
> > > > > > > > > + * |                          |     ///////////////////////////////  |
> > > > > > > > > + * ------------------------------------------------------------------
> > > > > > > > > + *   ^                              ^
> > > > > > > > > + *  offset 0-trapped part        data_offset
> > > > > > > > > + *
> > > > > > > > > + * The structure vfio_device_migration_info is always followed by the data
> > > > > > > > > + * section in the region, so data_offset will always be nonzero. The offset
> > > > > > > > > + * from where the data is copied is decided by the kernel driver. The data
> > > > > > > > > + * section can be trapped, mapped, or partitioned, depending on how the kernel
> > > > > > > > > + * driver defines the data section. The data section partition can be defined
> > > > > > > > > + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> > > > > > > > > + * page aligned, whereas initial section which contains the
> > > > > > > > > + * vfio_device_migration_info structure, might not end at the offset, which is
> > > > > > > > > + * page aligned. The user is not required to access through mmap regardless
> > > > > > > > > + * of the capabilities of the region mmap.
> > > > > > > > > + * The vendor driver should determine whether and how to partition the data
> > > > > > > > > + * section. The vendor driver should return data_offset accordingly.
> > > > > > > > > + *
> > > > > > > > > + * The sequence to be followed for the _SAVING|_RUNNING device state or
> > > > > > > > > + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
> > > > > > > > > + * follows:
> > > > > > > > > + * a. Read pending_bytes, indicating the start of a new iteration to get device
> > > > > > > > > + *    data. Repeated read on pending_bytes at this stage should have no side
> > > > > > > > > + *    effects.
> > > > > > > > > + *    If pending_bytes == 0, the user application should not iterate to get data
> > > > > > > > > + *    for that device.
> > > > > > > > > + *    If pending_bytes > 0, perform the following steps.
> > > > > > > > > + * b. Read data_offset, indicating that the vendor driver should make data
> > > > > > > > > + *    available through the data section. The vendor driver should return this
> > > > > > > > > + *    read operation only after data is available from (region + data_offset)
> > > > > > > > > + *    to (region + data_offset + data_size).
> > > > > > > > > + * c. Read data_size, which is the amount of data in bytes available through
> > > > > > > > > + *    the migration region.
> > > > > > > > > + *    Read on data_offset and data_size should return the offset and size of
> > > > > > > > > + *    the current buffer if the user application reads data_offset and
> > > > > > > > > + *    data_size more than once here.        
> > > > > > > > If data region is mmaped, merely reading data_offset and data_size
> > > > > > > > cannot let kernel know what are correct values to return.
> > > > > > > > Consider to add a read operation which is trapped into kernel to let
> > > > > > > > kernel exactly know it needs to move to the next offset and update data_size
> > > > > > > > ?      
> > > > > > > 
> > > > > > > Both operations b. and c. above are to trapped registers, operation d.
> > > > > > > below may potentially be to an mmap'd area, which is why we have step
> > > > > > > f. which indicates to the vendor driver that the data has been
> > > > > > > consumed.  Does that address your concern?  Thanks,
> > > > > > >      
> > > > > > No. :)
> > > > > > the problem is about semantics of data_offset, data_size, and
> > > > > > pending_bytes.
> > > > > > b and c do not tell kernel that the data is read by user.
> > > > > > so, without knowing step d happen, kernel cannot update pending_bytes to
> > > > > > be returned in step f.    
> > > > > 
> > > > > Sorry, I'm still not understanding, I see step f. as the indicator
> > > > > you're looking for.  The user reads pending_bytes to indicate the data
> > > > > in the migration area has been consumed.  The vendor driver updates its
> > > > > internal state on that read and returns the updated value for
> > > > > pending_bytes.  Thanks,
> > > > >     
> > > > we could not regard reading of pending_bytes as an indicator of
> > > > migration data consumed.
> > > > 
> > > > for 1, in migration thread, read of pending_bytes is called every
> > > > iteration, but reads of data_size & data_offset are not (they are
> > > > skippable). so it's possible that the sequence is like
> > > > (1) reading of pending_bytes
> > > > (2) reading of pending_bytes
> > > > (3) reading of pending_bytes
> > > > (4) reading of data_offset & data_size
> > > > (5) reading of pending_bytes
> > > > 
> > > > for 2, it's not right to force kernel to understand qemu's sequence and
> > > > decide that only a read of pending_bytes after reads of data_offset & data_size
> > > > indicates data has been consumed.
> > > > 
> > > > Agree?  
> > > 
> > > No, not really.  We're defining an API that enables the above sequence,
> > > but doesn't require the kernel to understand QEMU's sequence.
> > > Specifically, pending_bytes may be read without side-effects except for
> > > when data is queued to read through the data area of the region.  The
> > > user queues data to read by reading data_offset.  The user then reads
> > > data_size to determine the currently available data chunk size.  This
> > > is followed by consuming the data from the region offset + data_offset.
> > > Only after reading data_offset does the read of pending_bytes signal to
> > > the vendor driver that the user has consumed the data.
> > > 
> > > If the user were to re-read pending_bytes before consuming the data,
> > > then the data_offset and data_size they may have read is invalid and
> > > they've violated the defined protocol.  We do not, nor do I think we
> > > could, make this a fool proof interface.  The user must adhere to the
> > > protocol, but I believe the specific sequence you've identified is
> > > fully enabled here.  Please confirm.  Thanks,
> > >   
> >  c. Read data_size, which is the amount of data in bytes available through
> >   the migration region.
> >   Read on data_offset and data_size should return the offset and size of
> >   the current buffer if the user application reads data_offset and
> >   data_size more than once here.      
> > 
> > so, if the sequence is like this:
> >  (1) reading of pending_bytes
> >  (2) reading of data_offset & data_size
> >  (3) reading of data_offset & data_size
> >  (4) reading of data_offset & data_size
> >  (5) reading of pending_bytes
> > (2)-(4) should return the same values (and different values are allowed)
> > In step (5), pending_bytes should be the value in step (1) - data_size in
> > step (4).
> > 
> > Is this understanding right?
> 
> I believe that's correct except the user cannot presume the next value
> of pending_bytes, the device might have generated more state between
> steps (1) and (5).  If the device is stopped, this might be a
> reasonable assumption, but the protocol is to rely on the device
> reported pending_bytes rather than calculate.  The user is required to
> read_pending bytes to increment to the next data chunk anyway.  Thanks,
> 
ok. got it.
Thanks
Yan

> 
> > > > > > > > > + * d. Read data_size bytes of data from (region + data_offset) from the
> > > > > > > > > + *    migration region.
> > > > > > > > > + * e. Process the data.
> > > > > > > > > + * f. Read pending_bytes, which indicates that the data from the previous
> > > > > > > > > + *    iteration has been read. If pending_bytes > 0, go to step b.
> > > > > > > > > + *
> > > > > > > > > + * If an error occurs during the above sequence, the vendor driver can return
> > > > > > > > > + * an error code for next read() or write() operation, which will terminate the
> > > > > > > > > + * loop. The user application should then take the next necessary action, for
> > > > > > > > > + * example, failing migration or terminating the user application.
> > > > > > > > > + *
> > > > > > > > > + * The user application can transition from the _SAVING|_RUNNING
> > > > > > > > > + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> > > > > > > > > + * number of pending bytes. The user application should iterate in _SAVING
> > > > > > > > > + * (stop-and-copy) until pending_bytes is 0.
> > > > > > > > > + *
> > > > > > > > > + * The sequence to be followed while _RESUMING device state is as follows:
> > > > > > > > > + * While data for this device is available, repeat the following steps:
> > > > > > > > > + * a. Read data_offset from where the user application should write data.
> > > > > > > > > + * b. Write migration data starting at the migration region + data_offset for
> > > > > > > > > + *    the length determined by data_size from the migration source.
> > > > > > > > > + * c. Write data_size, which indicates to the vendor driver that data is
> > > > > > > > > + *    written in the migration region. Vendor driver should apply the
> > > > > > > > > + *    user-provided migration region data to the device resume state.
> > > > > > > > > + *
> > > > > > > > > + * For the user application, data is opaque. The user application should write
> > > > > > > > > + * data in the same order as the data is received and the data should be of
> > > > > > > > > + * same transaction size at the source.
> > > > > > > > > + */
> > > > > > > > > +
> > > > > > > > > +struct vfio_device_migration_info {
> > > > > > > > > +	__u32 device_state;         /* VFIO device state */
> > > > > > > > > +#define VFIO_DEVICE_STATE_STOP      (0)
> > > > > > > > > +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> > > > > > > > > +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> > > > > > > > > +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> > > > > > > > > +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> > > > > > > > > +				     VFIO_DEVICE_STATE_SAVING |  \
> > > > > > > > > +				     VFIO_DEVICE_STATE_RESUMING)
> > > > > > > > > +
> > > > > > > > > +#define VFIO_DEVICE_STATE_VALID(state) \
> > > > > > > > > +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> > > > > > > > > +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> > > > > > > > > +
> > > > > > > > > +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> > > > > > > > > +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> > > > > > > > > +					      VFIO_DEVICE_STATE_RESUMING))
> > > > > > > > > +
> > > > > > > > > +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> > > > > > > > > +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> > > > > > > > > +					     VFIO_DEVICE_STATE_RESUMING)
> > > > > > > > > +
> > > > > > > > > +	__u32 reserved;
> > > > > > > > > +	__u64 pending_bytes;
> > > > > > > > > +	__u64 data_offset;
> > > > > > > > > +	__u64 data_size;
> > > > > > > > > +} __attribute__((packed));
> > > > > > > > > +
> > > > > > > > >  /*
> > > > > > > > >   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> > > > > > > > >   * which allows direct access to non-MSIX registers which happened to be within
> > > > > > > > > -- 
> > > > > > > > > 2.7.0
> > > > > > > > >         
> > > > > > > >       
> > > > > > >       
> > > > > >     
> > > > >     
> > > >   
> > >   
> > 
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-18 19:41 ` [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
  2020-03-19  3:45   ` Alex Williamson
@ 2020-03-20  8:35   ` Yan Zhao
  2020-03-20 15:40     ` Alex Williamson
  1 sibling, 1 reply; 47+ messages in thread
From: Yan Zhao @ 2020-03-20  8:35 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: alex.williamson, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, Mar 19, 2020 at 03:41:12AM +0800, Kirti Wankhede wrote:
> DMA mapped pages, including those pinned by mdev vendor drivers, might
> get unpinned and unmapped while migration is active and device is still
> running. For example, in pre-copy phase while guest driver could access
> those pages, host device or vendor driver can dirty these mapped pages.
> Such pages should be marked dirty so as to maintain memory consistency
> for a user making use of dirty page tracking.
> 
> To get bitmap during unmap, user should set flag
> VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
> zeroed by user space application. Bitmap size and page size should be set
> by user application.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 55 ++++++++++++++++++++++++++++++++++++++---
>  include/uapi/linux/vfio.h       | 11 +++++++++
>  2 files changed, 62 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index d6417fb02174..aa1ac30f7854 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -939,7 +939,8 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
>  }
>  
>  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> -			     struct vfio_iommu_type1_dma_unmap *unmap)
> +			     struct vfio_iommu_type1_dma_unmap *unmap,
> +			     struct vfio_bitmap *bitmap)
>  {
>  	uint64_t mask;
>  	struct vfio_dma *dma, *dma_last = NULL;
> @@ -990,6 +991,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  	 * will be returned if these conditions are not met.  The v2 interface
>  	 * will only return success and a size of zero if there were no
>  	 * mappings within the range.
> +	 *
> +	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
> +	 * must be for single mapping. Multiple mappings with this flag set is
> +	 * not supported.
>  	 */
>  	if (iommu->v2) {
>  		dma = vfio_find_dma(iommu, unmap->iova, 1);
> @@ -997,6 +1002,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  			ret = -EINVAL;
>  			goto unlock;
>  		}
> +
> +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> +		    (dma->iova != unmap->iova || dma->size != unmap->size)) {
dma is probably NULL here!

And this restriction on UNMAP would make some UNMAP operations of vIOMMU
fail.

e.g. below condition indeed happens in reality.
an UNMAP ioctl comes for IOVA range from 0xff800000, of size 0x200000
However, IOVAs in this range are mapped page by page.i.e., dma->size is 0x1000.

Previous, this UNMAP ioctl could unmap successfully as a whole.

Thanks
Yan

> +			ret = -EINVAL;
> +			goto unlock;
> +		}
> +
>  		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
>  		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
>  			ret = -EINVAL;
> @@ -1014,6 +1026,12 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  		if (dma->task->mm != current->mm)
>  			break;
>  
> +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> +		     iommu->dirty_page_tracking)
> +			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
> +					bitmap->pgsize,
> +					(unsigned char __user *) bitmap->data);
> +
>  		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
>  			struct vfio_iommu_type1_dma_unmap nb_unmap;
>  
> @@ -2369,17 +2387,46 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>  
>  	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
>  		struct vfio_iommu_type1_dma_unmap unmap;
> -		long ret;
> +		struct vfio_bitmap bitmap = { 0 };
> +		int ret;
>  
>  		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
>  
>  		if (copy_from_user(&unmap, (void __user *)arg, minsz))
>  			return -EFAULT;
>  
> -		if (unmap.argsz < minsz || unmap.flags)
> +		if (unmap.argsz < minsz ||
> +		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
>  			return -EINVAL;
>  
> -		ret = vfio_dma_do_unmap(iommu, &unmap);
> +		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
> +			unsigned long pgshift;
> +			uint64_t iommu_pgsize =
> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			if (unmap.argsz < (minsz + sizeof(bitmap)))
> +				return -EINVAL;
> +
> +			if (copy_from_user(&bitmap,
> +					   (void __user *)(arg + minsz),
> +					   sizeof(bitmap)))
> +				return -EFAULT;
> +
> +			/* allow only min supported pgsize */
> +			if (bitmap.pgsize != iommu_pgsize)
> +				return -EINVAL;
> +			if (!access_ok((void __user *)bitmap.data, bitmap.size))
> +				return -EINVAL;
> +
> +			pgshift = __ffs(bitmap.pgsize);
> +			ret = verify_bitmap_size(unmap.size >> pgshift,
> +						 bitmap.size);
> +			if (ret)
> +				return ret;
> +
> +		}
> +
> +		ret = vfio_dma_do_unmap(iommu, &unmap, &bitmap);
>  		if (ret)
>  			return ret;
>  
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 043e9eafb255..a704e5380f04 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -1010,12 +1010,23 @@ struct vfio_bitmap {
>   * field.  No guarantee is made to the user that arbitrary unmaps of iova
>   * or size different from those used in the original mapping call will
>   * succeed.
> + * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap
> + * before unmapping IO virtual addresses. When this flag is set, user must
> + * provide data[] as structure vfio_bitmap. User must allocate memory to get
> + * bitmap, clear the bitmap memory by setting zero and must set size of
> + * allocated memory in vfio_bitmap.size field. One bit in bitmap
> + * represents per page, page of user provided page size in 'pgsize',
> + * consecutively starting from iova offset. Bit set indicates page at that
> + * offset from iova is dirty. Bitmap of pages in the range of unmapped size is
> + * returned in vfio_bitmap.data
>   */
>  struct vfio_iommu_type1_dma_unmap {
>  	__u32	argsz;
>  	__u32	flags;
> +#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
>  	__u64	iova;				/* IO virtual address */
>  	__u64	size;				/* Size of mapping (bytes) */
> +	__u8    data[];
>  };
>  
>  #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
> -- 
> 2.7.0
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-20  8:35   ` Yan Zhao
@ 2020-03-20 15:40     ` Alex Williamson
  2020-03-20 15:47       ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-20 15:40 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Fri, 20 Mar 2020 04:35:29 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:

> On Thu, Mar 19, 2020 at 03:41:12AM +0800, Kirti Wankhede wrote:
> > DMA mapped pages, including those pinned by mdev vendor drivers, might
> > get unpinned and unmapped while migration is active and device is still
> > running. For example, in pre-copy phase while guest driver could access
> > those pages, host device or vendor driver can dirty these mapped pages.
> > Such pages should be marked dirty so as to maintain memory consistency
> > for a user making use of dirty page tracking.
> > 
> > To get bitmap during unmap, user should set flag
> > VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
> > zeroed by user space application. Bitmap size and page size should be set
> > by user application.
> > 
> > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > ---
> >  drivers/vfio/vfio_iommu_type1.c | 55 ++++++++++++++++++++++++++++++++++++++---
> >  include/uapi/linux/vfio.h       | 11 +++++++++
> >  2 files changed, 62 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > index d6417fb02174..aa1ac30f7854 100644
> > --- a/drivers/vfio/vfio_iommu_type1.c
> > +++ b/drivers/vfio/vfio_iommu_type1.c
> > @@ -939,7 +939,8 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> >  }
> >  
> >  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > -			     struct vfio_iommu_type1_dma_unmap *unmap)
> > +			     struct vfio_iommu_type1_dma_unmap *unmap,
> > +			     struct vfio_bitmap *bitmap)
> >  {
> >  	uint64_t mask;
> >  	struct vfio_dma *dma, *dma_last = NULL;
> > @@ -990,6 +991,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >  	 * will be returned if these conditions are not met.  The v2 interface
> >  	 * will only return success and a size of zero if there were no
> >  	 * mappings within the range.
> > +	 *
> > +	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
> > +	 * must be for single mapping. Multiple mappings with this flag set is
> > +	 * not supported.
> >  	 */
> >  	if (iommu->v2) {
> >  		dma = vfio_find_dma(iommu, unmap->iova, 1);
> > @@ -997,6 +1002,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >  			ret = -EINVAL;
> >  			goto unlock;
> >  		}
> > +
> > +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> > +		    (dma->iova != unmap->iova || dma->size != unmap->size)) {  
> dma is probably NULL here!

Yep, I didn't look closely enough there.  This is situated right
between the check to make sure we're not bisecting a mapping at the
start of the unmap and the check to make sure we're not bisecting a
mapping at the end of the unmap.  There's no guarantee that we have a
valid pointer here.  The test should be in the while() loop below this
code.

> And this restriction on UNMAP would make some UNMAP operations of vIOMMU
> fail.
> 
> e.g. below condition indeed happens in reality.
> an UNMAP ioctl comes for IOVA range from 0xff800000, of size 0x200000
> However, IOVAs in this range are mapped page by page.i.e., dma->size is 0x1000.
> 
> Previous, this UNMAP ioctl could unmap successfully as a whole.

What triggers this in the guest?  Note that it's only when using the
GET_DIRTY_BITMAP flag that this is restricted.  Does the event you're
referring to potentially occur under normal circumstances in that mode?
Thanks,

Alex


> > +			ret = -EINVAL;
> > +			goto unlock;
> > +		}
> > +
> >  		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
> >  		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
> >  			ret = -EINVAL;
> > @@ -1014,6 +1026,12 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >  		if (dma->task->mm != current->mm)
> >  			break;
> >  
> > +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> > +		     iommu->dirty_page_tracking)
> > +			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
> > +					bitmap->pgsize,
> > +					(unsigned char __user *) bitmap->data);
> > +
> >  		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
> >  			struct vfio_iommu_type1_dma_unmap nb_unmap;
> >  
> > @@ -2369,17 +2387,46 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> >  
> >  	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
> >  		struct vfio_iommu_type1_dma_unmap unmap;
> > -		long ret;
> > +		struct vfio_bitmap bitmap = { 0 };
> > +		int ret;
> >  
> >  		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
> >  
> >  		if (copy_from_user(&unmap, (void __user *)arg, minsz))
> >  			return -EFAULT;
> >  
> > -		if (unmap.argsz < minsz || unmap.flags)
> > +		if (unmap.argsz < minsz ||
> > +		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
> >  			return -EINVAL;
> >  
> > -		ret = vfio_dma_do_unmap(iommu, &unmap);
> > +		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
> > +			unsigned long pgshift;
> > +			uint64_t iommu_pgsize =
> > +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > +
> > +			if (unmap.argsz < (minsz + sizeof(bitmap)))
> > +				return -EINVAL;
> > +
> > +			if (copy_from_user(&bitmap,
> > +					   (void __user *)(arg + minsz),
> > +					   sizeof(bitmap)))
> > +				return -EFAULT;
> > +
> > +			/* allow only min supported pgsize */
> > +			if (bitmap.pgsize != iommu_pgsize)
> > +				return -EINVAL;
> > +			if (!access_ok((void __user *)bitmap.data, bitmap.size))
> > +				return -EINVAL;
> > +
> > +			pgshift = __ffs(bitmap.pgsize);
> > +			ret = verify_bitmap_size(unmap.size >> pgshift,
> > +						 bitmap.size);
> > +			if (ret)
> > +				return ret;
> > +
> > +		}
> > +
> > +		ret = vfio_dma_do_unmap(iommu, &unmap, &bitmap);
> >  		if (ret)
> >  			return ret;
> >  
> > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > index 043e9eafb255..a704e5380f04 100644
> > --- a/include/uapi/linux/vfio.h
> > +++ b/include/uapi/linux/vfio.h
> > @@ -1010,12 +1010,23 @@ struct vfio_bitmap {
> >   * field.  No guarantee is made to the user that arbitrary unmaps of iova
> >   * or size different from those used in the original mapping call will
> >   * succeed.
> > + * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap
> > + * before unmapping IO virtual addresses. When this flag is set, user must
> > + * provide data[] as structure vfio_bitmap. User must allocate memory to get
> > + * bitmap, clear the bitmap memory by setting zero and must set size of
> > + * allocated memory in vfio_bitmap.size field. One bit in bitmap
> > + * represents per page, page of user provided page size in 'pgsize',
> > + * consecutively starting from iova offset. Bit set indicates page at that
> > + * offset from iova is dirty. Bitmap of pages in the range of unmapped size is
> > + * returned in vfio_bitmap.data
> >   */
> >  struct vfio_iommu_type1_dma_unmap {
> >  	__u32	argsz;
> >  	__u32	flags;
> > +#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
> >  	__u64	iova;				/* IO virtual address */
> >  	__u64	size;				/* Size of mapping (bytes) */
> > +	__u8    data[];
> >  };
> >  
> >  #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
> > -- 
> > 2.7.0
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-20 15:40     ` Alex Williamson
@ 2020-03-20 15:47       ` Alex Williamson
  2020-03-20 19:14         ` Kirti Wankhede
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-20 15:47 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Fri, 20 Mar 2020 09:40:39 -0600
Alex Williamson <alex.williamson@redhat.com> wrote:

> On Fri, 20 Mar 2020 04:35:29 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> > On Thu, Mar 19, 2020 at 03:41:12AM +0800, Kirti Wankhede wrote:  
> > > DMA mapped pages, including those pinned by mdev vendor drivers, might
> > > get unpinned and unmapped while migration is active and device is still
> > > running. For example, in pre-copy phase while guest driver could access
> > > those pages, host device or vendor driver can dirty these mapped pages.
> > > Such pages should be marked dirty so as to maintain memory consistency
> > > for a user making use of dirty page tracking.
> > > 
> > > To get bitmap during unmap, user should set flag
> > > VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
> > > zeroed by user space application. Bitmap size and page size should be set
> > > by user application.
> > > 
> > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > > ---
> > >  drivers/vfio/vfio_iommu_type1.c | 55 ++++++++++++++++++++++++++++++++++++++---
> > >  include/uapi/linux/vfio.h       | 11 +++++++++
> > >  2 files changed, 62 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > index d6417fb02174..aa1ac30f7854 100644
> > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > @@ -939,7 +939,8 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> > >  }
> > >  
> > >  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > > -			     struct vfio_iommu_type1_dma_unmap *unmap)
> > > +			     struct vfio_iommu_type1_dma_unmap *unmap,
> > > +			     struct vfio_bitmap *bitmap)
> > >  {
> > >  	uint64_t mask;
> > >  	struct vfio_dma *dma, *dma_last = NULL;
> > > @@ -990,6 +991,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > >  	 * will be returned if these conditions are not met.  The v2 interface
> > >  	 * will only return success and a size of zero if there were no
> > >  	 * mappings within the range.
> > > +	 *
> > > +	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
> > > +	 * must be for single mapping. Multiple mappings with this flag set is
> > > +	 * not supported.
> > >  	 */
> > >  	if (iommu->v2) {
> > >  		dma = vfio_find_dma(iommu, unmap->iova, 1);
> > > @@ -997,6 +1002,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > >  			ret = -EINVAL;
> > >  			goto unlock;
> > >  		}
> > > +
> > > +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> > > +		    (dma->iova != unmap->iova || dma->size != unmap->size)) {    
> > dma is probably NULL here!  
> 
> Yep, I didn't look closely enough there.  This is situated right
> between the check to make sure we're not bisecting a mapping at the
> start of the unmap and the check to make sure we're not bisecting a
> mapping at the end of the unmap.  There's no guarantee that we have a
> valid pointer here.  The test should be in the while() loop below this
> code.

Actually the test could remain here, we can exit here if we can't find
a dma at the start of the unmap range with the GET_DIRTY_BITMAP flag,
but we absolutely cannot deref dma without testing it.

> > And this restriction on UNMAP would make some UNMAP operations of vIOMMU
> > fail.
> > 
> > e.g. below condition indeed happens in reality.
> > an UNMAP ioctl comes for IOVA range from 0xff800000, of size 0x200000
> > However, IOVAs in this range are mapped page by page.i.e., dma->size is 0x1000.
> > 
> > Previous, this UNMAP ioctl could unmap successfully as a whole.  
> 
> What triggers this in the guest?  Note that it's only when using the
> GET_DIRTY_BITMAP flag that this is restricted.  Does the event you're
> referring to potentially occur under normal circumstances in that mode?
> Thanks,
> 
> Alex
> 
> 
> > > +			ret = -EINVAL;
> > > +			goto unlock;
> > > +		}
> > > +
> > >  		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
> > >  		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
> > >  			ret = -EINVAL;
> > > @@ -1014,6 +1026,12 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > >  		if (dma->task->mm != current->mm)
> > >  			break;
> > >  
> > > +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> > > +		     iommu->dirty_page_tracking)
> > > +			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
> > > +					bitmap->pgsize,
> > > +					(unsigned char __user *) bitmap->data);
> > > +
> > >  		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
> > >  			struct vfio_iommu_type1_dma_unmap nb_unmap;
> > >  
> > > @@ -2369,17 +2387,46 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> > >  
> > >  	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
> > >  		struct vfio_iommu_type1_dma_unmap unmap;
> > > -		long ret;
> > > +		struct vfio_bitmap bitmap = { 0 };
> > > +		int ret;
> > >  
> > >  		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
> > >  
> > >  		if (copy_from_user(&unmap, (void __user *)arg, minsz))
> > >  			return -EFAULT;
> > >  
> > > -		if (unmap.argsz < minsz || unmap.flags)
> > > +		if (unmap.argsz < minsz ||
> > > +		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
> > >  			return -EINVAL;
> > >  
> > > -		ret = vfio_dma_do_unmap(iommu, &unmap);
> > > +		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
> > > +			unsigned long pgshift;
> > > +			uint64_t iommu_pgsize =
> > > +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > > +
> > > +			if (unmap.argsz < (minsz + sizeof(bitmap)))
> > > +				return -EINVAL;
> > > +
> > > +			if (copy_from_user(&bitmap,
> > > +					   (void __user *)(arg + minsz),
> > > +					   sizeof(bitmap)))
> > > +				return -EFAULT;
> > > +
> > > +			/* allow only min supported pgsize */
> > > +			if (bitmap.pgsize != iommu_pgsize)
> > > +				return -EINVAL;
> > > +			if (!access_ok((void __user *)bitmap.data, bitmap.size))
> > > +				return -EINVAL;
> > > +
> > > +			pgshift = __ffs(bitmap.pgsize);
> > > +			ret = verify_bitmap_size(unmap.size >> pgshift,
> > > +						 bitmap.size);
> > > +			if (ret)
> > > +				return ret;
> > > +
> > > +		}
> > > +
> > > +		ret = vfio_dma_do_unmap(iommu, &unmap, &bitmap);
> > >  		if (ret)
> > >  			return ret;
> > >  
> > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > index 043e9eafb255..a704e5380f04 100644
> > > --- a/include/uapi/linux/vfio.h
> > > +++ b/include/uapi/linux/vfio.h
> > > @@ -1010,12 +1010,23 @@ struct vfio_bitmap {
> > >   * field.  No guarantee is made to the user that arbitrary unmaps of iova
> > >   * or size different from those used in the original mapping call will
> > >   * succeed.
> > > + * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap
> > > + * before unmapping IO virtual addresses. When this flag is set, user must
> > > + * provide data[] as structure vfio_bitmap. User must allocate memory to get
> > > + * bitmap, clear the bitmap memory by setting zero and must set size of
> > > + * allocated memory in vfio_bitmap.size field. One bit in bitmap
> > > + * represents per page, page of user provided page size in 'pgsize',
> > > + * consecutively starting from iova offset. Bit set indicates page at that
> > > + * offset from iova is dirty. Bitmap of pages in the range of unmapped size is
> > > + * returned in vfio_bitmap.data
> > >   */
> > >  struct vfio_iommu_type1_dma_unmap {
> > >  	__u32	argsz;
> > >  	__u32	flags;
> > > +#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
> > >  	__u64	iova;				/* IO virtual address */
> > >  	__u64	size;				/* Size of mapping (bytes) */
> > > +	__u8    data[];
> > >  };
> > >  
> > >  #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
> > > -- 
> > > 2.7.0
> > >     
> >   
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-20 15:47       ` Alex Williamson
@ 2020-03-20 19:14         ` Kirti Wankhede
  2020-03-20 19:28           ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-20 19:14 UTC (permalink / raw)
  To: Alex Williamson, Yan Zhao
  Cc: cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng, Liu, Yi L,
	mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, Wang,
	Zhi A, qemu-devel, kvm



On 3/20/2020 9:17 PM, Alex Williamson wrote:
> On Fri, 20 Mar 2020 09:40:39 -0600
> Alex Williamson <alex.williamson@redhat.com> wrote:
> 
>> On Fri, 20 Mar 2020 04:35:29 -0400
>> Yan Zhao <yan.y.zhao@intel.com> wrote:
>>
>>> On Thu, Mar 19, 2020 at 03:41:12AM +0800, Kirti Wankhede wrote:
>>>> DMA mapped pages, including those pinned by mdev vendor drivers, might
>>>> get unpinned and unmapped while migration is active and device is still
>>>> running. For example, in pre-copy phase while guest driver could access
>>>> those pages, host device or vendor driver can dirty these mapped pages.
>>>> Such pages should be marked dirty so as to maintain memory consistency
>>>> for a user making use of dirty page tracking.
>>>>
>>>> To get bitmap during unmap, user should set flag
>>>> VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
>>>> zeroed by user space application. Bitmap size and page size should be set
>>>> by user application.
>>>>
>>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>>>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>>>> ---
>>>>   drivers/vfio/vfio_iommu_type1.c | 55 ++++++++++++++++++++++++++++++++++++++---
>>>>   include/uapi/linux/vfio.h       | 11 +++++++++
>>>>   2 files changed, 62 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>>> index d6417fb02174..aa1ac30f7854 100644
>>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>>> @@ -939,7 +939,8 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
>>>>   }
>>>>   
>>>>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>>> -			     struct vfio_iommu_type1_dma_unmap *unmap)
>>>> +			     struct vfio_iommu_type1_dma_unmap *unmap,
>>>> +			     struct vfio_bitmap *bitmap)
>>>>   {
>>>>   	uint64_t mask;
>>>>   	struct vfio_dma *dma, *dma_last = NULL;
>>>> @@ -990,6 +991,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>>>   	 * will be returned if these conditions are not met.  The v2 interface
>>>>   	 * will only return success and a size of zero if there were no
>>>>   	 * mappings within the range.
>>>> +	 *
>>>> +	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
>>>> +	 * must be for single mapping. Multiple mappings with this flag set is
>>>> +	 * not supported.
>>>>   	 */
>>>>   	if (iommu->v2) {
>>>>   		dma = vfio_find_dma(iommu, unmap->iova, 1);
>>>> @@ -997,6 +1002,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>>>   			ret = -EINVAL;
>>>>   			goto unlock;
>>>>   		}
>>>> +
>>>> +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
>>>> +		    (dma->iova != unmap->iova || dma->size != unmap->size)) {
>>> dma is probably NULL here!
>>
>> Yep, I didn't look closely enough there.  This is situated right
>> between the check to make sure we're not bisecting a mapping at the
>> start of the unmap and the check to make sure we're not bisecting a
>> mapping at the end of the unmap.  There's no guarantee that we have a
>> valid pointer here.  The test should be in the while() loop below this
>> code.
> 
> Actually the test could remain here, we can exit here if we can't find
> a dma at the start of the unmap range with the GET_DIRTY_BITMAP flag,
> but we absolutely cannot deref dma without testing it.
> 

In the check above newly added check, if dma is NULL then its an error 
condition, because Unmap requests must fully cover previous mappings, right?

>>> And this restriction on UNMAP would make some UNMAP operations of vIOMMU
>>> fail.
>>>
>>> e.g. below condition indeed happens in reality.
>>> an UNMAP ioctl comes for IOVA range from 0xff800000, of size 0x200000
>>> However, IOVAs in this range are mapped page by page.i.e., dma->size is 0x1000.
>>>
>>> Previous, this UNMAP ioctl could unmap successfully as a whole.
>>
>> What triggers this in the guest?  Note that it's only when using the
>> GET_DIRTY_BITMAP flag that this is restricted.  Does the event you're
>> referring to potentially occur under normal circumstances in that mode?
>> Thanks,
>>

Such unmap would callback vfio_iommu_map_notify() in QEMU. In 
vfio_iommu_map_notify(), unmap is called on same range <iova, 
iotlb->addr_mask + 1> which was used for map. Secondly unmap with bitmap 
will be called only when device state has _SAVING flag set.

Thanks,
Kirti

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-20 19:14         ` Kirti Wankhede
@ 2020-03-20 19:28           ` Alex Williamson
  2020-03-23  1:10             ` Yan Zhao
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-20 19:28 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: Yan Zhao, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng, Liu,
	Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Sat, 21 Mar 2020 00:44:32 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 3/20/2020 9:17 PM, Alex Williamson wrote:
> > On Fri, 20 Mar 2020 09:40:39 -0600
> > Alex Williamson <alex.williamson@redhat.com> wrote:
> >   
> >> On Fri, 20 Mar 2020 04:35:29 -0400
> >> Yan Zhao <yan.y.zhao@intel.com> wrote:
> >>  
> >>> On Thu, Mar 19, 2020 at 03:41:12AM +0800, Kirti Wankhede wrote:  
> >>>> DMA mapped pages, including those pinned by mdev vendor drivers, might
> >>>> get unpinned and unmapped while migration is active and device is still
> >>>> running. For example, in pre-copy phase while guest driver could access
> >>>> those pages, host device or vendor driver can dirty these mapped pages.
> >>>> Such pages should be marked dirty so as to maintain memory consistency
> >>>> for a user making use of dirty page tracking.
> >>>>
> >>>> To get bitmap during unmap, user should set flag
> >>>> VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
> >>>> zeroed by user space application. Bitmap size and page size should be set
> >>>> by user application.
> >>>>
> >>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> >>>> Reviewed-by: Neo Jia <cjia@nvidia.com>
> >>>> ---
> >>>>   drivers/vfio/vfio_iommu_type1.c | 55 ++++++++++++++++++++++++++++++++++++++---
> >>>>   include/uapi/linux/vfio.h       | 11 +++++++++
> >>>>   2 files changed, 62 insertions(+), 4 deletions(-)
> >>>>
> >>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >>>> index d6417fb02174..aa1ac30f7854 100644
> >>>> --- a/drivers/vfio/vfio_iommu_type1.c
> >>>> +++ b/drivers/vfio/vfio_iommu_type1.c
> >>>> @@ -939,7 +939,8 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> >>>>   }
> >>>>   
> >>>>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >>>> -			     struct vfio_iommu_type1_dma_unmap *unmap)
> >>>> +			     struct vfio_iommu_type1_dma_unmap *unmap,
> >>>> +			     struct vfio_bitmap *bitmap)
> >>>>   {
> >>>>   	uint64_t mask;
> >>>>   	struct vfio_dma *dma, *dma_last = NULL;
> >>>> @@ -990,6 +991,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >>>>   	 * will be returned if these conditions are not met.  The v2 interface
> >>>>   	 * will only return success and a size of zero if there were no
> >>>>   	 * mappings within the range.
> >>>> +	 *
> >>>> +	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
> >>>> +	 * must be for single mapping. Multiple mappings with this flag set is
> >>>> +	 * not supported.
> >>>>   	 */
> >>>>   	if (iommu->v2) {
> >>>>   		dma = vfio_find_dma(iommu, unmap->iova, 1);
> >>>> @@ -997,6 +1002,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >>>>   			ret = -EINVAL;
> >>>>   			goto unlock;
> >>>>   		}
> >>>> +
> >>>> +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> >>>> +		    (dma->iova != unmap->iova || dma->size != unmap->size)) {  
> >>> dma is probably NULL here!  
> >>
> >> Yep, I didn't look closely enough there.  This is situated right
> >> between the check to make sure we're not bisecting a mapping at the
> >> start of the unmap and the check to make sure we're not bisecting a
> >> mapping at the end of the unmap.  There's no guarantee that we have a
> >> valid pointer here.  The test should be in the while() loop below this
> >> code.  
> > 
> > Actually the test could remain here, we can exit here if we can't find
> > a dma at the start of the unmap range with the GET_DIRTY_BITMAP flag,
> > but we absolutely cannot deref dma without testing it.
> >   
> 
> In the check above newly added check, if dma is NULL then its an error 
> condition, because Unmap requests must fully cover previous mappings, right?

Yes, but we'll do a null pointer deref before we return error.
 
> >>> And this restriction on UNMAP would make some UNMAP operations of vIOMMU
> >>> fail.
> >>>
> >>> e.g. below condition indeed happens in reality.
> >>> an UNMAP ioctl comes for IOVA range from 0xff800000, of size 0x200000
> >>> However, IOVAs in this range are mapped page by page.i.e., dma->size is 0x1000.
> >>>
> >>> Previous, this UNMAP ioctl could unmap successfully as a whole.  
> >>
> >> What triggers this in the guest?  Note that it's only when using the
> >> GET_DIRTY_BITMAP flag that this is restricted.  Does the event you're
> >> referring to potentially occur under normal circumstances in that mode?
> >> Thanks,
> >>  
> 
> Such unmap would callback vfio_iommu_map_notify() in QEMU. In 
> vfio_iommu_map_notify(), unmap is called on same range <iova, 
> iotlb->addr_mask + 1> which was used for map. Secondly unmap with bitmap 
> will be called only when device state has _SAVING flag set.

It might be helpful for Yan, and everyone else, to see the latest QEMU
patch series.  Thanks,

Alex


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-19  6:24   ` Yan Zhao
@ 2020-03-20 19:41     ` Alex Williamson
  2020-03-23  2:43       ` Yan Zhao
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-03-20 19:41 UTC (permalink / raw)
  To: Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Thu, 19 Mar 2020 02:24:33 -0400
Yan Zhao <yan.y.zhao@intel.com> wrote:
> On Thu, Mar 19, 2020 at 03:41:14AM +0800, Kirti Wankhede wrote:
> > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > index 912629320719..deec09f4b0f6 100644
> > --- a/drivers/vfio/vfio_iommu_type1.c
> > +++ b/drivers/vfio/vfio_iommu_type1.c
> > @@ -72,6 +72,7 @@ struct vfio_iommu {
> >  	bool			v2;
> >  	bool			nesting;
> >  	bool			dirty_page_tracking;
> > +	bool			pinned_page_dirty_scope;
> >  };
> >  
> >  struct vfio_domain {
> > @@ -99,6 +100,7 @@ struct vfio_group {
> >  	struct iommu_group	*iommu_group;
> >  	struct list_head	next;
> >  	bool			mdev_group;	/* An mdev group */
> > +	bool			pinned_page_dirty_scope;
> >  };
> >  
> >  struct vfio_iova {
> > @@ -132,6 +134,10 @@ struct vfio_regions {
> >  static int put_pfn(unsigned long pfn, int prot);
> >  static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> >  
> > +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> > +					       struct iommu_group *iommu_group);
> > +
> > +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
> >  /*
> >   * This code handles mapping and unmapping of user data buffers
> >   * into DMA'ble space using the IOMMU
> > @@ -556,11 +562,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> >  }
> >  
> >  static int vfio_iommu_type1_pin_pages(void *iommu_data,
> > +				      struct iommu_group *iommu_group,
> >  				      unsigned long *user_pfn,
> >  				      int npage, int prot,
> >  				      unsigned long *phys_pfn)
> >  {
> >  	struct vfio_iommu *iommu = iommu_data;
> > +	struct vfio_group *group;
> >  	int i, j, ret;
> >  	unsigned long remote_vaddr;
> >  	struct vfio_dma *dma;
> > @@ -630,8 +638,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >  				   (vpfn->iova - dma->iova) >> pgshift, 1);
> >  		}
> >  	}  
> 
> Could you provide an interface lightweight than vfio_pin_pages for pass-through
> devices? e.g. vfio_mark_iova_dirty()
> 
> Or at least allowing phys_pfn to be empty for pass-through devices.
> 
> This is really inefficient:
> bitmap_set(dma->bitmap, (vpfn->iova - dma->iova) / pgsize, 1));
> i.e.
> in order to mark an iova dirty, it has to go through iova ---> pfn --> iova
> while acquiring pfn is not necessary for pass-through devices.

I think this would be possible, but I don't think it should be gating
to this series.  We don't have such consumers yet.  Thanks,

Alex


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-20 19:28           ` Alex Williamson
@ 2020-03-23  1:10             ` Yan Zhao
  0 siblings, 0 replies; 47+ messages in thread
From: Yan Zhao @ 2020-03-23  1:10 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Sat, Mar 21, 2020 at 03:28:21AM +0800, Alex Williamson wrote:
> On Sat, 21 Mar 2020 00:44:32 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
> > On 3/20/2020 9:17 PM, Alex Williamson wrote:
> > > On Fri, 20 Mar 2020 09:40:39 -0600
> > > Alex Williamson <alex.williamson@redhat.com> wrote:
> > >   
> > >> On Fri, 20 Mar 2020 04:35:29 -0400
> > >> Yan Zhao <yan.y.zhao@intel.com> wrote:
> > >>  
> > >>> On Thu, Mar 19, 2020 at 03:41:12AM +0800, Kirti Wankhede wrote:  
> > >>>> DMA mapped pages, including those pinned by mdev vendor drivers, might
> > >>>> get unpinned and unmapped while migration is active and device is still
> > >>>> running. For example, in pre-copy phase while guest driver could access
> > >>>> those pages, host device or vendor driver can dirty these mapped pages.
> > >>>> Such pages should be marked dirty so as to maintain memory consistency
> > >>>> for a user making use of dirty page tracking.
> > >>>>
> > >>>> To get bitmap during unmap, user should set flag
> > >>>> VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
> > >>>> zeroed by user space application. Bitmap size and page size should be set
> > >>>> by user application.
> > >>>>
> > >>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > >>>> Reviewed-by: Neo Jia <cjia@nvidia.com>
> > >>>> ---
> > >>>>   drivers/vfio/vfio_iommu_type1.c | 55 ++++++++++++++++++++++++++++++++++++++---
> > >>>>   include/uapi/linux/vfio.h       | 11 +++++++++
> > >>>>   2 files changed, 62 insertions(+), 4 deletions(-)
> > >>>>
> > >>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > >>>> index d6417fb02174..aa1ac30f7854 100644
> > >>>> --- a/drivers/vfio/vfio_iommu_type1.c
> > >>>> +++ b/drivers/vfio/vfio_iommu_type1.c
> > >>>> @@ -939,7 +939,8 @@ static int verify_bitmap_size(uint64_t npages, uint64_t bitmap_size)
> > >>>>   }
> > >>>>   
> > >>>>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > >>>> -			     struct vfio_iommu_type1_dma_unmap *unmap)
> > >>>> +			     struct vfio_iommu_type1_dma_unmap *unmap,
> > >>>> +			     struct vfio_bitmap *bitmap)
> > >>>>   {
> > >>>>   	uint64_t mask;
> > >>>>   	struct vfio_dma *dma, *dma_last = NULL;
> > >>>> @@ -990,6 +991,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > >>>>   	 * will be returned if these conditions are not met.  The v2 interface
> > >>>>   	 * will only return success and a size of zero if there were no
> > >>>>   	 * mappings within the range.
> > >>>> +	 *
> > >>>> +	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
> > >>>> +	 * must be for single mapping. Multiple mappings with this flag set is
> > >>>> +	 * not supported.
> > >>>>   	 */
> > >>>>   	if (iommu->v2) {
> > >>>>   		dma = vfio_find_dma(iommu, unmap->iova, 1);
> > >>>> @@ -997,6 +1002,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> > >>>>   			ret = -EINVAL;
> > >>>>   			goto unlock;
> > >>>>   		}
> > >>>> +
> > >>>> +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> > >>>> +		    (dma->iova != unmap->iova || dma->size != unmap->size)) {  
> > >>> dma is probably NULL here!  
> > >>
> > >> Yep, I didn't look closely enough there.  This is situated right
> > >> between the check to make sure we're not bisecting a mapping at the
> > >> start of the unmap and the check to make sure we're not bisecting a
> > >> mapping at the end of the unmap.  There's no guarantee that we have a
> > >> valid pointer here.  The test should be in the while() loop below this
> > >> code.  
> > > 
> > > Actually the test could remain here, we can exit here if we can't find
> > > a dma at the start of the unmap range with the GET_DIRTY_BITMAP flag,
> > > but we absolutely cannot deref dma without testing it.
> > >   
> > 
> > In the check above newly added check, if dma is NULL then its an error 
> > condition, because Unmap requests must fully cover previous mappings, right?
> 
> Yes, but we'll do a null pointer deref before we return error.
>  
> > >>> And this restriction on UNMAP would make some UNMAP operations of vIOMMU
> > >>> fail.
> > >>>
> > >>> e.g. below condition indeed happens in reality.
> > >>> an UNMAP ioctl comes for IOVA range from 0xff800000, of size 0x200000
> > >>> However, IOVAs in this range are mapped page by page.i.e., dma->size is 0x1000.
> > >>>
> > >>> Previous, this UNMAP ioctl could unmap successfully as a whole.  
> > >>
> > >> What triggers this in the guest?  Note that it's only when using the
> > >> GET_DIRTY_BITMAP flag that this is restricted.  Does the event you're
> > >> referring to potentially occur under normal circumstances in that mode?
> > >> Thanks,
> > >>  

it happens in vIOMMU Domain level invalidation of IOTLB
(domain-selective invalidation, see vtd_iotlb_domain_invalidate() in qemu).
common in VTD lazy mode, and NOT just happening once at boot time.
rather than invalidate page by page, it batches the page invalidation.
so, when this invalidation takes place, even higher level page tables
have been invalid and therefore it has to invalidate a bigger combined range.
That's why we see IOVAs are mapped in 4k pages, but are unmapped in 2M
pages.

I think those UNMAPs should also have GET_DIRTY_BIMTAP flag on, right?
> > 
> > Such unmap would callback vfio_iommu_map_notify() in QEMU. In 
> > vfio_iommu_map_notify(), unmap is called on same range <iova, 
> > iotlb->addr_mask + 1> which was used for map. Secondly unmap with bitmap 
> > will be called only when device state has _SAVING flag set.
> 
in this case, iotlb->addr_mask in unmap is 0x200000 -1.
different than 0x1000 -1 used for map.
> It might be helpful for Yan, and everyone else, to see the latest QEMU
> patch series.  Thanks,
>
yes, please. also curious of log_sync part for vIOMMU. given most IOVAs in
address space are unmapped and therefore no IOTLBs are able to be found.

Thanks
Yan

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-20 19:41     ` Alex Williamson
@ 2020-03-23  2:43       ` Yan Zhao
  0 siblings, 0 replies; 47+ messages in thread
From: Yan Zhao @ 2020-03-23  2:43 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

On Sat, Mar 21, 2020 at 03:41:42AM +0800, Alex Williamson wrote:
> On Thu, 19 Mar 2020 02:24:33 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> > On Thu, Mar 19, 2020 at 03:41:14AM +0800, Kirti Wankhede wrote:
> > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > index 912629320719..deec09f4b0f6 100644
> > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > @@ -72,6 +72,7 @@ struct vfio_iommu {
> > >  	bool			v2;
> > >  	bool			nesting;
> > >  	bool			dirty_page_tracking;
> > > +	bool			pinned_page_dirty_scope;
> > >  };
> > >  
> > >  struct vfio_domain {
> > > @@ -99,6 +100,7 @@ struct vfio_group {
> > >  	struct iommu_group	*iommu_group;
> > >  	struct list_head	next;
> > >  	bool			mdev_group;	/* An mdev group */
> > > +	bool			pinned_page_dirty_scope;
> > >  };
> > >  
> > >  struct vfio_iova {
> > > @@ -132,6 +134,10 @@ struct vfio_regions {
> > >  static int put_pfn(unsigned long pfn, int prot);
> > >  static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> > >  
> > > +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> > > +					       struct iommu_group *iommu_group);
> > > +
> > > +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
> > >  /*
> > >   * This code handles mapping and unmapping of user data buffers
> > >   * into DMA'ble space using the IOMMU
> > > @@ -556,11 +562,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> > >  }
> > >  
> > >  static int vfio_iommu_type1_pin_pages(void *iommu_data,
> > > +				      struct iommu_group *iommu_group,
> > >  				      unsigned long *user_pfn,
> > >  				      int npage, int prot,
> > >  				      unsigned long *phys_pfn)
> > >  {
> > >  	struct vfio_iommu *iommu = iommu_data;
> > > +	struct vfio_group *group;
> > >  	int i, j, ret;
> > >  	unsigned long remote_vaddr;
> > >  	struct vfio_dma *dma;
> > > @@ -630,8 +638,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> > >  				   (vpfn->iova - dma->iova) >> pgshift, 1);
> > >  		}
> > >  	}  
> > 
> > Could you provide an interface lightweight than vfio_pin_pages for pass-through
> > devices? e.g. vfio_mark_iova_dirty()
> > 
> > Or at least allowing phys_pfn to be empty for pass-through devices.
> > 
> > This is really inefficient:
> > bitmap_set(dma->bitmap, (vpfn->iova - dma->iova) / pgsize, 1));
> > i.e.
> > in order to mark an iova dirty, it has to go through iova ---> pfn --> iova
> > while acquiring pfn is not necessary for pass-through devices.
> 
> I think this would be possible, but I don't think it should be gating
> to this series.  We don't have such consumers yet.  Thanks,
>
ok. Reasonable.

Thanks
Yan

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-18 19:41 ` [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
  2020-03-19  1:17   ` Yan Zhao
@ 2020-03-23 11:45   ` Auger Eric
  2020-03-24 19:14     ` Kirti Wankhede
  1 sibling, 1 reply; 47+ messages in thread
From: Auger Eric @ 2020-03-23 11:45 UTC (permalink / raw)
  To: Kirti Wankhede, alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

Hi Kirti,

On 3/18/20 8:41 PM, Kirti Wankhede wrote:
> - Defined MIGRATION region type and sub-type.
> 
> - Defined vfio_device_migration_info structure which will be placed at the
>   0th offset of migration region to get/set VFIO device related
>   information. Defined members of structure and usage on read/write access.
> 
> - Defined device states and state transition details.
> 
> - Defined sequence to be followed while saving and resuming VFIO device.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 227 insertions(+)
> 
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 9e843a147ead..d0021467af53 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
>  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
>  #define VFIO_REGION_TYPE_GFX                    (1)
>  #define VFIO_REGION_TYPE_CCW			(2)
> +#define VFIO_REGION_TYPE_MIGRATION              (3)
>  
>  /* sub-types for VFIO_REGION_TYPE_PCI_* */
>  
> @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
>  /* sub-types for VFIO_REGION_TYPE_CCW */
>  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
>  
> +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
> +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
> +
> +/*
> + * The structure vfio_device_migration_info is placed at the 0th offset of
> + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
> + * migration information. Field accesses from this structure are only supported
> + * at their native width and alignment. Otherwise, the result is undefined and
> + * vendor drivers should return an error.
> + *
> + * device_state: (read/write)
> + *      - The user application writes to this field to inform the vendor driver
> + *        about the device state to be transitioned to.
> + *      - The vendor driver should take the necessary actions to change the
> + *        device state. After successful transition to a given state, the
> + *        vendor driver should return success on write(device_state, state)
> + *        system call. If the device state transition fails, the vendor driver
> + *        should return an appropriate -errno for the fault condition.
> + *      - On the user application side, if the device state transition fails,
> + *	  that is, if write(device_state, state) returns an error, read
> + *	  device_state again to determine the current state of the device from
> + *	  the vendor driver.
> + *      - The vendor driver should return previous state of the device unless
> + *        the vendor driver has encountered an internal error, in which case
> + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
> + *      - The user application must use the device reset ioctl to recover the
> + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
> + *        indicated to be in a valid device state by reading device_state, the
> + *        user application may attempt to transition the device to any valid
> + *        state reachable from the current state or terminate itself.
> + *
> + *      device_state consists of 3 bits:
> + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
> + *        it indicates the _STOP state. When the device state is changed to
> + *        _STOP, driver should stop the device before write() returns.
> + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
> + *        driver should start gathering device state information that will be
> + *        provided to the VFIO user application to save the device's state.
> + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
> + *        the driver should prepare to resume the device. Data provided through
> + *        the migration region should be used to resume the device.
> + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
> + *      application should perform a read-modify-write operation on this
> + *      field when modifying the specified bits.
> + *
> + *  +------- _RESUMING
> + *  |+------ _SAVING
> + *  ||+----- _RUNNING
> + *  |||
> + *  000b => Device Stopped, not saving or resuming
> + *  001b => Device running, which is the default state
> + *  010b => Stop the device & save the device state, stop-and-copy state
> + *  011b => Device running and save the device state, pre-copy state
> + *  100b => Device stopped and the device state is resuming
> + *  101b => Invalid state
> + *  110b => Error state
> + *  111b => Invalid state
> + *
> + * State transitions:
> + *
> + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
> + *                (100b)     (001b)     (011b)        (010b)       (000b)
> + * 0. Running or default state
> + *                             |
> + *
> + * 1. Normal Shutdown (optional)
> + *                             |------------------------------------->|
> + *
> + * 2. Save the state or suspend
> + *                             |------------------------->|---------->|
> + *
> + * 3. Save the state during live migration
> + *                             |----------->|------------>|---------->|
> + *
> + * 4. Resuming
> + *                  |<---------|
> + *
> + * 5. Resumed
> + *                  |--------->|
> + *
> + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
> + * 1. During normal shutdown of the user application, the user application may
> + *    optionally change the VFIO device state from _RUNNING to _STOP. This
> + *    transition is optional. The vendor driver must support this transition but
> + *    must not require it.
> + * 2. When the user application saves state or suspends the application, the
> + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
> + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
> + *    device, save the device state and send it to the application through the
> + *    migration region. The sequence to be followed for such transition is given
> + *    below.
> + * 3. In live migration of user application, the state transitions from _RUNNING
> + *    to pre-copy, to stop-and-copy, and to _STOP.
> + *    On state transition from _RUNNING to pre-copy, the driver should start
> + *    gathering the device state while the application is still running and send
> + *    the device state data to application through the migration region.
> + *    On state transition from pre-copy to stop-and-copy, the driver must stop
> + *    the device, save the device state and send it to the user application
> + *    through the migration region.
> + *    Vendor drivers must support the pre-copy state even for implementations
> + *    where no data is provided to the user before the stop-and-copy state. The
> + *    user must not be required to consume all migration data before the device
> + *    transitions to a new state, including the stop-and-copy state.
> + *    The sequence to be followed for above two transitions is given below.
> + * 4. To start the resuming phase, the device state should be transitioned from
> + *    the _RUNNING to the _RESUMING state.
> + *    In the _RESUMING state, the driver should use the device state data
> + *    received through the migration region to resume the device.
> + * 5. After providing saved device data to the driver, the application should
> + *    change the state from _RESUMING to _RUNNING.
> + *
> + * reserved:
> + *      Reads on this field return zero and writes are ignored.
> + *
> + * pending_bytes: (read only)
> + *      The number of pending bytes still to be migrated from the vendor driver.
> + *
> + * data_offset: (read only)
> + *      The user application should read data_offset in the migration region
> + *      from where the user application should read the device data during the
> + *      _SAVING state or write the device data during the _RESUMING state.
The sentence above is a bit complex to read and was not understandable
[to me] at first shot. Maybe something like:
offset of the saved data within the migration region. The data at this
offset gets read by the user application during the _SAVING transition
and written by this latter during the _RESUMING state.

 See
> + *      below for details of sequence to be followed.
> + *
> + * data_size: (read/write)
> + *      The user application should read data_size to get the size in bytes of
> + *      the data copied in the migration region during the _SAVING state and
> + *      write the size in bytes of the data copied in the migration region
> + *      during the _RESUMING state.
any alignment constraints on the data_size when restoring data?
when saving data, also the data_size should be properly aligned I guess.
> + *
> + * The format of the migration region is as follows:
> + *  ------------------------------------------------------------------
> + * |vfio_device_migration_info|    data section                      |
> + * |                          |     ///////////////////////////////  |
> + * ------------------------------------------------------------------
> + *   ^                              ^
> + *  offset 0-trapped part        data_offset
> + *
> + * The structure vfio_device_migration_info is always followed by the data
> + * section in the region, so data_offset will always be nonzero
maybe add, whatever the data content
. The offset
> + * from where the data is copied is decided by the kernel driver. The data
> + * section can be trapped, mapped, or partitioned, depending on how the kernel
nit: maybe use the mmap terminology everywhere
> + * driver defines the data section. The data section partition can be defined
> + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
s/should/must
> + * page aligned, whereas initial section which contains the
> + * vfio_device_migration_info structure, might not end at the offset, which is
> + * page aligned. The user is not required to access through mmap regardless
> + * of the capabilities of the region mmap.
> + * The vendor driver should determine whether and how to partition the data
> + * section. The vendor driver should return data_offset accordingly.
> + *
> + * The sequence to be followed for the _SAVING|_RUNNING device state or> + * pre-copy phase and for the _SAVING device state or stop-and-copy
phase is as
> + * follows:
Could we only talk about states mentionned in state transition drawing?
I think it would be simpler to follow.
> + * a. Read pending_bytes, indicating the start of a new iteration to get device
> + *    data. Repeated read on pending_bytes at this stage should have no side
> + *    effects.
> + *    If pending_bytes == 0, the user application should not iterate to get data
> + *    for that device.
> + *    If pending_bytes > 0, perform the following steps.
Does (!pending_bytes) really means that the pre-copy migration is over
and the user app should stop iterating? I understand the device still
runs. We may have completed the migration at some point (pending_bytes
== 0) but for some reason the device resumes some activity and updates
some new dirty bits? Or is there any auto-transition from pre-copy to
stopped?
> + * b. Read data_offset, indicating that the vendor driver should make data
> + *    available through the data section. The vendor driver should return this
> + *    read operation only after data is available from (region + data_offset)
> + *    to (region + data_offset + data_size).
> + * c. Read data_size, which is the amount of data in bytes available through
> + *    the migration region.
> + *    Read on data_offset and data_size should return the offset and size of
> + *    the current buffer if the user application reads data_offset and
> + *    data_size more than once here.
> + * d. Read data_size bytes of data from (region + data_offset) from the
> + *    migration region.
> + * e. Process the data.
> + * f. Read pending_bytes, which indicates that the data from the previous
> + *    iteration has been read. If pending_bytes > 0, go to step b.
If I understand correctly this is a way for the userapp to ack the fact
it consumed the data_size, right? So only after f) pending_bytes -=
data_size, is that correct?

Sorry I am showing late on the review and have missed lots of
discussions. Just for my curiosity was a ring buffer considered at some
point with prod and cons index?
> + *
> + * If an error occurs during the above sequence, the vendor driver can return
> + * an error code for next read() or write() operation, which will terminate the
which write operation? I think write ops are detailed after in the
resume process, right?
> + * loop. The user application should then take the next necessary action, for
> + * example, failing migration or terminating the user application.
> + *
> + * The user application can transition from the _SAVING|_RUNNING
> + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
> + * number of pending bytes. The user application should iterate in _SAVING
> + * (stop-and-copy) until pending_bytes is 0.
> + *
> + * The sequence to be followed while _RESUMING device state is as follows:
> + * While data for this device is available, repeat the following steps:
> + * a. Read data_offset from where the user application should write data.
> + * b. Write migration data starting at the migration region + data_offset for
> + *    the length determined by data_size from the migration source.
> + * c. Write data_size, which indicates to the vendor driver that data is
> + *    written in the migration region. Vendor driver should apply the
> + *    user-provided migration region data to the device resume state.
How does the userapp know when the data it wrote has been consumed by
the device?
> + *
> + * For the user application, data is opaque. The user application should write
> + * data in the same order as the data is received and the data should be of
> + * same transaction size at the source.
> + */
> +
> +struct vfio_device_migration_info {
> +	__u32 device_state;         /* VFIO device state */
> +#define VFIO_DEVICE_STATE_STOP      (0)
> +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
> +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
> +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
> +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
> +				     VFIO_DEVICE_STATE_SAVING |  \
> +				     VFIO_DEVICE_STATE_RESUMING)
> +
> +#define VFIO_DEVICE_STATE_VALID(state) \
> +	(state & VFIO_DEVICE_STATE_RESUMING ? \
> +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
> +
> +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
> +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
> +					      VFIO_DEVICE_STATE_RESUMING))
> +
> +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
> +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
> +					     VFIO_DEVICE_STATE_RESUMING)
> +
> +	__u32 reserved;
> +	__u64 pending_bytes;
> +	__u64 data_offset;
> +	__u64 data_size;
> +} __attribute__((packed));
> +
>  /*
>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
>   * which allows direct access to non-MSIX registers which happened to be within
> 
Thanks

Eric


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages
  2020-03-18 19:41 ` [PATCH v14 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages Kirti Wankhede
@ 2020-03-23 11:59   ` Auger Eric
  0 siblings, 0 replies; 47+ messages in thread
From: Auger Eric @ 2020-03-23 11:59 UTC (permalink / raw)
  To: Kirti Wankhede, alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

Hi Kirti,

On 3/18/20 8:41 PM, Kirti Wankhede wrote:
> vfio_pfn.ref_count is always updated by holding iommu->lock, using atomic
> variable is overkill.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 9fdfae1cb17a..70aeab921d0f 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -112,7 +112,7 @@ struct vfio_pfn {
>  	struct rb_node		node;
>  	dma_addr_t		iova;		/* Device address */
>  	unsigned long		pfn;		/* Host pfn */
> -	atomic_t		ref_count;
> +	unsigned int		ref_count;
>  };
>  
>  struct vfio_regions {

> @@ -233,7 +233,7 @@ static int vfio_add_to_pfn_list(struct vfio_dma *dma, dma_addr_t iova,
>  
>  	vpfn->iova = iova;
>  	vpfn->pfn = pfn;
> -	atomic_set(&vpfn->ref_count, 1);
> +	vpfn->ref_count = 1;
>  	vfio_link_pfn(dma, vpfn);
>  	return 0;
>  }
> @@ -251,7 +251,7 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
>  	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
>  
>  	if (vpfn)
> -		atomic_inc(&vpfn->ref_count);
> +		vpfn->ref_count++;
>  	return vpfn;
>  }
>  
> @@ -259,7 +259,8 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
>  {
>  	int ret = 0;
>  
> -	if (atomic_dec_and_test(&vpfn->ref_count)) {
> +	vpfn->ref_count--;
> +	if (!vpfn->ref_count) {
>  		ret = put_pfn(vpfn->pfn, dma->prot);
>  		vfio_remove_from_pfn_list(dma, vpfn);
>  	}
> 

Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-19 13:09         ` Alex Williamson
  2020-03-20  1:30           ` Yan Zhao
@ 2020-03-23 14:45           ` Auger Eric
  1 sibling, 0 replies; 47+ messages in thread
From: Auger Eric @ 2020-03-23 14:45 UTC (permalink / raw)
  To: Alex Williamson, Yan Zhao
  Cc: Kirti Wankhede, cjia, Tian, Kevin, Yang, Ziye, Liu, Changpeng,
	Liu, Yi L, mlevitsk, eskultet, cohuck, dgilbert, jonathan.davies,
	eauger, aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	Wang, Zhi A, qemu-devel, kvm

Hi,

On 3/19/20 2:09 PM, Alex Williamson wrote:
> On Thu, 19 Mar 2020 01:05:54 -0400
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
>> On Thu, Mar 19, 2020 at 11:49:26AM +0800, Alex Williamson wrote:
>>> On Wed, 18 Mar 2020 21:17:03 -0400
>>> Yan Zhao <yan.y.zhao@intel.com> wrote:
>>>   
>>>> On Thu, Mar 19, 2020 at 03:41:08AM +0800, Kirti Wankhede wrote:  
>>>>> - Defined MIGRATION region type and sub-type.
>>>>>
>>>>> - Defined vfio_device_migration_info structure which will be placed at the
>>>>>   0th offset of migration region to get/set VFIO device related
>>>>>   information. Defined members of structure and usage on read/write access.
>>>>>
>>>>> - Defined device states and state transition details.
>>>>>
>>>>> - Defined sequence to be followed while saving and resuming VFIO device.
>>>>>
>>>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>>>>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>>>>> ---
>>>>>  include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
>>>>>  1 file changed, 227 insertions(+)
>>>>>
>>>>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>>>>> index 9e843a147ead..d0021467af53 100644
>>>>> --- a/include/uapi/linux/vfio.h
>>>>> +++ b/include/uapi/linux/vfio.h
>>>>> @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
>>>>>  #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
>>>>>  #define VFIO_REGION_TYPE_GFX                    (1)
>>>>>  #define VFIO_REGION_TYPE_CCW			(2)
>>>>> +#define VFIO_REGION_TYPE_MIGRATION              (3)
>>>>>  
>>>>>  /* sub-types for VFIO_REGION_TYPE_PCI_* */
>>>>>  
>>>>> @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
>>>>>  /* sub-types for VFIO_REGION_TYPE_CCW */
>>>>>  #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
>>>>>  
>>>>> +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
>>>>> +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
>>>>> +
>>>>> +/*
>>>>> + * The structure vfio_device_migration_info is placed at the 0th offset of
>>>>> + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
>>>>> + * migration information. Field accesses from this structure are only supported
>>>>> + * at their native width and alignment. Otherwise, the result is undefined and
>>>>> + * vendor drivers should return an error.
>>>>> + *
>>>>> + * device_state: (read/write)
>>>>> + *      - The user application writes to this field to inform the vendor driver
>>>>> + *        about the device state to be transitioned to.
>>>>> + *      - The vendor driver should take the necessary actions to change the
>>>>> + *        device state. After successful transition to a given state, the
>>>>> + *        vendor driver should return success on write(device_state, state)
>>>>> + *        system call. If the device state transition fails, the vendor driver
>>>>> + *        should return an appropriate -errno for the fault condition.
>>>>> + *      - On the user application side, if the device state transition fails,
>>>>> + *	  that is, if write(device_state, state) returns an error, read
>>>>> + *	  device_state again to determine the current state of the device from
>>>>> + *	  the vendor driver.
>>>>> + *      - The vendor driver should return previous state of the device unless
>>>>> + *        the vendor driver has encountered an internal error, in which case
>>>>> + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
>>>>> + *      - The user application must use the device reset ioctl to recover the
>>>>> + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
>>>>> + *        indicated to be in a valid device state by reading device_state, the
>>>>> + *        user application may attempt to transition the device to any valid
>>>>> + *        state reachable from the current state or terminate itself.
>>>>> + *
>>>>> + *      device_state consists of 3 bits:
>>>>> + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
>>>>> + *        it indicates the _STOP state. When the device state is changed to
>>>>> + *        _STOP, driver should stop the device before write() returns.
>>>>> + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
>>>>> + *        driver should start gathering device state information that will be
>>>>> + *        provided to the VFIO user application to save the device's state.
>>>>> + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
>>>>> + *        the driver should prepare to resume the device. Data provided through
>>>>> + *        the migration region should be used to resume the device.
>>>>> + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
>>>>> + *      application should perform a read-modify-write operation on this
>>>>> + *      field when modifying the specified bits.
>>>>> + *
>>>>> + *  +------- _RESUMING
>>>>> + *  |+------ _SAVING
>>>>> + *  ||+----- _RUNNING
>>>>> + *  |||
>>>>> + *  000b => Device Stopped, not saving or resuming
>>>>> + *  001b => Device running, which is the default state
>>>>> + *  010b => Stop the device & save the device state, stop-and-copy state
>>>>> + *  011b => Device running and save the device state, pre-copy state
>>>>> + *  100b => Device stopped and the device state is resuming
>>>>> + *  101b => Invalid state
>>>>> + *  110b => Error state
>>>>> + *  111b => Invalid state
>>>>> + *
>>>>> + * State transitions:
>>>>> + *
>>>>> + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
>>>>> + *                (100b)     (001b)     (011b)        (010b)       (000b)
>>>>> + * 0. Running or default state
>>>>> + *                             |
>>>>> + *
>>>>> + * 1. Normal Shutdown (optional)
>>>>> + *                             |------------------------------------->|
>>>>> + *
>>>>> + * 2. Save the state or suspend
>>>>> + *                             |------------------------->|---------->|
>>>>> + *
>>>>> + * 3. Save the state during live migration
>>>>> + *                             |----------->|------------>|---------->|
>>>>> + *
>>>>> + * 4. Resuming
>>>>> + *                  |<---------|
>>>>> + *
>>>>> + * 5. Resumed
>>>>> + *                  |--------->|
>>>>> + *
>>>>> + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
>>>>> + * 1. During normal shutdown of the user application, the user application may
>>>>> + *    optionally change the VFIO device state from _RUNNING to _STOP. This
>>>>> + *    transition is optional. The vendor driver must support this transition but
>>>>> + *    must not require it.
>>>>> + * 2. When the user application saves state or suspends the application, the
>>>>> + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
>>>>> + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
>>>>> + *    device, save the device state and send it to the application through the
>>>>> + *    migration region. The sequence to be followed for such transition is given
>>>>> + *    below.
>>>>> + * 3. In live migration of user application, the state transitions from _RUNNING
>>>>> + *    to pre-copy, to stop-and-copy, and to _STOP.
>>>>> + *    On state transition from _RUNNING to pre-copy, the driver should start
>>>>> + *    gathering the device state while the application is still running and send
>>>>> + *    the device state data to application through the migration region.
>>>>> + *    On state transition from pre-copy to stop-and-copy, the driver must stop
>>>>> + *    the device, save the device state and send it to the user application
>>>>> + *    through the migration region.
>>>>> + *    Vendor drivers must support the pre-copy state even for implementations
>>>>> + *    where no data is provided to the user before the stop-and-copy state. The
>>>>> + *    user must not be required to consume all migration data before the device
>>>>> + *    transitions to a new state, including the stop-and-copy state.
>>>>> + *    The sequence to be followed for above two transitions is given below.
>>>>> + * 4. To start the resuming phase, the device state should be transitioned from
>>>>> + *    the _RUNNING to the _RESUMING state.
>>>>> + *    In the _RESUMING state, the driver should use the device state data
>>>>> + *    received through the migration region to resume the device.
>>>>> + * 5. After providing saved device data to the driver, the application should
>>>>> + *    change the state from _RESUMING to _RUNNING.
>>>>> + *
>>>>> + * reserved:
>>>>> + *      Reads on this field return zero and writes are ignored.
>>>>> + *
>>>>> + * pending_bytes: (read only)
>>>>> + *      The number of pending bytes still to be migrated from the vendor driver.
>>>>> + *
>>>>> + * data_offset: (read only)
>>>>> + *      The user application should read data_offset in the migration region
>>>>> + *      from where the user application should read the device data during the
>>>>> + *      _SAVING state or write the device data during the _RESUMING state. See
>>>>> + *      below for details of sequence to be followed.
>>>>> + *
>>>>> + * data_size: (read/write)
>>>>> + *      The user application should read data_size to get the size in bytes of
>>>>> + *      the data copied in the migration region during the _SAVING state and
>>>>> + *      write the size in bytes of the data copied in the migration region
>>>>> + *      during the _RESUMING state.
>>>>> + *
>>>>> + * The format of the migration region is as follows:
>>>>> + *  ------------------------------------------------------------------
>>>>> + * |vfio_device_migration_info|    data section                      |
>>>>> + * |                          |     ///////////////////////////////  |
>>>>> + * ------------------------------------------------------------------
>>>>> + *   ^                              ^
>>>>> + *  offset 0-trapped part        data_offset
>>>>> + *
>>>>> + * The structure vfio_device_migration_info is always followed by the data
>>>>> + * section in the region, so data_offset will always be nonzero. The offset
>>>>> + * from where the data is copied is decided by the kernel driver. The data
>>>>> + * section can be trapped, mapped, or partitioned, depending on how the kernel
>>>>> + * driver defines the data section. The data section partition can be defined
>>>>> + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
>>>>> + * page aligned, whereas initial section which contains the
>>>>> + * vfio_device_migration_info structure, might not end at the offset, which is
>>>>> + * page aligned. The user is not required to access through mmap regardless
>>>>> + * of the capabilities of the region mmap.
>>>>> + * The vendor driver should determine whether and how to partition the data
>>>>> + * section. The vendor driver should return data_offset accordingly.
>>>>> + *
>>>>> + * The sequence to be followed for the _SAVING|_RUNNING device state or
>>>>> + * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
>>>>> + * follows:
>>>>> + * a. Read pending_bytes, indicating the start of a new iteration to get device
>>>>> + *    data. Repeated read on pending_bytes at this stage should have no side
>>>>> + *    effects.
>>>>> + *    If pending_bytes == 0, the user application should not iterate to get data
>>>>> + *    for that device.
>>>>> + *    If pending_bytes > 0, perform the following steps.
>>>>> + * b. Read data_offset, indicating that the vendor driver should make data
>>>>> + *    available through the data section. The vendor driver should return this
>>>>> + *    read operation only after data is available from (region + data_offset)
>>>>> + *    to (region + data_offset + data_size).
>>>>> + * c. Read data_size, which is the amount of data in bytes available through
>>>>> + *    the migration region.
>>>>> + *    Read on data_offset and data_size should return the offset and size of
>>>>> + *    the current buffer if the user application reads data_offset and
>>>>> + *    data_size more than once here.    
>>>> If data region is mmaped, merely reading data_offset and data_size
>>>> cannot let kernel know what are correct values to return.
>>>> Consider to add a read operation which is trapped into kernel to let
>>>> kernel exactly know it needs to move to the next offset and update data_size
>>>> ?  
>>>
>>> Both operations b. and c. above are to trapped registers, operation d.
>>> below may potentially be to an mmap'd area, which is why we have step
>>> f. which indicates to the vendor driver that the data has been
>>> consumed.  Does that address your concern?  Thanks,
>>>  
>> No. :)
>> the problem is about semantics of data_offset, data_size, and
>> pending_bytes.
>> b and c do not tell kernel that the data is read by user.
>> so, without knowing step d happen, kernel cannot update pending_bytes to
>> be returned in step f.
> 
> Sorry, I'm still not understanding, I see step f. as the indicator
> you're looking for.  The user reads pending_bytes to indicate the data
> in the migration area has been consumed.  The vendor driver updates its
> internal state on that read and returns the updated value for
> pending_bytes.  Thanks,

That's my understanding too. f) tells the data was consumed.

Thanks

Eric
> 
> Alex
>  
>>>>> + * d. Read data_size bytes of data from (region + data_offset) from the
>>>>> + *    migration region.
>>>>> + * e. Process the data.
>>>>> + * f. Read pending_bytes, which indicates that the data from the previous
>>>>> + *    iteration has been read. If pending_bytes > 0, go to step b.
>>>>> + *
>>>>> + * If an error occurs during the above sequence, the vendor driver can return
>>>>> + * an error code for next read() or write() operation, which will terminate the
>>>>> + * loop. The user application should then take the next necessary action, for
>>>>> + * example, failing migration or terminating the user application.
>>>>> + *
>>>>> + * The user application can transition from the _SAVING|_RUNNING
>>>>> + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
>>>>> + * number of pending bytes. The user application should iterate in _SAVING
>>>>> + * (stop-and-copy) until pending_bytes is 0.
>>>>> + *
>>>>> + * The sequence to be followed while _RESUMING device state is as follows:
>>>>> + * While data for this device is available, repeat the following steps:
>>>>> + * a. Read data_offset from where the user application should write data.
>>>>> + * b. Write migration data starting at the migration region + data_offset for
>>>>> + *    the length determined by data_size from the migration source.
>>>>> + * c. Write data_size, which indicates to the vendor driver that data is
>>>>> + *    written in the migration region. Vendor driver should apply the
>>>>> + *    user-provided migration region data to the device resume state.
>>>>> + *
>>>>> + * For the user application, data is opaque. The user application should write
>>>>> + * data in the same order as the data is received and the data should be of
>>>>> + * same transaction size at the source.
>>>>> + */
>>>>> +
>>>>> +struct vfio_device_migration_info {
>>>>> +	__u32 device_state;         /* VFIO device state */
>>>>> +#define VFIO_DEVICE_STATE_STOP      (0)
>>>>> +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
>>>>> +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
>>>>> +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
>>>>> +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
>>>>> +				     VFIO_DEVICE_STATE_SAVING |  \
>>>>> +				     VFIO_DEVICE_STATE_RESUMING)
>>>>> +
>>>>> +#define VFIO_DEVICE_STATE_VALID(state) \
>>>>> +	(state & VFIO_DEVICE_STATE_RESUMING ? \
>>>>> +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
>>>>> +
>>>>> +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
>>>>> +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
>>>>> +					      VFIO_DEVICE_STATE_RESUMING))
>>>>> +
>>>>> +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
>>>>> +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
>>>>> +					     VFIO_DEVICE_STATE_RESUMING)
>>>>> +
>>>>> +	__u32 reserved;
>>>>> +	__u64 pending_bytes;
>>>>> +	__u64 data_offset;
>>>>> +	__u64 data_size;
>>>>> +} __attribute__((packed));
>>>>> +
>>>>>  /*
>>>>>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
>>>>>   * which allows direct access to non-MSIX registers which happened to be within
>>>>> -- 
>>>>> 2.7.0
>>>>>     
>>>>   
>>>   
>>
> 


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-23 11:45   ` Auger Eric
@ 2020-03-24 19:14     ` Kirti Wankhede
  0 siblings, 0 replies; 47+ messages in thread
From: Kirti Wankhede @ 2020-03-24 19:14 UTC (permalink / raw)
  To: Auger Eric, alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm



On 3/23/2020 5:15 PM, Auger Eric wrote:
> Hi Kirti,
> 
> On 3/18/20 8:41 PM, Kirti Wankhede wrote:
>> - Defined MIGRATION region type and sub-type.
>>
>> - Defined vfio_device_migration_info structure which will be placed at the
>>    0th offset of migration region to get/set VFIO device related
>>    information. Defined members of structure and usage on read/write access.
>>
>> - Defined device states and state transition details.
>>
>> - Defined sequence to be followed while saving and resuming VFIO device.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>>   include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 227 insertions(+)
>>
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 9e843a147ead..d0021467af53 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
>>   #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
>>   #define VFIO_REGION_TYPE_GFX                    (1)
>>   #define VFIO_REGION_TYPE_CCW			(2)
>> +#define VFIO_REGION_TYPE_MIGRATION              (3)
>>   
>>   /* sub-types for VFIO_REGION_TYPE_PCI_* */
>>   
>> @@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
>>   /* sub-types for VFIO_REGION_TYPE_CCW */
>>   #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
>>   
>> +/* sub-types for VFIO_REGION_TYPE_MIGRATION */
>> +#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
>> +
>> +/*
>> + * The structure vfio_device_migration_info is placed at the 0th offset of
>> + * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
>> + * migration information. Field accesses from this structure are only supported
>> + * at their native width and alignment. Otherwise, the result is undefined and
>> + * vendor drivers should return an error.
>> + *
>> + * device_state: (read/write)
>> + *      - The user application writes to this field to inform the vendor driver
>> + *        about the device state to be transitioned to.
>> + *      - The vendor driver should take the necessary actions to change the
>> + *        device state. After successful transition to a given state, the
>> + *        vendor driver should return success on write(device_state, state)
>> + *        system call. If the device state transition fails, the vendor driver
>> + *        should return an appropriate -errno for the fault condition.
>> + *      - On the user application side, if the device state transition fails,
>> + *	  that is, if write(device_state, state) returns an error, read
>> + *	  device_state again to determine the current state of the device from
>> + *	  the vendor driver.
>> + *      - The vendor driver should return previous state of the device unless
>> + *        the vendor driver has encountered an internal error, in which case
>> + *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
>> + *      - The user application must use the device reset ioctl to recover the
>> + *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
>> + *        indicated to be in a valid device state by reading device_state, the
>> + *        user application may attempt to transition the device to any valid
>> + *        state reachable from the current state or terminate itself.
>> + *
>> + *      device_state consists of 3 bits:
>> + *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
>> + *        it indicates the _STOP state. When the device state is changed to
>> + *        _STOP, driver should stop the device before write() returns.
>> + *      - If bit 1 is set, it indicates the _SAVING state, which means that the
>> + *        driver should start gathering device state information that will be
>> + *        provided to the VFIO user application to save the device's state.
>> + *      - If bit 2 is set, it indicates the _RESUMING state, which means that
>> + *        the driver should prepare to resume the device. Data provided through
>> + *        the migration region should be used to resume the device.
>> + *      Bits 3 - 31 are reserved for future use. To preserve them, the user
>> + *      application should perform a read-modify-write operation on this
>> + *      field when modifying the specified bits.
>> + *
>> + *  +------- _RESUMING
>> + *  |+------ _SAVING
>> + *  ||+----- _RUNNING
>> + *  |||
>> + *  000b => Device Stopped, not saving or resuming
>> + *  001b => Device running, which is the default state
>> + *  010b => Stop the device & save the device state, stop-and-copy state
>> + *  011b => Device running and save the device state, pre-copy state
>> + *  100b => Device stopped and the device state is resuming
>> + *  101b => Invalid state
>> + *  110b => Error state
>> + *  111b => Invalid state
>> + *
>> + * State transitions:
>> + *
>> + *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
>> + *                (100b)     (001b)     (011b)        (010b)       (000b)
>> + * 0. Running or default state
>> + *                             |
>> + *
>> + * 1. Normal Shutdown (optional)
>> + *                             |------------------------------------->|
>> + *
>> + * 2. Save the state or suspend
>> + *                             |------------------------->|---------->|
>> + *
>> + * 3. Save the state during live migration
>> + *                             |----------->|------------>|---------->|
>> + *
>> + * 4. Resuming
>> + *                  |<---------|
>> + *
>> + * 5. Resumed
>> + *                  |--------->|
>> + *
>> + * 0. Default state of VFIO device is _RUNNNG when the user application starts.
>> + * 1. During normal shutdown of the user application, the user application may
>> + *    optionally change the VFIO device state from _RUNNING to _STOP. This
>> + *    transition is optional. The vendor driver must support this transition but
>> + *    must not require it.
>> + * 2. When the user application saves state or suspends the application, the
>> + *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
>> + *    On state transition from _RUNNING to stop-and-copy, driver must stop the
>> + *    device, save the device state and send it to the application through the
>> + *    migration region. The sequence to be followed for such transition is given
>> + *    below.
>> + * 3. In live migration of user application, the state transitions from _RUNNING
>> + *    to pre-copy, to stop-and-copy, and to _STOP.
>> + *    On state transition from _RUNNING to pre-copy, the driver should start
>> + *    gathering the device state while the application is still running and send
>> + *    the device state data to application through the migration region.
>> + *    On state transition from pre-copy to stop-and-copy, the driver must stop
>> + *    the device, save the device state and send it to the user application
>> + *    through the migration region.
>> + *    Vendor drivers must support the pre-copy state even for implementations
>> + *    where no data is provided to the user before the stop-and-copy state. The
>> + *    user must not be required to consume all migration data before the device
>> + *    transitions to a new state, including the stop-and-copy state.
>> + *    The sequence to be followed for above two transitions is given below.
>> + * 4. To start the resuming phase, the device state should be transitioned from
>> + *    the _RUNNING to the _RESUMING state.
>> + *    In the _RESUMING state, the driver should use the device state data
>> + *    received through the migration region to resume the device.
>> + * 5. After providing saved device data to the driver, the application should
>> + *    change the state from _RESUMING to _RUNNING.
>> + *
>> + * reserved:
>> + *      Reads on this field return zero and writes are ignored.
>> + *
>> + * pending_bytes: (read only)
>> + *      The number of pending bytes still to be migrated from the vendor driver.
>> + *
>> + * data_offset: (read only)
>> + *      The user application should read data_offset in the migration region
>> + *      from where the user application should read the device data during the
>> + *      _SAVING state or write the device data during the _RESUMING state.
> The sentence above is a bit complex to read and was not understandable
> [to me] at first shot. Maybe something like:
> offset of the saved data within the migration region. The data at this
> offset gets read by the user application during the _SAVING transition
> and written by this latter during the _RESUMING state.
> 

Changing it to:
  * data_offset: (read only)
  *      The user application should read data_offset field from the 
migration
  *      region. The user application should read the device data from this
  *      offset within the migration region during the _SAVING state or 
write
  *      the device data during the _RESUMING state. See below for 
details of
  *      sequence to be followed.



>   See
>> + *      below for details of sequence to be followed.
>> + *
>> + * data_size: (read/write)
>> + *      The user application should read data_size to get the size in bytes of
>> + *      the data copied in the migration region during the _SAVING state and
>> + *      write the size in bytes of the data copied in the migration region
>> + *      during the _RESUMING state.
> any alignment constraints on the data_size when restoring data?

Its mentioned at the end of this bit comment
"The user application should write data in the same order as the data is 
received and the data should be of same transaction size at the source."


> when saving data, also the data_size should be properly aligned I guess.

No, data size is in bytes. data_offset should be page aligned if data 
section is mmaped.

>> + *
>> + * The format of the migration region is as follows:
>> + *  ------------------------------------------------------------------
>> + * |vfio_device_migration_info|    data section                      |
>> + * |                          |     ///////////////////////////////  |
>> + * ------------------------------------------------------------------
>> + *   ^                              ^
>> + *  offset 0-trapped part        data_offset
>> + *
>> + * The structure vfio_device_migration_info is always followed by the data
>> + * section in the region, so data_offset will always be nonzero
> maybe add, whatever the data content

Sorry, I didn't understand this comment.

> . The offset
>> + * from where the data is copied is decided by the kernel driver. The data
>> + * section can be trapped, mapped, or partitioned, depending on how the kernel
> nit: maybe use the mmap terminology everywhere
>> + * driver defines the data section. The data section partition can be defined
>> + * as mapped by the sparse mmap capability. If mmapped, data_offset should be
> s/should/must
>> + * page aligned, whereas initial section which contains the
>> + * vfio_device_migration_info structure, might not end at the offset, which is
>> + * page aligned. The user is not required to access through mmap regardless
>> + * of the capabilities of the region mmap.
>> + * The vendor driver should determine whether and how to partition the data
>> + * section. The vendor driver should return data_offset accordingly.
>> + *
>> + * The sequence to be followed for the _SAVING|_RUNNING device state or> + * pre-copy phase and for the _SAVING device state or stop-and-copy
> phase is as
>> + * follows:
> Could we only talk about states mentionned in state transition drawing?
> I think it would be simpler to follow.
>> + * a. Read pending_bytes, indicating the start of a new iteration to get device
>> + *    data. Repeated read on pending_bytes at this stage should have no side
>> + *    effects.
>> + *    If pending_bytes == 0, the user application should not iterate to get data
>> + *    for that device.
>> + *    If pending_bytes > 0, perform the following steps.
> Does (!pending_bytes) really means that the pre-copy migration is over
> and the user app should stop iterating? I understand the device still
> runs. We may have completed the migration at some point (pending_bytes
> == 0) but for some reason the device resumes some activity and updates
> some new dirty bits? Or is there any auto-transition from pre-copy to
> stopped?

!pending_bytes means that vendor driver don't have any more data in 
pre-copy state but that doesn't mean that user applications/QEMU's 
pre-copy is over. Dirty pages are not part of this data, dirty pages are 
tracked separately.
There is auto transition from pre-copy to stopped state.

>> + * b. Read data_offset, indicating that the vendor driver should make data
>> + *    available through the data section. The vendor driver should return this
>> + *    read operation only after data is available from (region + data_offset)
>> + *    to (region + data_offset + data_size).
>> + * c. Read data_size, which is the amount of data in bytes available through
>> + *    the migration region.
>> + *    Read on data_offset and data_size should return the offset and size of
>> + *    the current buffer if the user application reads data_offset and
>> + *    data_size more than once here.
>> + * d. Read data_size bytes of data from (region + data_offset) from the
>> + *    migration region.
>> + * e. Process the data.
>> + * f. Read pending_bytes, which indicates that the data from the previous
>> + *    iteration has been read. If pending_bytes > 0, go to step b.
> If I understand correctly this is a way for the userapp to ack the fact
> it consumed the data_size, right? So only after f) pending_bytes -=
> data_size, is that correct?
> 

That's right.

> Sorry I am showing late on the review and have missed lots of
> discussions. Just for my curiosity was a ring buffer considered at some
> point with prod and cons index?

What is ring buffer?

>> + *
>> + * If an error occurs during the above sequence, the vendor driver can return
>> + * an error code for next read() or write() operation, which will terminate the
> which write operation? I think write ops are detailed after in the
> resume process, right?

Right, this error handling is applicable to both sequences. moving this 
comment below resume sequence.


>> + * loop. The user application should then take the next necessary action, for
>> + * example, failing migration or terminating the user application.
>> + *
>> + * The user application can transition from the _SAVING|_RUNNING
>> + * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
>> + * number of pending bytes. The user application should iterate in _SAVING
>> + * (stop-and-copy) until pending_bytes is 0.
>> + *
>> + * The sequence to be followed while _RESUMING device state is as follows:
>> + * While data for this device is available, repeat the following steps:
>> + * a. Read data_offset from where the user application should write data.
>> + * b. Write migration data starting at the migration region + data_offset for
>> + *    the length determined by data_size from the migration source.
>> + * c. Write data_size, which indicates to the vendor driver that data is
>> + *    written in the migration region. Vendor driver should apply the
>> + *    user-provided migration region data to the device resume state.
> How does the userapp know when the data it wrote has been consumed by
> the device?

On write(data_size), vendor driver should return only after consuming data.

Thanks,
Kirti

>> + *
>> + * For the user application, data is opaque. The user application should write
>> + * data in the same order as the data is received and the data should be of
>> + * same transaction size at the source.
>> + */
>> +
>> +struct vfio_device_migration_info {
>> +	__u32 device_state;         /* VFIO device state */
>> +#define VFIO_DEVICE_STATE_STOP      (0)
>> +#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
>> +#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
>> +#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
>> +#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
>> +				     VFIO_DEVICE_STATE_SAVING |  \
>> +				     VFIO_DEVICE_STATE_RESUMING)
>> +
>> +#define VFIO_DEVICE_STATE_VALID(state) \
>> +	(state & VFIO_DEVICE_STATE_RESUMING ? \
>> +	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
>> +
>> +#define VFIO_DEVICE_STATE_IS_ERROR(state) \
>> +	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
>> +					      VFIO_DEVICE_STATE_RESUMING))
>> +
>> +#define VFIO_DEVICE_STATE_SET_ERROR(state) \
>> +	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \
>> +					     VFIO_DEVICE_STATE_RESUMING)
>> +
>> +	__u32 reserved;
>> +	__u64 pending_bytes;
>> +	__u64 data_offset;
>> +	__u64 data_size;
>> +} __attribute__((packed));
>> +
>>   /*
>>    * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
>>    * which allows direct access to non-MSIX registers which happened to be within
>>
> Thanks
> 
> Eric
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2020-03-24 19:15 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-18 19:41 [PATCH v14 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
2020-03-18 19:41 ` [PATCH v14 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
2020-03-19  1:17   ` Yan Zhao
2020-03-19  3:49     ` Alex Williamson
2020-03-19  5:05       ` Yan Zhao
2020-03-19 13:09         ` Alex Williamson
2020-03-20  1:30           ` Yan Zhao
2020-03-20  2:34             ` Alex Williamson
2020-03-20  3:06               ` Yan Zhao
2020-03-20  4:09                 ` Alex Williamson
2020-03-20  4:20                   ` Yan Zhao
2020-03-23 14:45           ` Auger Eric
2020-03-23 11:45   ` Auger Eric
2020-03-24 19:14     ` Kirti Wankhede
2020-03-18 19:41 ` [PATCH v14 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages Kirti Wankhede
2020-03-23 11:59   ` Auger Eric
2020-03-18 19:41 ` [PATCH v14 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking Kirti Wankhede
2020-03-19  3:44   ` Alex Williamson
2020-03-18 19:41 ` [PATCH v14 Kernel 4/7] vfio iommu: Implementation of ioctl " Kirti Wankhede
2020-03-19  3:06   ` Yan Zhao
2020-03-19  4:01     ` Alex Williamson
2020-03-19  4:15       ` Yan Zhao
2020-03-19  4:40         ` Alex Williamson
2020-03-19  6:15           ` Yan Zhao
2020-03-19 13:06             ` Alex Williamson
2020-03-19 16:57               ` Kirti Wankhede
2020-03-20  0:51                 ` Yan Zhao
2020-03-19  3:45   ` Alex Williamson
2020-03-19 14:52     ` Kirti Wankhede
2020-03-19 16:22       ` Alex Williamson
2020-03-19 20:25         ` Kirti Wankhede
2020-03-19 20:54           ` Alex Williamson
2020-03-19 18:57     ` Kirti Wankhede
2020-03-18 19:41 ` [PATCH v14 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
2020-03-19  3:45   ` Alex Williamson
2020-03-20  8:35   ` Yan Zhao
2020-03-20 15:40     ` Alex Williamson
2020-03-20 15:47       ` Alex Williamson
2020-03-20 19:14         ` Kirti Wankhede
2020-03-20 19:28           ` Alex Williamson
2020-03-23  1:10             ` Yan Zhao
2020-03-18 19:41 ` [PATCH v14 Kernel 6/7] vfio iommu: Adds flag to indicate dirty pages tracking capability support Kirti Wankhede
2020-03-18 19:41 ` [PATCH v14 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
2020-03-19  3:45   ` Alex Williamson
2020-03-19  6:24   ` Yan Zhao
2020-03-20 19:41     ` Alex Williamson
2020-03-23  2:43       ` Yan Zhao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).