kvm.vger.kernel.org archive mirror
* [PATCH v13 Kernel 0/7] KABIs to support migration for VFIO devices
@ 2020-03-12 17:53 Kirti Wankhede
  2020-03-12 17:53 ` [PATCH v13 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-12 17:53 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Hi,

This patch set adds:
* New IOCTL VFIO_IOMMU_DIRTY_PAGES to get the dirty pages bitmap with
  respect to the IOMMU container rather than per device. All pages pinned
  by the vendor driver through the vfio_pin_pages external API have to be
  marked as dirty during migration. When an IOMMU capable device is present
  in the container and all pages are pinned and mapped, then all pages are
  marked dirty.
  CPU dirty page tracking can identify pages dirtied by CPU writes, but any
  page pinned by the vendor driver can also be written by the device. Since
  no device currently has hardware support for dirty page tracking, all
  pinned pages should be considered dirty.
  This ioctl is also used to start/stop dirty pages tracking for pinned and
  unpinned pages while migration is active. (An illustrative ordering
  sketch of the new interfaces follows this list.)

* Updated IOCTL VFIO_IOMMU_UNMAP_DMA to get the dirty pages bitmap before
  unmapping an IO virtual address range.
  With vIOMMU, during the pre-copy phase of migration, while CPUs are still
  running, an IO virtual address unmap can happen while the device still
  holds references to guest pfns. Those pages should be reported as dirty
  before the unmap so that the VFIO user space application can copy the
  content of those pages from source to destination.

* Patch 7 is a proposed change to detect whether the driver of an IOMMU
  capable device reports the pages to be marked dirty by pinning them
  through the vfio_pin_pages() API.
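
As shown in the sketch below, the expected ordering at the container level
is: start dirty tracking when migration begins, query the bitmap on every
iteration, and stop tracking when migration completes or is cancelled. This
is only an illustration; container_fd and error handling are assumptions of
the sketch, and the GET_BITMAP and unmap variants are shown in patches 3
and 5.

  struct vfio_iommu_type1_dirty_bitmap dirty = { .argsz = sizeof(dirty) };

  /* migration starts: begin tracking pinned and unpinned pages */
  dirty.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START;
  ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &dirty);

  /*
   * Each iteration: VFIO_IOMMU_DIRTY_PAGES with _GET_BITMAP per vfio_dma
   * range; vIOMMU unmaps during pre-copy use VFIO_IOMMU_UNMAP_DMA with
   * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP.
   */

  /* migration completes, fails or is cancelled: stop tracking */
  dirty.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP;
  ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &dirty);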


Yet TODO:
Since no device currently has hardware support for system memory dirty
bitmap tracking, there is no other API from the vendor driver to the VFIO
IOMMU module to report dirty pages. In the future, when such hardware
support is implemented, an API will be required so that the vendor driver
can report dirty pages to the VFIO module during the migration phases.

Revision history from the previous QEMU patch set is included below to show
the KABI changes made so far.

v12 -> v13
- Changed bitmap allocation in vfio_iommu_type1 to per vfio_dma
- Changed VFIO_IOMMU_DIRTY_PAGES ioctl behaviour to be per vfio_dma range.
- Changed vfio_iommu_type1_dirty_bitmap structure to have separate data
  field.

v11 -> v12
- Changed bitmap allocation in vfio_iommu_type1.
- Remove atomicity of ref_count.
- Updated comments for migration device state structure about error
  reporting.
- Nit picks from v11 reviews

v10 -> v11
- Fix pin pages API to free vpfn if it is marked as unpinned tracking page.
- Added proposal to detect if IOMMU capable device calls external pin pages
  API to mark pages dirty.
- Nit picks from v10 reviews

v9 -> v10:
- Updated existing VFIO_IOMMU_UNMAP_DMA ioctl to get the dirty pages bitmap
  during unmap while migration is active.
- Added flag in VFIO_IOMMU_GET_INFO to indicate that the driver supports
  dirty page tracking.
- If iommu_mapped, mark all pages dirty.
- Added unpinned pages tracking while migration is active.
- Updated comments for migration device state structure with bit
  combination table and state transition details.

v8 -> v9:
- Split patch set in 2 sets, Kernel and QEMU.
- Dirty pages bitmap is queried from the IOMMU container rather than from
  the vendor driver per device. Added 2 ioctls to achieve this.

v7 -> v8:
- Updated comments for KABI
- Added BAR address validation check during PCI device's config space load
  as suggested by Dr. David Alan Gilbert.
- Changed vfio_migration_set_state() to set or clear device state flags.
- Some nit fixes.

v6 -> v7:
- Fix build failures.

v5 -> v6:
- Fix build failure.

v4 -> v5:
- Added descriptive comment about the sequence of access of members of
  structure vfio_device_migration_info to be followed, based on Alex's
  suggestion.
- Updated get dirty pages sequence.
- As per Cornelia Huck's suggestion, added callbacks to VFIODeviceOps to
  get_object, save_config and load_config.
- Fixed multiple nit picks.
- Tested live migration with multiple vfio devices assigned to a VM.

v3 -> v4:
- Added one more bit for _RESUMING flag to be set explicitly.
- data_offset field is read-only for user space application.
- data_size is read on every iteration before reading data from the
  migration region, removing the assumption that data extends to the end of
  the migration region.
- If the vendor driver supports mappable sparse regions, map those regions
  during the setup state of save/load and unmap them in the cleanup
  routines.
- Handled a race condition that caused data corruption in the migration
  region during save of device state, by adding a mutex and serializing the
  save_buffer and get_dirty_pages routines.
- Skip calling the get_dirty_pages routine for the mapped MMIO region of
  the device.
- Added trace events.
- Split into multiple functional patches.

v2 -> v3:
- Removed enum of VFIO device states. Defined VFIO device state with 2
  bits.
- Re-structured vfio_device_migration_info to keep it minimal and defined
  action on read and write access on its members.

v1 -> v2:
- Defined MIGRATION region type and sub-type which should be used with
  region type capability.
- Re-structured vfio_device_migration_info. This structure will be placed
  at 0th offset of migration region.
- Replaced ioctl with read/write for trapped part of migration region.
- Added both type of access support, trapped or mmapped, for data section
  of the region.
- Moved PCI device functions to pci file.
- Added iteration to get the dirty page bitmap until the bitmap for all
  requested pages is copied.

Thanks,
Kirti



Kirti Wankhede (7):
  vfio: KABI for migration interface for device state
  vfio iommu: Remove atomicity of ref_count of pinned pages
  vfio iommu: Add ioctl definition for dirty pages tracking.
  vfio iommu: Implementation of ioctl to for dirty pages tracking.
  vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  vfio iommu: Adds flag to indicate dirty pages tracking capability
    support
  vfio: Selective dirty page tracking if IOMMU backed device pins pages

 drivers/vfio/vfio.c             |   9 +-
 drivers/vfio/vfio_iommu_type1.c | 386 ++++++++++++++++++++++++++++++++++++++--
 include/linux/vfio.h            |   4 +-
 include/uapi/linux/vfio.h       | 295 +++++++++++++++++++++++++++++-
 4 files changed, 673 insertions(+), 21 deletions(-)

-- 
2.7.0


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v13 Kernel 1/7] vfio: KABI for migration interface for device state
  2020-03-12 17:53 [PATCH v13 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
@ 2020-03-12 17:53 ` Kirti Wankhede
  2020-03-12 17:53 ` [PATCH v13 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages Kirti Wankhede
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-12 17:53 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

- Defined MIGRATION region type and sub-type.

- Defined vfio_device_migration_info structure, which will be placed at the
  0th offset of the migration region to get/set VFIO device related
  information. Defined the structure's members and their usage on
  read/write access.

- Defined device states and state transition details.

- Defined the sequence to be followed while saving and resuming a VFIO
  device (an illustrative user-space sketch follows).
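
A minimal user-space sketch of the pre-copy save sequence documented in the
header comment below; the function name, the device_fd/migr_off/buf
parameters and the pread/pwrite access pattern on the trapped part of the
region are illustrative assumptions of this sketch, and error handling is
omitted:

  #include <stddef.h>
  #include <unistd.h>
  #include <linux/vfio.h>

  static void save_iterate(int device_fd, off_t migr_off, void *buf)
  {
          __u64 pending, data_offset, data_size;
          __u32 state;

          /* read-modify-write device_state to enter pre-copy (_SAVING | _RUNNING) */
          pread(device_fd, &state, sizeof(state), migr_off +
                offsetof(struct vfio_device_migration_info, device_state));
          state |= VFIO_DEVICE_STATE_SAVING;
          pwrite(device_fd, &state, sizeof(state), migr_off +
                 offsetof(struct vfio_device_migration_info, device_state));

          for (;;) {
                  /* a. read pending_bytes; 0 means no data to iterate on */
                  pread(device_fd, &pending, sizeof(pending), migr_off +
                        offsetof(struct vfio_device_migration_info, pending_bytes));
                  if (!pending)
                          break;

                  /* b./c. read data_offset, then data_size */
                  pread(device_fd, &data_offset, sizeof(data_offset), migr_off +
                        offsetof(struct vfio_device_migration_info, data_offset));
                  pread(device_fd, &data_size, sizeof(data_size), migr_off +
                        offsetof(struct vfio_device_migration_info, data_size));

                  /* d./e. read and process data_size bytes of opaque device data */
                  pread(device_fd, buf, data_size, migr_off + data_offset);
                  /* ... send buf to the destination ... */
          }
  }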

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 include/uapi/linux/vfio.h | 227 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 227 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 9e843a147ead..be34ec278710 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -305,6 +305,7 @@ struct vfio_region_info_cap_type {
 #define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
 #define VFIO_REGION_TYPE_GFX                    (1)
 #define VFIO_REGION_TYPE_CCW			(2)
+#define VFIO_REGION_TYPE_MIGRATION              (3)
 
 /* sub-types for VFIO_REGION_TYPE_PCI_* */
 
@@ -379,6 +380,232 @@ struct vfio_region_gfx_edid {
 /* sub-types for VFIO_REGION_TYPE_CCW */
 #define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
 
+/* sub-types for VFIO_REGION_TYPE_MIGRATION */
+#define VFIO_REGION_SUBTYPE_MIGRATION           (1)
+
+/*
+ * The structure vfio_device_migration_info is placed at the 0th offset of
+ * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related
+ * migration information. Field accesses from this structure are only supported
+ * at their native width and alignment. Otherwise, the result is undefined and
+ * vendor drivers should return an error.
+ *
+ * device_state: (read/write)
+ *      - The user application writes to this field to inform the vendor driver
+ *        about the device state to be transitioned to.
+ *      - The vendor driver should take the necessary actions to change the
+ *        device state. After successful transition to a given state, the
+ *        vendor driver should return success on write(device_state, state)
+ *        system call. If the device state transition fails, the vendor driver
+ *        should return an appropriate -errno for the fault condition.
+ *      - On the user application side, if the device state transition fails,
+ *	  that is, if write(device_state, state) returns an error, read
+ *	  device_state again to determine the current state of the device from
+ *	  the vendor driver.
+ *      - The vendor driver should return previous state of the device unless
+ *        the vendor driver has encountered an internal error, in which case
+ *        the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR.
+ *      - The user application must use the device reset ioctl to recover the
+ *        device from VFIO_DEVICE_STATE_ERROR state. If the device is
+ *        indicated to be in a valid device state by reading device_state, the
+ *        user application may attempt to transition the device to any valid
+ *        state reachable from the current state or terminate itself.
+ *
+ *      device_state consists of 3 bits:
+ *      - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear,
+ *        it indicates the _STOP state. When the device state is changed to
+ *        _STOP, driver should stop the device before write() returns.
+ *      - If bit 1 is set, it indicates the _SAVING state, which means that the
+ *        driver should start gathering device state information that will be
+ *        provided to the VFIO user application to save the device's state.
+ *      - If bit 2 is set, it indicates the _RESUMING state, which means that
+ *        the driver should prepare to resume the device. Data provided through
+ *        the migration region should be used to resume the device.
+ *      Bits 3 - 31 are reserved for future use. To preserve them, the user
+ *      application should perform a read-modify-write operation on this
+ *      field when modifying the specified bits.
+ *
+ *  +------- _RESUMING
+ *  |+------ _SAVING
+ *  ||+----- _RUNNING
+ *  |||
+ *  000b => Device Stopped, not saving or resuming
+ *  001b => Device running, which is the default state
+ *  010b => Stop the device & save the device state, stop-and-copy state
+ *  011b => Device running and save the device state, pre-copy state
+ *  100b => Device stopped and the device state is resuming
+ *  101b => Invalid state
+ *  110b => Error state
+ *  111b => Invalid state
+ *
+ * State transitions:
+ *
+ *              _RESUMING  _RUNNING    Pre-copy    Stop-and-copy   _STOP
+ *                (100b)     (001b)     (011b)        (010b)       (000b)
+ * 0. Running or default state
+ *                             |
+ *
+ * 1. Normal Shutdown (optional)
+ *                             |------------------------------------->|
+ *
+ * 2. Save the state or suspend
+ *                             |------------------------->|---------->|
+ *
+ * 3. Save the state during live migration
+ *                             |----------->|------------>|---------->|
+ *
+ * 4. Resuming
+ *                  |<---------|
+ *
+ * 5. Resumed
+ *                  |--------->|
+ *
+ * 0. Default state of VFIO device is _RUNNING when the user application starts.
+ * 1. During normal shutdown of the user application, the user application may
+ *    optionally change the VFIO device state from _RUNNING to _STOP. This
+ *    transition is optional. The vendor driver must support this transition but
+ *    must not require it.
+ * 2. When the user application saves state or suspends the application, the
+ *    device state transitions from _RUNNING to stop-and-copy and then to _STOP.
+ *    On state transition from _RUNNING to stop-and-copy, driver must stop the
+ *    device, save the device state and send it to the application through the
+ *    migration region. The sequence to be followed for such transition is given
+ *    below.
+ * 3. In live migration of user application, the state transitions from _RUNNING
+ *    to pre-copy, to stop-and-copy, and to _STOP.
+ *    On state transition from _RUNNING to pre-copy, the driver should start
+ *    gathering the device state while the application is still running and send
+ *    the device state data to application through the migration region.
+ *    On state transition from pre-copy to stop-and-copy, the driver must stop
+ *    the device, save the device state and send it to the user application
+ *    through the migration region.
+ *    Vendor drivers must support the pre-copy state even for implementations
+ *    where no data is provided to the user before the stop-and-copy state. The
+ *    user must not be required to consume all migration data before the device
+ *    transitions to a new state, including the stop-and-copy state.
+ *    The sequence to be followed for above two transitions is given below.
+ * 4. To start the resuming phase, the device state should be transitioned from
+ *    the _RUNNING to the _RESUMING state.
+ *    In the _RESUMING state, the driver should use the device state data
+ *    received through the migration region to resume the device.
+ * 5. After providing saved device data to the driver, the application should
+ *    change the state from _RESUMING to _RUNNING.
+ *
+ * reserved:
+ *      Reads on this field return zero and writes are ignored.
+ *
+ * pending_bytes: (read only)
+ *      The number of pending bytes still to be migrated from the vendor driver.
+ *
+ * data_offset: (read only)
+ *      The user application should read data_offset in the migration region
+ *      from where the user application should read the device data during the
+ *      _SAVING state or write the device data during the _RESUMING state. See
+ *      below for details of sequence to be followed.
+ *
+ * data_size: (read/write)
+ *      The user application should read data_size to get the size in bytes of
+ *      the data copied in the migration region during the _SAVING state and
+ *      write the size in bytes of the data copied in the migration region
+ *      during the _RESUMING state.
+ *
+ * The format of the migration region is as follows:
+ *  ------------------------------------------------------------------
+ * |vfio_device_migration_info|    data section                      |
+ * |                          |     ///////////////////////////////  |
+ * ------------------------------------------------------------------
+ *   ^                              ^
+ *  offset 0-trapped part        data_offset
+ *
+ * The structure vfio_device_migration_info is always followed by the data
+ * section in the region, so data_offset will always be nonzero. The offset
+ * from where the data is copied is decided by the kernel driver. The data
+ * section can be trapped, mapped, or partitioned, depending on how the kernel
+ * driver defines the data section. The data section partition can be defined
+ * as mapped by the sparse mmap capability. If mmapped, data_offset should be
+ * page aligned, whereas the initial section, which contains the
+ * vfio_device_migration_info structure, might not end at a page aligned
+ * offset. The user is not required to access the data section through mmap
+ * even when the region supports mmap.
+ * The vendor driver should determine whether and how to partition the data
+ * section. The vendor driver should return data_offset accordingly.
+ *
+ * The sequence to be followed for the _SAVING|_RUNNING device state or
+ * pre-copy phase and for the _SAVING device state or stop-and-copy phase is as
+ * follows:
+ * a. Read pending_bytes, indicating the start of a new iteration to get device
+ *    data. Repeated read on pending_bytes at this stage should have no side
+ *    effects.
+ *    If pending_bytes == 0, the user application should not iterate to get data
+ *    for that device.
+ *    If pending_bytes > 0, perform the following steps.
+ * b. Read data_offset, indicating that the vendor driver should make data
+ *    available through the data section. The vendor driver should return this
+ *    read operation only after data is available from (region + data_offset)
+ *    to (region + data_offset + data_size).
+ * c. Read data_size, which is the amount of data in bytes available through
+ *    the migration region.
+ *    Read on data_offset and data_size should return the offset and size of
+ *    the current buffer if the user application reads data_offset and
+ *    data_size more than once here.
+ * d. Read data_size bytes of data from (region + data_offset) from the
+ *    migration region.
+ * e. Process the data.
+ * f. Read pending_bytes, which indicates that the data from the previous
+ *    iteration has been read. If pending_bytes > 0, go to step b.
+ *
+ * If an error occurs during the above sequence, the vendor driver can return
+ * an error code for next read() or write() operation, which will terminate the
+ * loop. The user application should then take the next necessary action, for
+ * example, failing migration or terminating the user application.
+ *
+ * The user application can transition from the _SAVING|_RUNNING
+ * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the
+ * number of pending bytes. The user application should iterate in _SAVING
+ * (stop-and-copy) until pending_bytes is 0.
+ *
+ * The sequence to be followed while _RESUMING device state is as follows:
+ * While data for this device is available, repeat the following steps:
+ * a. Read data_offset from where the user application should write data.
+ * b. Write migration data starting at the migration region + data_offset for
+ *    the length determined by data_size from the migration source.
+ * c. Write data_size, which indicates to the vendor driver that data is
+ *    written in the migration region. Vendor driver should apply the
+ *    user-provided migration region data to the device resume state.
+ *
+ * For the user application, the data is opaque. The user application should
+ * write the data in the same order as it was received, using the same
+ * transaction sizes as at the source.
+ */
+
+struct vfio_device_migration_info {
+	__u32 device_state;         /* VFIO device state */
+#define VFIO_DEVICE_STATE_STOP      (0)
+#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
+#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
+#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
+#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_RUNNING | \
+				     VFIO_DEVICE_STATE_SAVING |  \
+				     VFIO_DEVICE_STATE_RESUMING)
+
+#define VFIO_DEVICE_STATE_VALID(state) \
+	(state & VFIO_DEVICE_STATE_RESUMING ? \
+	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1)
+
+#define VFIO_DEVICE_STATE_IS_ERROR(state) \
+	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \
+					      VFIO_DEVICE_STATE_RESUMING))
+
+#define VFIO_DEVICE_STATE_SET_ERROR(state) \
+	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_STATE_SAVING | \
+					     VFIO_DEVICE_STATE_RESUMING)
+
+	__u32 reserved;
+	__u64 pending_bytes;
+	__u64 data_offset;
+	__u64 data_size;
+} __attribute__((packed));
+
 /*
  * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
  * which allows direct access to non-MSIX registers which happened to be within
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v13 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages
  2020-03-12 17:53 [PATCH v13 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
  2020-03-12 17:53 ` [PATCH v13 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
@ 2020-03-12 17:53 ` Kirti Wankhede
  2020-03-12 17:53 ` [PATCH v13 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking Kirti Wankhede
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-12 17:53 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

vfio_pfn.ref_count is always updated while holding iommu->lock, so using an
atomic variable for it is overkill.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index a177bf2c6683..d386461e5d11 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -111,7 +111,7 @@ struct vfio_pfn {
 	struct rb_node		node;
 	dma_addr_t		iova;		/* Device address */
 	unsigned long		pfn;		/* Host pfn */
-	atomic_t		ref_count;
+	unsigned int		ref_count;
 };
 
 struct vfio_regions {
@@ -232,7 +232,7 @@ static int vfio_add_to_pfn_list(struct vfio_dma *dma, dma_addr_t iova,
 
 	vpfn->iova = iova;
 	vpfn->pfn = pfn;
-	atomic_set(&vpfn->ref_count, 1);
+	vpfn->ref_count = 1;
 	vfio_link_pfn(dma, vpfn);
 	return 0;
 }
@@ -250,7 +250,7 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
 	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
 
 	if (vpfn)
-		atomic_inc(&vpfn->ref_count);
+		vpfn->ref_count++;
 	return vpfn;
 }
 
@@ -258,7 +258,8 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
 {
 	int ret = 0;
 
-	if (atomic_dec_and_test(&vpfn->ref_count)) {
+	vpfn->ref_count--;
+	if (!vpfn->ref_count) {
 		ret = put_pfn(vpfn->pfn, dma->prot);
 		vfio_remove_from_pfn_list(dma, vpfn);
 	}
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v13 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking.
  2020-03-12 17:53 [PATCH v13 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
  2020-03-12 17:53 ` [PATCH v13 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
  2020-03-12 17:53 ` [PATCH v13 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages Kirti Wankhede
@ 2020-03-12 17:53 ` Kirti Wankhede
  2020-03-12 17:53 ` [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to " Kirti Wankhede
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-12 17:53 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

The IOMMU container maintains a list of all pages pinned by the
vfio_pin_pages API. All pages pinned by the vendor driver through this API
should be considered dirty during migration. When the container contains an
IOMMU capable device and all pages are pinned and mapped, then all pages
are marked dirty.
Added support to start/stop tracking of pinned and unpinned pages and to
get the bitmap of all dirtied pages for a requested IO virtual address
range.
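
A minimal user-space sketch of the GET_BITMAP operation defined below; the
function name, the caller-supplied container_fd/iova/size/pgsize values and
the bitmap sizing are illustrative assumptions of this sketch, and error
handling is omitted:

  #include <stdlib.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  static __u64 *get_dirty_bitmap(int container_fd, __u64 iova, __u64 size,
                                 __u64 pgsize)
  {
          size_t argsz = sizeof(struct vfio_iommu_type1_dirty_bitmap) +
                         sizeof(struct vfio_iommu_type1_dirty_bitmap_get);
          struct vfio_iommu_type1_dirty_bitmap *dbitmap = malloc(argsz);
          struct vfio_iommu_type1_dirty_bitmap_get *range = (void *)dbitmap->data;
          /* one bit per page, rounded up to a multiple of 64-bit words, zeroed */
          __u64 bitmap_bytes = ((size / pgsize + 63) / 64) * 8;
          __u64 *bitmap = calloc(1, bitmap_bytes);

          memset(dbitmap, 0, argsz);
          dbitmap->argsz = argsz;
          dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
          range->iova = iova;
          range->size = size;
          range->pgsize = pgsize;            /* smallest supported page size */
          range->bitmap_size = bitmap_bytes;
          range->bitmap = bitmap;

          ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
          free(dbitmap);

          /* bit N set => page at iova + N * pgsize is dirty */
          return bitmap;
  }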

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 include/uapi/linux/vfio.h | 51 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index be34ec278710..02d555cc7036 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1021,6 +1021,57 @@ struct vfio_iommu_type1_dma_unmap {
 #define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
 #define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
 
+/**
+ * VFIO_IOMMU_DIRTY_PAGES - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *                                     struct vfio_iommu_type1_dirty_bitmap)
+ * This IOCTL is used for dirty pages tracking. The caller sets argsz, which is
+ * the size of struct vfio_iommu_type1_dirty_bitmap, and sets flags depending
+ * on which operation to perform, as detailed below:
+ *
+ * When the IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_START set, it
+ * indicates that migration is active and the IOMMU module should track pages
+ * which are being unpinned. Unpinned pages are tracked until the user
+ * application stops tracking by setting the VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP
+ * flag.
+ *
+ * When the IOCTL is called with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP set, it
+ * indicates that the IOMMU should stop tracking unpinned pages and free the
+ * previously tracked unpinned pages data.
+ *
+ * When the IOCTL is called with the VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP
+ * flag set, the IOCTL returns the dirty pages bitmap of the IOMMU container,
+ * during migration, for the given IOVA range. The user must provide data[] as
+ * the structure vfio_iommu_type1_dirty_bitmap_get, through which the user
+ * provides the IOVA range and pgsize. This interface only supports getting the
+ * bitmap at the smallest supported pgsize and can be extended in the future to
+ * get the bitmap at a specified pgsize. The user must allocate memory for the
+ * bitmap, zero the bitmap memory and set the size of the allocated memory in
+ * the bitmap_size field. One bit is used to represent one page, consecutively
+ * starting from the iova offset. The user should provide the page size in
+ * 'pgsize'. A set bit in the bitmap indicates that the page at that offset
+ * from iova is dirty. The caller must set argsz to include the size of
+ * structure vfio_iommu_type1_dirty_bitmap_get.
+ *
+ * Only one flag should be set at a time.
+ *
+ */
+struct vfio_iommu_type1_dirty_bitmap {
+	__u32        argsz;
+	__u32        flags;
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_START	(1 << 0)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP	(1 << 1)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP	(1 << 2)
+	__u8         data[];
+};
+
+struct vfio_iommu_type1_dirty_bitmap_get {
+	__u64        iova;		/* IO virtual address */
+	__u64        size;		/* Size of iova range */
+	__u64	     pgsize;		/* page size for bitmap */
+	__u64        bitmap_size;	/* in bytes */
+	__u64 __user *bitmap;		/* one bit per page */
+};
+
+#define VFIO_IOMMU_DIRTY_PAGES             _IO(VFIO_TYPE, VFIO_BASE + 17)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to for dirty pages tracking.
  2020-03-12 17:53 [PATCH v13 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
                   ` (2 preceding siblings ...)
  2020-03-12 17:53 ` [PATCH v13 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking Kirti Wankhede
@ 2020-03-12 17:53 ` Kirti Wankhede
  2020-03-13 18:13   ` Alex Williamson
  2020-03-18 12:13   ` Dr. David Alan Gilbert
  2020-03-12 17:53 ` [PATCH v13 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-12 17:53 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
- Start pinned and unpinned pages tracking while migration is active
- Stop pinned and unpinned dirty pages tracking. This is also used to
  stop dirty pages tracking if migration failed or cancelled.
- Get dirty pages bitmap. This ioctl returns the bitmap of dirty pages; it
  is the user space application's responsibility to copy the content of the
  dirty pages from source to destination during migration.

To prevent a DoS attack, memory for the bitmap is allocated per vfio_dma
structure. The bitmap size is calculated considering the smallest supported
page size. When dirty logging is enabled, the bitmap is allocated for those
vfio_dmas whose vpfn list is not empty or, in the case of a pass-through
device, whose whole range is mapped.
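
For example, with a 4 KB minimum IOMMU page size, a 1 GB vfio_dma covers
262144 pages, so its bitmap is 32 KB (one bit per page) and exists only
while dirty tracking is enabled.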

Bitmap is populated for already pinned pages when the bitmap is allocated
for a vfio_dma, using the smallest supported page size. The bitmap is then
updated from the pinning and unpinning functions. When the user application
queries the bitmap, check whether the requested page size is the same as
the page size used to populate the bitmap; if it is equal, copy the bitmap,
otherwise return an error.

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 243 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 237 insertions(+), 6 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index d386461e5d11..435e84269a28 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -70,6 +70,7 @@ struct vfio_iommu {
 	unsigned int		dma_avail;
 	bool			v2;
 	bool			nesting;
+	bool			dirty_page_tracking;
 };
 
 struct vfio_domain {
@@ -90,6 +91,7 @@ struct vfio_dma {
 	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
 	struct task_struct	*task;
 	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
+	unsigned long		*bitmap;
 };
 
 struct vfio_group {
@@ -125,6 +127,7 @@ struct vfio_regions {
 					(!list_empty(&iommu->domain_list))
 
 static int put_pfn(unsigned long pfn, int prot);
+static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
 
 /*
  * This code handles mapping and unmapping of user data buffers
@@ -174,6 +177,76 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
 	rb_erase(&old->node, &iommu->dma_list);
 }
 
+static inline unsigned long dirty_bitmap_bytes(unsigned int npages)
+{
+	if (!npages)
+		return 0;
+
+	return ALIGN(npages, BITS_PER_LONG) / sizeof(unsigned long);
+}
+
+static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, unsigned long pgsize)
+{
+	if (!RB_EMPTY_ROOT(&dma->pfn_list) || dma->iommu_mapped) {
+		unsigned long npages = dma->size / pgsize;
+
+		dma->bitmap = kvzalloc(dirty_bitmap_bytes(npages), GFP_KERNEL);
+		if (!dma->bitmap)
+			return -ENOMEM;
+	}
+	return 0;
+}
+
+static int vfio_dma_all_bitmap_alloc(struct vfio_iommu *iommu,
+				     unsigned long pgsize)
+{
+	struct rb_node *n = rb_first(&iommu->dma_list);
+	int ret;
+
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+		struct rb_node *p;
+
+		ret = vfio_dma_bitmap_alloc(dma, pgsize);
+		if (ret) {
+			/* roll back bitmaps allocated so far */
+			for (p = rb_prev(n); p; p = rb_prev(p)) {
+				struct vfio_dma *dma = rb_entry(p,
+							struct vfio_dma, node);
+
+				kfree(dma->bitmap);
+				dma->bitmap = NULL;
+			}
+			return ret;
+		}
+
+		if (!dma->bitmap)
+			continue;
+
+		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
+			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn,
+							 node);
+
+			bitmap_set(dma->bitmap,
+				   (vpfn->iova - dma->iova) / pgsize, 1);
+		}
+	}
+	}
+	return 0;
+}
+
+static void vfio_dma_all_bitmap_free(struct vfio_iommu *iommu)
+{
+	struct rb_node *n = rb_first(&iommu->dma_list);
+
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+
+		kfree(dma->bitmap);
+		dma->bitmap = NULL;
+	}
+}
+
 /*
  * Helper Functions for host iova-pfn list
  */
@@ -254,12 +327,16 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
 	return vpfn;
 }
 
-static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
+static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn,
+				  bool do_tracking, unsigned long pgsize)
 {
 	int ret = 0;
 
 	vpfn->ref_count--;
 	if (!vpfn->ref_count) {
+		if (do_tracking && dma->bitmap)
+			bitmap_set(dma->bitmap,
+				   (vpfn->iova - dma->iova) / pgsize, 1);
 		ret = put_pfn(vpfn->pfn, dma->prot);
 		vfio_remove_from_pfn_list(dma, vpfn);
 	}
@@ -484,7 +561,8 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
 }
 
 static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
-				    bool do_accounting)
+				    bool do_accounting, bool do_tracking,
+				    unsigned long pgsize)
 {
 	int unlocked;
 	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
@@ -492,7 +570,7 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
 	if (!vpfn)
 		return 0;
 
-	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn);
+	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn, do_tracking, pgsize);
 
 	if (do_accounting)
 		vfio_lock_acct(dma, -unlocked, true);
@@ -563,9 +641,26 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 
 		ret = vfio_add_to_pfn_list(dma, iova, phys_pfn[i]);
 		if (ret) {
-			vfio_unpin_page_external(dma, iova, do_accounting);
+			vfio_unpin_page_external(dma, iova, do_accounting,
+						 false, 0);
 			goto pin_unwind;
 		}
+
+		if (iommu->dirty_page_tracking) {
+			unsigned long pgshift =
+					 __ffs(vfio_pgsize_bitmap(iommu));
+
+			if (!dma->bitmap) {
+				ret = vfio_dma_bitmap_alloc(dma, 1 << pgshift);
+				if (ret) {
+					vfio_unpin_page_external(dma, iova,
+						 do_accounting, false, 0);
+					goto pin_unwind;
+				}
+			}
+			bitmap_set(dma->bitmap,
+				   (vpfn->iova - dma->iova) >> pgshift, 1);
+		}
 	}
 
 	ret = i;
@@ -578,7 +673,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 
 		iova = user_pfn[j] << PAGE_SHIFT;
 		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
-		vfio_unpin_page_external(dma, iova, do_accounting);
+		vfio_unpin_page_external(dma, iova, do_accounting, false, 0);
 		phys_pfn[j] = 0;
 	}
 pin_done:
@@ -612,7 +707,8 @@ static int vfio_iommu_type1_unpin_pages(void *iommu_data,
 		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
 		if (!dma)
 			goto unpin_exit;
-		vfio_unpin_page_external(dma, iova, do_accounting);
+		vfio_unpin_page_external(dma, iova, do_accounting,
+					 iommu->dirty_page_tracking, PAGE_SIZE);
 	}
 
 unpin_exit:
@@ -800,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 	vfio_unmap_unpin(iommu, dma, true);
 	vfio_unlink_dma(iommu, dma);
 	put_task_struct(dma->task);
+	kfree(dma->bitmap);
 	kfree(dma);
 	iommu->dma_avail++;
 }
@@ -830,6 +927,54 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 	return bitmap;
 }
 
+static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
+				  size_t size, uint64_t pgsize,
+				  unsigned char __user *bitmap)
+{
+	struct vfio_dma *dma;
+	unsigned long pgshift = __ffs(pgsize);
+	unsigned int npages, bitmap_size;
+
+	dma = vfio_find_dma(iommu, iova, 1);
+
+	if (!dma)
+		return -EINVAL;
+
+	if (dma->iova != iova || dma->size != size)
+		return -EINVAL;
+
+	npages = dma->size >> pgshift;
+	bitmap_size = dirty_bitmap_bytes(npages);
+
+	/* mark all pages dirty if all pages are pinned and mapped. */
+	if (dma->iommu_mapped)
+		bitmap_set(dma->bitmap, 0, npages);
+
+	if (dma->bitmap) {
+		if (copy_to_user((void __user *)bitmap, dma->bitmap,
+				 bitmap_size))
+			return -EFAULT;
+
+		memset(dma->bitmap, 0, bitmap_size);
+	}
+	return 0;
+}
+
+static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
+{
+	long bsize;
+
+	if (!bitmap_size || bitmap_size > SIZE_MAX)
+		return -EINVAL;
+
+	bsize = dirty_bitmap_bytes(npages);
+
+	if (bitmap_size < bsize)
+		return -EINVAL;
+
+	return 0;
+}
+
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			     struct vfio_iommu_type1_dma_unmap *unmap)
 {
@@ -2277,6 +2422,92 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 		return copy_to_user((void __user *)arg, &unmap, minsz) ?
 			-EFAULT : 0;
+	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
+		struct vfio_iommu_type1_dirty_bitmap dirty;
+		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
+				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
+		int ret;
+
+		if (!iommu->v2)
+			return -EACCES;
+
+		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
+				    flags);
+
+		if (copy_from_user(&dirty, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (dirty.argsz < minsz || dirty.flags & ~mask)
+			return -EINVAL;
+
+		/* only one flag should be set at a time */
+		if (__ffs(dirty.flags) != __fls(dirty.flags))
+			return -EINVAL;
+
+		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
+			unsigned long iommu_pgsize =
+					1 << __ffs(vfio_pgsize_bitmap(iommu));
+
+			mutex_lock(&iommu->lock);
+			ret = vfio_dma_all_bitmap_alloc(iommu, iommu_pgsize);
+			if (!ret)
+				iommu->dirty_page_tracking = true;
+			mutex_unlock(&iommu->lock);
+			return ret;
+		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
+			mutex_lock(&iommu->lock);
+			if (iommu->dirty_page_tracking) {
+				iommu->dirty_page_tracking = false;
+				vfio_dma_all_bitmap_free(iommu);
+			}
+			mutex_unlock(&iommu->lock);
+			return 0;
+		} else if (dirty.flags &
+				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
+			struct vfio_iommu_type1_dirty_bitmap_get range;
+			unsigned long pgshift;
+			size_t data_size = dirty.argsz - minsz;
+			uint64_t iommu_pgsize =
+					 1 << __ffs(vfio_pgsize_bitmap(iommu));
+
+			if (!data_size || data_size < sizeof(range))
+				return -EINVAL;
+
+			if (copy_from_user(&range, (void __user *)(arg + minsz),
+					   sizeof(range)))
+				return -EFAULT;
+
+			// allow only min supported pgsize
+			if (range.pgsize != iommu_pgsize)
+				return -EINVAL;
+			if (range.iova & (iommu_pgsize - 1))
+				return -EINVAL;
+			if (!range.size || range.size & (iommu_pgsize - 1))
+				return -EINVAL;
+			if (range.iova + range.size < range.iova)
+				return -EINVAL;
+			if (!access_ok((void __user *)range.bitmap,
+				       range.bitmap_size))
+				return -EINVAL;
+
+			pgshift = __ffs(range.pgsize);
+			ret = verify_bitmap_size(range.size >> pgshift,
+						 range.bitmap_size);
+			if (ret)
+				return ret;
+
+			mutex_lock(&iommu->lock);
+			if (iommu->dirty_page_tracking)
+				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
+					 range.size, range.pgsize,
+					 (unsigned char __user *)range.bitmap);
+			else
+				ret = -EINVAL;
+			mutex_unlock(&iommu->lock);
+
+			return ret;
+		}
 	}
 
 	return -ENOTTY;
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v13 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-12 17:53 [PATCH v13 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
                   ` (3 preceding siblings ...)
  2020-03-12 17:53 ` [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to " Kirti Wankhede
@ 2020-03-12 17:53 ` Kirti Wankhede
  2020-03-13 18:45   ` Alex Williamson
  2020-03-12 17:53 ` [PATCH v13 Kernel 6/7] vfio iommu: Adds flag to indicate dirty pages tracking capability support Kirti Wankhede
  2020-03-12 17:53 ` [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
  6 siblings, 1 reply; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-12 17:53 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Pages pinned by the external interface for a requested IO virtual address
range might get unpinned and unmapped while migration is active and the
device is still running, that is, in the pre-copy phase while the guest
driver can still access those pages. The host device can write to these
pages while they are mapped. Such pages should be marked dirty so that
after migration the guest driver is still able to complete the operation.

To get the bitmap during unmap, the user should set the flag
VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP. The bitmap memory should be allocated
and zeroed by the user space application, and the bitmap size and page size
should be set by the user application.
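
A minimal user-space sketch of an unmap request that also collects the
dirty bitmap for the range being unmapped; the function name and the
caller-supplied container_fd/iova/size/pgsize/bitmap values are
illustrative assumptions of this sketch, and error handling is omitted:

  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  static int unmap_get_dirty(int container_fd, __u64 iova, __u64 size,
                             __u64 pgsize, void *bitmap, __u64 bitmap_bytes)
  {
          struct vfio_iommu_type1_dma_unmap unmap = {
                  .argsz = sizeof(unmap),
                  .flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP,
                  .iova = iova,
                  .size = size,
                  .bitmap_pgsize = pgsize,     /* minimum IOMMU page size */
                  .bitmap_size = bitmap_bytes, /* 'bitmap' is zeroed by the caller */
                  .bitmap = bitmap,
          };

          /* dirty bits for the unmapped range are returned in 'bitmap' */
          return ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap);
  }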

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 63 +++++++++++++++++++++++++++++++++++++----
 include/uapi/linux/vfio.h       | 12 ++++++++
 2 files changed, 70 insertions(+), 5 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 435e84269a28..4037b82c6db0 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -976,7 +976,8 @@ static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
 }
 
 static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
-			     struct vfio_iommu_type1_dma_unmap *unmap)
+			     struct vfio_iommu_type1_dma_unmap *unmap,
+			     unsigned long *bitmap)
 {
 	uint64_t mask;
 	struct vfio_dma *dma, *dma_last = NULL;
@@ -1027,6 +1028,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 	 * will be returned if these conditions are not met.  The v2 interface
 	 * will only return success and a size of zero if there were no
 	 * mappings within the range.
+	 *
+	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
+	 * must be for single mapping. Multiple mappings with this flag set is
+	 * not supported.
 	 */
 	if (iommu->v2) {
 		dma = vfio_find_dma(iommu, unmap->iova, 1);
@@ -1034,6 +1039,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 			ret = -EINVAL;
 			goto unlock;
 		}
+
+		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
+		    (dma->iova != unmap->iova || dma->size != unmap->size)) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+
 		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
 		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
 			ret = -EINVAL;
@@ -1051,6 +1063,11 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		if (dma->task->mm != current->mm)
 			break;
 
+		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
+			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
+					       unmap->bitmap_pgsize,
+					      (unsigned char __user *) bitmap);
+
 		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
 			struct vfio_iommu_type1_dma_unmap nb_unmap;
 
@@ -1076,6 +1093,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 						    &nb_unmap);
 			goto again;
 		}
+
 		unmapped += dma->size;
 		vfio_remove_dma(iommu, dma);
 	}
@@ -2406,22 +2424,57 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
 		struct vfio_iommu_type1_dma_unmap unmap;
-		long ret;
+		unsigned long *bitmap = NULL;
+		long ret, bsize;
 
 		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
 
 		if (copy_from_user(&unmap, (void __user *)arg, minsz))
 			return -EFAULT;
 
-		if (unmap.argsz < minsz || unmap.flags)
+		if (unmap.argsz < minsz ||
+		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
 			return -EINVAL;
 
-		ret = vfio_dma_do_unmap(iommu, &unmap);
+		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
+			unsigned long pgshift;
+			uint64_t iommu_pgsizes = vfio_pgsize_bitmap(iommu);
+			uint64_t iommu_pgmask =
+				 ((uint64_t)1 << __ffs(iommu_pgsizes)) - 1;
+
+			if (copy_from_user(&unmap, (void __user *)arg,
+					   sizeof(unmap)))
+				return -EFAULT;
+
+			pgshift = __ffs(unmap.bitmap_pgsize);
+
+			if (((unmap.bitmap_pgsize - 1) & iommu_pgmask) !=
+			     (unmap.bitmap_pgsize - 1))
+				return -EINVAL;
+
+			if ((unmap.bitmap_pgsize & iommu_pgsizes) !=
+			     unmap.bitmap_pgsize)
+				return -EINVAL;
+			if (unmap.iova + unmap.size < unmap.iova)
+				return -EINVAL;
+			if (!access_ok((void __user *)unmap.bitmap,
+				       unmap.bitmap_size))
+				return -EINVAL;
+
+			bsize = verify_bitmap_size(unmap.size >> pgshift,
+						   unmap.bitmap_size);
+			if (bsize < 0)
+				return bsize;
+			bitmap = unmap.bitmap;
+		}
+
+		ret = vfio_dma_do_unmap(iommu, &unmap, bitmap);
 		if (ret)
 			return ret;
 
-		return copy_to_user((void __user *)arg, &unmap, minsz) ?
+		ret = copy_to_user((void __user *)arg, &unmap, minsz) ?
 			-EFAULT : 0;
+		return ret;
 	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
 		struct vfio_iommu_type1_dirty_bitmap dirty;
 		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 02d555cc7036..12b2094f887e 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1004,12 +1004,24 @@ struct vfio_iommu_type1_dma_map {
  * field.  No guarantee is made to the user that arbitrary unmaps of iova
  * or size different from those used in the original mapping call will
  * succeed.
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get the dirty bitmap
+ * before unmapping IO virtual addresses. When this flag is set, the user
+ * should allocate memory for the bitmap, zero the bitmap memory and set the
+ * size of the allocated memory in the bitmap_size field. One bit in the bitmap
+ * represents one page of the user-provided page size in 'bitmap_pgsize',
+ * consecutively starting from the iova offset. A set bit indicates that the
+ * page at that offset from iova is dirty. The bitmap of pages in the range of
+ * the unmapped size is returned in bitmap.
  */
 struct vfio_iommu_type1_dma_unmap {
 	__u32	argsz;
 	__u32	flags;
+#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
 	__u64	iova;				/* IO virtual address */
 	__u64	size;				/* Size of mapping (bytes) */
+	__u64        bitmap_pgsize;		/* page size for bitmap */
+	__u64        bitmap_size;               /* in bytes */
+	void __user *bitmap;                    /* one bit per page */
 };
 
 #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v13 Kernel 6/7] vfio iommu: Adds flag to indicate dirty pages tracking capability support
  2020-03-12 17:53 [PATCH v13 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
                   ` (4 preceding siblings ...)
  2020-03-12 17:53 ` [PATCH v13 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
@ 2020-03-12 17:53 ` Kirti Wankhede
  2020-03-12 17:53 ` [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
  6 siblings, 0 replies; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-12 17:53 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Flag VFIO_IOMMU_INFO_DIRTY_PGS in VFIO_IOMMU_GET_INFO indicates that the
driver supports dirty pages tracking.
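
A minimal user-space sketch of probing for this capability; the function
name and container_fd are illustrative assumptions of this sketch:

  #include <stdbool.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  static bool dirty_tracking_supported(int container_fd)
  {
          struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };

          if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, &info))
                  return false;

          /* dirty tracking ioctl and dirty bitmap on unmap are usable */
          return info.flags & VFIO_IOMMU_INFO_DIRTY_PGS;
  }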

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio_iommu_type1.c | 3 ++-
 include/uapi/linux/vfio.h       | 5 +++--
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 4037b82c6db0..4f1f116feabc 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2377,7 +2377,8 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 			info.cap_offset = 0; /* output, no-recopy necessary */
 		}
 
-		info.flags = VFIO_IOMMU_INFO_PGSIZES;
+		info.flags = VFIO_IOMMU_INFO_PGSIZES |
+			     VFIO_IOMMU_INFO_DIRTY_PGS;
 
 		info.iova_pgsizes = vfio_pgsize_bitmap(iommu);
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 12b2094f887e..217eaeec1eba 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -947,8 +947,9 @@ struct vfio_device_ioeventfd {
 struct vfio_iommu_type1_info {
 	__u32	argsz;
 	__u32	flags;
-#define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
-#define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
+#define VFIO_IOMMU_INFO_PGSIZES   (1 << 0) /* supported page sizes info */
+#define VFIO_IOMMU_INFO_CAPS      (1 << 1) /* Info supports caps */
+#define VFIO_IOMMU_INFO_DIRTY_PGS (1 << 2) /* supports dirty page tracking */
 	__u64	iova_pgsizes;	/* Bitmap of supported page sizes */
 	__u32   cap_offset;	/* Offset within info struct of first cap */
 };
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-12 17:53 [PATCH v13 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
                   ` (5 preceding siblings ...)
  2020-03-12 17:53 ` [PATCH v13 Kernel 6/7] vfio iommu: Adds flag to indicate dirty pages tracking capability support Kirti Wankhede
@ 2020-03-12 17:53 ` Kirti Wankhede
  2020-03-13 20:49   ` Alex Williamson
  6 siblings, 1 reply; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-12 17:53 UTC (permalink / raw)
  To: alex.williamson, cjia
  Cc: kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm, Kirti Wankhede

Added a check such that only singleton IOMMU groups can pin pages. From the
point when the vendor driver pins any pages, consider the IOMMU group's
dirty page scope to be limited to pinned pages.

To avoid walking the list often, added the flag pinned_page_dirty_scope to
indicate whether the dirty page scope of all vfio_groups, for each
vfio_domain in the domain_list, is limited to pinned pages. This flag is
updated on the first pinned pages request for an IOMMU group and on
attaching/detaching a group.
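
A minimal vendor-driver sketch of the pinning path that triggers the
pinned-pages dirty scope; the function name, the mdev context and the gfn
parameter are illustrative assumptions of this sketch:

  #include <linux/iommu.h>
  #include <linux/mdev.h>
  #include <linux/vfio.h>

  static int example_pin_guest_page(struct mdev_device *mdev, unsigned long gfn)
  {
          unsigned long user_pfn = gfn, phys_pfn;

          /*
           * From the first page successfully pinned through this external API,
           * the IOMMU group's dirty page scope is limited to pinned pages.
           * Returns the number of pages pinned (1 here) or -errno.
           */
          return vfio_pin_pages(mdev_dev(mdev), &user_pfn, 1,
                                IOMMU_READ | IOMMU_WRITE, &phys_pfn);
  }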

Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia <cjia@nvidia.com>
---
 drivers/vfio/vfio.c             |  9 +++++-
 drivers/vfio/vfio_iommu_type1.c | 72 +++++++++++++++++++++++++++++++++++++++--
 include/linux/vfio.h            |  4 ++-
 3 files changed, 80 insertions(+), 5 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index c8482624ca34..79108c1245a5 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -85,6 +85,7 @@ struct vfio_group {
 	atomic_t			opened;
 	wait_queue_head_t		container_q;
 	bool				noiommu;
+	unsigned int			dev_counter;
 	struct kvm			*kvm;
 	struct blocking_notifier_head	notifier;
 };
@@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
 
 	mutex_lock(&group->device_lock);
 	list_add(&device->group_next, &group->device_list);
+	group->dev_counter++;
 	mutex_unlock(&group->device_lock);
 
 	return device;
@@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
 	struct vfio_group *group = device->group;
 
 	list_del(&device->group_next);
+	group->dev_counter--;
 	mutex_unlock(&group->device_lock);
 
 	dev_set_drvdata(device->dev, NULL);
@@ -1895,6 +1898,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	if (!group)
 		return -ENODEV;
 
+	if (group->dev_counter > 1)
+		return -EINVAL;
+
 	ret = vfio_group_add_container_user(group);
 	if (ret)
 		goto err_pin_pages;
@@ -1902,7 +1908,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
-		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
+		ret = driver->ops->pin_pages(container->iommu_data,
+					     group->iommu_group, user_pfn,
 					     npage, prot, phys_pfn);
 	else
 		ret = -ENOTTY;
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 4f1f116feabc..18a284b230c0 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -71,6 +71,7 @@ struct vfio_iommu {
 	bool			v2;
 	bool			nesting;
 	bool			dirty_page_tracking;
+	bool			pinned_page_dirty_scope;
 };
 
 struct vfio_domain {
@@ -98,6 +99,7 @@ struct vfio_group {
 	struct iommu_group	*iommu_group;
 	struct list_head	next;
 	bool			mdev_group;	/* An mdev group */
+	bool			has_pinned_pages;
 };
 
 struct vfio_iova {
@@ -129,6 +131,10 @@ struct vfio_regions {
 static int put_pfn(unsigned long pfn, int prot);
 static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
+					       struct iommu_group *iommu_group);
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -579,11 +585,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
 }
 
 static int vfio_iommu_type1_pin_pages(void *iommu_data,
+				      struct iommu_group *iommu_group,
 				      unsigned long *user_pfn,
 				      int npage, int prot,
 				      unsigned long *phys_pfn)
 {
 	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *group;
 	int i, j, ret;
 	unsigned long remote_vaddr;
 	struct vfio_dma *dma;
@@ -662,8 +670,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 				   (vpfn->iova - dma->iova) >> pgshift, 1);
 		}
 	}
-
 	ret = i;
+
+	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
+	if (!group->has_pinned_pages) {
+		group->has_pinned_pages = true;
+		update_pinned_page_dirty_scope(iommu);
+	}
+
 	goto pin_done;
 
 pin_unwind:
@@ -946,8 +960,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
 	npages = dma->size >> pgshift;
 	bitmap_size = dirty_bitmap_bytes(npages);
 
-	/* mark all pages dirty if all pages are pinned and mapped. */
-	if (dma->iommu_mapped)
+	/*
+	 * mark all pages dirty if any IOMMU capable device is not able
+	 * to report dirty pages and all pages are pinned and mapped.
+	 */
+	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
 		bitmap_set(dma->bitmap, 0, npages);
 
 	if (dma->bitmap) {
@@ -1430,6 +1447,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
 	return NULL;
 }
 
+static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
+					       struct iommu_group *iommu_group)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group = NULL;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		group = find_iommu_group(domain, iommu_group);
+		if (group)
+			return group;
+	}
+
+	if (iommu->external_domain)
+		group = find_iommu_group(iommu->external_domain, iommu_group);
+
+	return group;
+}
+
+static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
+{
+	struct vfio_domain *domain;
+	struct vfio_group *group;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->has_pinned_pages) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	if (iommu->external_domain) {
+		domain = iommu->external_domain;
+		list_for_each_entry(group, &domain->group_list, next) {
+			if (!group->has_pinned_pages) {
+				iommu->pinned_page_dirty_scope = false;
+				return;
+			}
+		}
+	}
+
+	iommu->pinned_page_dirty_scope = true;
+}
+
 static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
 				  phys_addr_t *base)
 {
@@ -1836,6 +1898,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 
 			list_add(&group->next,
 				 &iommu->external_domain->group_list);
+			update_pinned_page_dirty_scope(iommu);
 			mutex_unlock(&iommu->lock);
 
 			return 0;
@@ -1958,6 +2021,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 done:
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
+	update_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 	vfio_iommu_resv_free(&group_resv_regions);
 
@@ -1972,6 +2036,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 out_free:
 	kfree(domain);
 	kfree(group);
+	update_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 	return ret;
 }
@@ -2176,6 +2241,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		vfio_iommu_iova_free(&iova_copy);
 
 detach_group_done:
+	update_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 }
 
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index e42a711a2800..da29802d6276 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
 					struct iommu_group *group);
 	void		(*detach_group)(void *iommu_data,
 					struct iommu_group *group);
-	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
+	int		(*pin_pages)(void *iommu_data,
+				     struct iommu_group *group,
+				     unsigned long *user_pfn,
 				     int npage, int prot,
 				     unsigned long *phys_pfn);
 	int		(*unpin_pages)(void *iommu_data,
-- 
2.7.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to for dirty pages tracking.
  2020-03-12 17:53 ` [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to " Kirti Wankhede
@ 2020-03-13 18:13   ` Alex Williamson
  2020-03-16 18:49     ` Kirti Wankhede
  2020-03-18 12:13   ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 20+ messages in thread
From: Alex Williamson @ 2020-03-13 18:13 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Thu, 12 Mar 2020 23:23:24 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

Subject: s/to //

> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> - Start pinned and unpinned pages tracking while migration is active
> - Stop pinned and unpinned dirty pages tracking. This is also used to
>   stop dirty pages tracking if migration failed or cancelled.

I'm not sure why we need to specify "pinned and unpinned" above; it
works for iommu mapped pages too, and we expect it to later work with
other dirtying mechanisms, like Yan's dma_rw interface.

> - Get dirty pages bitmap. This ioctl returns bitmap of dirty pages, its
>   user space application responsibility to copy content of dirty pages
>   from source to destination during migration.
> 
> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> structure. Bitmap size is calculated considering smallest supported page
> size. Bitmap is allocated when dirty logging is enabled for those
> vfio_dmas whose vpfn list is not empty or whole range is mapped, in
> case of pass-through device.
> 
> Bitmap is populated for already pinned pages when bitmap is allocated for
> a vfio_dma with the smallest supported page size. Update bitmap from
> pinning and unpinning functions. When user application queries bitmap,
> check if requested page size is same as page size used to populated
> bitmap. If it is equal, copy bitmap, but if not equal, return error.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 243 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 237 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index d386461e5d11..435e84269a28 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -70,6 +70,7 @@ struct vfio_iommu {
>  	unsigned int		dma_avail;
>  	bool			v2;
>  	bool			nesting;
> +	bool			dirty_page_tracking;
>  };
>  
>  struct vfio_domain {
> @@ -90,6 +91,7 @@ struct vfio_dma {
>  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>  	struct task_struct	*task;
>  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> +	unsigned long		*bitmap;
>  };
>  
>  struct vfio_group {
> @@ -125,6 +127,7 @@ struct vfio_regions {
>  					(!list_empty(&iommu->domain_list))
>  
>  static int put_pfn(unsigned long pfn, int prot);
> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>  
>  /*
>   * This code handles mapping and unmapping of user data buffers
> @@ -174,6 +177,76 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>  	rb_erase(&old->node, &iommu->dma_list);
>  }
>  
> +static inline unsigned long dirty_bitmap_bytes(unsigned int npages)
> +{
> +	if (!npages)
> +		return 0;
> +
> +	return ALIGN(npages, BITS_PER_LONG) / sizeof(unsigned long);

Yes, but no.  The math works out here on LP64 systems because
sizeof(unsigned long) == BITS_PER_BYTE, but sizeof(unsigned long) is
irrelevant; we want BITS_PER_BYTE.  Also, the UAPI defines the bitmap
as an array of __u64, so rather than BITS_PER_LONG I think we want
BITS_PER_TYPE(u64).
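
For instance, something along these lines (untested sketch):

	static inline size_t dirty_bitmap_bytes(unsigned int npages)
	{
		if (!npages)
			return 0;

		/* one bit per page, rounded up to a whole u64 word */
		return ALIGN(npages, BITS_PER_TYPE(u64)) / BITS_PER_BYTE;
	}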

> +}
> +
> +static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, unsigned long pgsize)
> +{
> +	if (!RB_EMPTY_ROOT(&dma->pfn_list) || dma->iommu_mapped) {

Yan's patch series allows vendor drivers to dirty pages regardless of
either of these.  I think we should do away with this and
unconditionally allocate a bitmap per vfio_dma.
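
i.e. something like (sketch only):

	static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, size_t pgsize)
	{
		/* allocate unconditionally, one bit per pgsize page */
		dma->bitmap = kvzalloc(dirty_bitmap_bytes(dma->size / pgsize),
				       GFP_KERNEL);
		return dma->bitmap ? 0 : -ENOMEM;
	}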

> +		unsigned long npages = dma->size / pgsize;
> +
> +		dma->bitmap = kvzalloc(dirty_bitmap_bytes(npages), GFP_KERNEL);
> +		if (!dma->bitmap)
> +			return -ENOMEM;
> +	}
> +	return 0;
> +}
> +
> +static int vfio_dma_all_bitmap_alloc(struct vfio_iommu *iommu,
> +				     unsigned long pgsize)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +	int ret;
> +
> +	for (; n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +		struct rb_node *n;
> +
> +		ret = vfio_dma_bitmap_alloc(dma, pgsize);
> +		if (ret) {
> +			struct rb_node *p = rb_prev(n);
> +
> +			for (; p; p = rb_prev(p)) {
> +				struct vfio_dma *dma = rb_entry(n,
> +							struct vfio_dma, node);
> +
> +				kfree(dma->bitmap);
> +				dma->bitmap = NULL;
> +			}
> +			return ret;
> +		}
> +
> +		if (!dma->bitmap)
> +			continue;
> +
> +		for (n = rb_first(&dma->pfn_list); n; n = rb_next(n)) {
> +			struct vfio_pfn *vpfn = rb_entry(n, struct vfio_pfn,
> +							 node);
> +
> +			bitmap_set(dma->bitmap,
> +				   (vpfn->iova - dma->iova) / pgsize, 1);
> +		}
> +	}
> +	return 0;
> +}
> +
> +static void vfio_dma_all_bitmap_free(struct vfio_iommu *iommu)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +
> +	for (; n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> +		kfree(dma->bitmap);
> +		dma->bitmap = NULL;
> +	}
> +}
> +
>  /*
>   * Helper Functions for host iova-pfn list
>   */
> @@ -254,12 +327,16 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
>  	return vpfn;
>  }
>  
> -static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
> +static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn,
> +				  bool do_tracking, unsigned long pgsize)
>  {
>  	int ret = 0;
>  
>  	vpfn->ref_count--;
>  	if (!vpfn->ref_count) {
> +		if (do_tracking && dma->bitmap)
> +			bitmap_set(dma->bitmap,
> +				   (vpfn->iova - dma->iova) / pgsize, 1);

This seems wrong, or at best redundant.  The dirty bitmap should always
reflect all outstanding pinned pages.  When ref_count drops to zero we
can stop tracking it, but it should already be set in the bitmap and we
shouldn't need to do anything here.

>  		ret = put_pfn(vpfn->pfn, dma->prot);
>  		vfio_remove_from_pfn_list(dma, vpfn);
>  	}
> @@ -484,7 +561,8 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
>  }
>  
>  static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> -				    bool do_accounting)
> +				    bool do_accounting, bool do_tracking,
> +				    unsigned long pgsize)
>  {
>  	int unlocked;
>  	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
> @@ -492,7 +570,7 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
>  	if (!vpfn)
>  		return 0;
>  
> -	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn);
> +	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn, do_tracking, pgsize);

This is the only use of do_tracking, which as discussed above shouldn't
need to be done in this function, thus we shouldn't need to pass pgsize
either.

>  
>  	if (do_accounting)
>  		vfio_lock_acct(dma, -unlocked, true);
> @@ -563,9 +641,26 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  
>  		ret = vfio_add_to_pfn_list(dma, iova, phys_pfn[i]);
>  		if (ret) {
> -			vfio_unpin_page_external(dma, iova, do_accounting);
> +			vfio_unpin_page_external(dma, iova, do_accounting,
> +						 false, 0);

I might use PAGE_SIZE here rather than 0 just for sanity, but as above
we don't need these extra args.

>  			goto pin_unwind;
>  		}
> +
> +		if (iommu->dirty_page_tracking) {

If we unconditionally allocated the bitmap, we could just test
dma->bitmap here and get rid of the allocation branch below (we'll need
to allocate the bitmap with the vfio_dma if new mappings are created
while dirty tracking is enabled though).

> +			unsigned long pgshift =
> +					 __ffs(vfio_pgsize_bitmap(iommu));

Maybe a follow-up patch could cache this; it can only potentially
change as each new group is added.  We'd probably want to deny adding a
group if the minimum pagesize changed while dirty tracking is enabled.

> +
> +			if (!dma->bitmap) {
> +				ret = vfio_dma_bitmap_alloc(dma, 1 << pgshift);
> +				if (ret) {
> +					vfio_unpin_page_external(dma, iova,
> +						 do_accounting, false, 0);
> +					goto pin_unwind;
> +				}
> +			}
> +			bitmap_set(dma->bitmap,
> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> +		}
>  	}
>  
>  	ret = i;
> @@ -578,7 +673,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  
>  		iova = user_pfn[j] << PAGE_SHIFT;
>  		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
> -		vfio_unpin_page_external(dma, iova, do_accounting);
> +		vfio_unpin_page_external(dma, iova, do_accounting, false, 0);

Unneeded.

>  		phys_pfn[j] = 0;
>  	}
>  pin_done:
> @@ -612,7 +707,8 @@ static int vfio_iommu_type1_unpin_pages(void *iommu_data,
>  		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
>  		if (!dma)
>  			goto unpin_exit;
> -		vfio_unpin_page_external(dma, iova, do_accounting);
> +		vfio_unpin_page_external(dma, iova, do_accounting,
> +					 iommu->dirty_page_tracking, PAGE_SIZE);

Same.

>  	}
>  
>  unpin_exit:
> @@ -800,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>  	vfio_unmap_unpin(iommu, dma, true);
>  	vfio_unlink_dma(iommu, dma);
>  	put_task_struct(dma->task);
> +	kfree(dma->bitmap);
>  	kfree(dma);
>  	iommu->dma_avail++;
>  }
> @@ -830,6 +927,54 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>  	return bitmap;
>  }
>  
> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> +				  size_t size, uint64_t pgsize,
> +				  unsigned char __user *bitmap)
> +{
> +	struct vfio_dma *dma;
> +	unsigned long pgshift = __ffs(pgsize);
> +	unsigned int npages, bitmap_size;
> +
> +	dma = vfio_find_dma(iommu, iova, 1);
> +
> +	if (!dma)
> +		return -EINVAL;
> +
> +	if (dma->iova != iova || dma->size != size)
> +		return -EINVAL;
> +
> +	npages = dma->size >> pgshift;
> +	bitmap_size = dirty_bitmap_bytes(npages);
> +
> +	/* mark all pages dirty if all pages are pinned and mapped. */
> +	if (dma->iommu_mapped)
> +		bitmap_set(dma->bitmap, 0, npages);

Oops, we only test if dma->bitmap exists below.  Always allocating the
bitmap would resolve this too.

> +
> +	if (dma->bitmap) {
> +		if (copy_to_user((void __user *)bitmap, dma->bitmap,
> +				 bitmap_size))
> +			return -EFAULT;
> +
> +		memset(dma->bitmap, 0, bitmap_size);

Nope, we need to reprocess the bitmap to set all pinned pages as dirty
again.  Those need to be reported _every_ time the user asks for the
bitmap overlapping them.  We consider them to be continuously dirtied.
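
Roughly (sketch, where n is a struct rb_node *, reusing the pfn_list
walk we already do when the bitmap is first populated):

	memset(dma->bitmap, 0, bitmap_size);

	for (n = rb_first(&dma->pfn_list); n; n = rb_next(n)) {
		struct vfio_pfn *vpfn = rb_entry(n, struct vfio_pfn, node);

		bitmap_set(dma->bitmap,
			   (vpfn->iova - dma->iova) >> pgshift, 1);
	}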

> +	}
> +	return 0;
> +}
> +
> +static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
> +{
> +	long bsize;
> +
> +	if (!bitmap_size || bitmap_size > SIZE_MAX)
> +		return -EINVAL;

bitmap_size should be a size_t if we're going to compare it to
SIZE_MAX.  bsize and the return of dirty_bitmap_bytes() should probably
also be a size_t.
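
i.e. (sketch; note the > SIZE_MAX test becomes a no-op once the type is
size_t):

	static int verify_bitmap_size(size_t npages, size_t bitmap_size)
	{
		size_t bsize = dirty_bitmap_bytes(npages);

		if (!bitmap_size || bitmap_size < bsize)
			return -EINVAL;

		return 0;
	}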

> +
> +	bsize = dirty_bitmap_bytes(npages);
> +
> +	if (bitmap_size < bsize)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
>  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  			     struct vfio_iommu_type1_dma_unmap *unmap)
>  {
> @@ -2277,6 +2422,92 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>  
>  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>  			-EFAULT : 0;
> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> +		struct vfio_iommu_type1_dirty_bitmap dirty;
> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> +		int ret;
> +
> +		if (!iommu->v2)
> +			return -EACCES;
> +
> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> +				    flags);
> +
> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> +			return -EINVAL;
> +
> +		/* only one flag should be set at a time */
> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> +			return -EINVAL;
> +
> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> +			unsigned long iommu_pgsize =
> +					1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			mutex_lock(&iommu->lock);
> +			ret = vfio_dma_all_bitmap_alloc(iommu, iommu_pgsize);
> +			if (!ret)
> +				iommu->dirty_page_tracking = true;
> +			mutex_unlock(&iommu->lock);
> +			return ret;

If a user called this more than once we'd re-allocate all the bitmaps
and leak the previous ones.  Let's avoid that by testing
dirty_page_tracking before we actually take any action.  Do we want to
return error or success though if dirty tracking is already enabled?
I'm inclined to say success because we've provided no means for the
user to check the status.
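
e.g. (sketch):

	int ret = 0;

	mutex_lock(&iommu->lock);
	if (!iommu->dirty_page_tracking) {
		ret = vfio_dma_all_bitmap_alloc(iommu, iommu_pgsize);
		if (!ret)
			iommu->dirty_page_tracking = true;
	}
	mutex_unlock(&iommu->lock);
	return ret;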

> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> +			mutex_lock(&iommu->lock);
> +			if (iommu->dirty_page_tracking) {
> +				iommu->dirty_page_tracking = false;
> +				vfio_dma_all_bitmap_free(iommu);
> +			}
> +			mutex_unlock(&iommu->lock);
> +			return 0;

This one essentially does what I'm suggesting above already.  Repeated
calls take no additional action and don't return an error.

> +		} else if (dirty.flags &
> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> +			struct vfio_iommu_type1_dirty_bitmap_get range;
> +			unsigned long pgshift;
> +			size_t data_size = dirty.argsz - minsz;
> +			uint64_t iommu_pgsize =
> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			if (!data_size || data_size < sizeof(range))
> +				return -EINVAL;
> +
> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> +					   sizeof(range)))
> +				return -EFAULT;
> +
> +			// allow only min supported pgsize

Proper comment style please.

> +			if (range.pgsize != iommu_pgsize)
> +				return -EINVAL;
> +			if (range.iova & (iommu_pgsize - 1))
> +				return -EINVAL;
> +			if (!range.size || range.size & (iommu_pgsize - 1))
> +				return -EINVAL;
> +			if (range.iova + range.size < range.iova)
> +				return -EINVAL;
> +			if (!access_ok((void __user *)range.bitmap,
> +				       range.bitmap_size))
> +				return -EINVAL;
> +
> +			pgshift = __ffs(range.pgsize);
> +			ret = verify_bitmap_size(range.size >> pgshift,
> +						 range.bitmap_size);
> +			if (ret)
> +				return ret;
> +
> +			mutex_lock(&iommu->lock);
> +			if (iommu->dirty_page_tracking)
> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> +					 range.size, range.pgsize,
> +					 (unsigned char __user *)range.bitmap);
> +			else
> +				ret = -EINVAL;
> +			mutex_unlock(&iommu->lock);
> +
> +			return ret;
> +		}
>  	}
>  
>  	return -ENOTTY;


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v13 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-12 17:53 ` [PATCH v13 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
@ 2020-03-13 18:45   ` Alex Williamson
  2020-03-17 18:28     ` Kirti Wankhede
  0 siblings, 1 reply; 20+ messages in thread
From: Alex Williamson @ 2020-03-13 18:45 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Thu, 12 Mar 2020 23:23:25 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Pages, pinned by external interface for requested IO virtual address
> range,  might get unpinned  and unmapped while migration is active and

"DMA mapped pages, including those pinned by mdev vendor drivers, might
get..."

> device is still running, that is, in pre-copy phase while guest driver

"...running.  For example, in pre-copy..."

> still could access those pages. Host device can write to these pages while
> those were mapped.

"...those pages, host device or vendor driver can dirty these mapped
pages."

> Such pages should be marked dirty so that after
> migration guest driver should still be able to complete the operation.

Complete what operation?  We need to report these dirty pages in order
to maintain memory consistency for a user making use of dirty page
tracking.

> To get bitmap during unmap, user should set flag
> VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
> zeroed by user space application. Bitmap size and page size should be set
> by user application.

It seems like zeroed pages are no longer strictly necessary now that we
require requests to match existing mappings, right?
 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 63 +++++++++++++++++++++++++++++++++++++----
>  include/uapi/linux/vfio.h       | 12 ++++++++
>  2 files changed, 70 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 435e84269a28..4037b82c6db0 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -976,7 +976,8 @@ static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
>  }
>  
>  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> -			     struct vfio_iommu_type1_dma_unmap *unmap)
> +			     struct vfio_iommu_type1_dma_unmap *unmap,
> +			     unsigned long *bitmap)
>  {
>  	uint64_t mask;
>  	struct vfio_dma *dma, *dma_last = NULL;
> @@ -1027,6 +1028,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  	 * will be returned if these conditions are not met.  The v2 interface
>  	 * will only return success and a size of zero if there were no
>  	 * mappings within the range.
> +	 *
> +	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
> +	 * must be for single mapping. Multiple mappings with this flag set is
> +	 * not supported.
>  	 */
>  	if (iommu->v2) {
>  		dma = vfio_find_dma(iommu, unmap->iova, 1);
> @@ -1034,6 +1039,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  			ret = -EINVAL;
>  			goto unlock;
>  		}
> +
> +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
> +		    (dma->iova != unmap->iova || dma->size != unmap->size)) {
> +			ret = -EINVAL;
> +			goto unlock;
> +		}
> +
>  		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
>  		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
>  			ret = -EINVAL;
> @@ -1051,6 +1063,11 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  		if (dma->task->mm != current->mm)
>  			break;
>  
> +		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
> +			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
> +					       unmap->bitmap_pgsize,
> +					      (unsigned char __user *) bitmap);
> +
>  		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
>  			struct vfio_iommu_type1_dma_unmap nb_unmap;
>  
> @@ -1076,6 +1093,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  						    &nb_unmap);
>  			goto again;
>  		}
> +
>  		unmapped += dma->size;
>  		vfio_remove_dma(iommu, dma);
>  	}

Spurious white space.

> @@ -2406,22 +2424,57 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>  
>  	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
>  		struct vfio_iommu_type1_dma_unmap unmap;
> -		long ret;
> +		unsigned long *bitmap = NULL;

Shouldn't this have a __user attribute?  Also long doesn't seem the
right type. void would be ok here.
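
i.e. (sketch):

	void __user *bitmap = NULL;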

> +		long ret, bsize;
>  
>  		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
>  
>  		if (copy_from_user(&unmap, (void __user *)arg, minsz))
>  			return -EFAULT;
>  
> -		if (unmap.argsz < minsz || unmap.flags)
> +		if (unmap.argsz < minsz ||
> +		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
>  			return -EINVAL;
>  
> -		ret = vfio_dma_do_unmap(iommu, &unmap);
> +		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
> +			unsigned long pgshift;
> +			uint64_t iommu_pgsizes = vfio_pgsize_bitmap(iommu);
> +			uint64_t iommu_pgmask =
> +				 ((uint64_t)1 << __ffs(iommu_pgsizes)) - 1;
> +

Need to test that unmap.argsz includes this.
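
For example (sketch):

	if (unmap.argsz < offsetofend(struct vfio_iommu_type1_dma_unmap,
				      bitmap))
		return -EINVAL;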

> +			if (copy_from_user(&unmap, (void __user *)arg,
> +					   sizeof(unmap)))
> +				return -EFAULT;
> +
> +			pgshift = __ffs(unmap.bitmap_pgsize);
> +
> +			if (((unmap.bitmap_pgsize - 1) & iommu_pgmask) !=
> +			     (unmap.bitmap_pgsize - 1))
> +				return -EINVAL;
> +
> +			if ((unmap.bitmap_pgsize & iommu_pgsizes) !=
> +			     unmap.bitmap_pgsize)
> +				return -EINVAL;
> +			if (unmap.iova + unmap.size < unmap.iova)
> +				return -EINVAL;
> +			if (!access_ok((void __user *)unmap.bitmap,
> +				       unmap.bitmap_size))
> +				return -EINVAL;

These tests should be identical to the previous patch.

> +
> +			bsize = verify_bitmap_size(unmap.size >> pgshift,
> +						   unmap.bitmap_size);
> +			if (bsize < 0)
> +				return bsize;
> +			bitmap = unmap.bitmap;
> +		}
> +
> +		ret = vfio_dma_do_unmap(iommu, &unmap, bitmap);
>  		if (ret)
>  			return ret;
>  
> -		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> +		ret = copy_to_user((void __user *)arg, &unmap, minsz) ?
>  			-EFAULT : 0;
> +		return ret;

Why?  Leftover debugging?

>  	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
>  		struct vfio_iommu_type1_dirty_bitmap dirty;
>  		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 02d555cc7036..12b2094f887e 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -1004,12 +1004,24 @@ struct vfio_iommu_type1_dma_map {
>   * field.  No guarantee is made to the user that arbitrary unmaps of iova
>   * or size different from those used in the original mapping call will
>   * succeed.
> + * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap
> + * before unmapping IO virtual addresses. When this flag is set, user should
> + * allocate memory to get bitmap, clear the bitmap memory by setting zero and
> + * should set size of allocated memory in bitmap_size field. One bit in bitmap
> + * represents per page , page of user provided page size in 'bitmap_pgsize',
> + * consecutively starting from iova offset. Bit set indicates page at that
> + * offset from iova is dirty. Bitmap of pages in the range of unmapped size is
> + * returned in bitmap.
>   */
>  struct vfio_iommu_type1_dma_unmap {
>  	__u32	argsz;
>  	__u32	flags;
> +#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
>  	__u64	iova;				/* IO virtual address */
>  	__u64	size;				/* Size of mapping (bytes) */
> +	__u64        bitmap_pgsize;		/* page size for bitmap */
> +	__u64        bitmap_size;               /* in bytes */
> +	void __user *bitmap;                    /* one bit per page */

This suggests to me that we should further split struct
vfio_iommu_type1_dirty_bitmap_get so that we can use the same
sub-structure here.  For example:

struct vfio_bitmap {
	__u64 pgsize;
	__u64 size;
	__u64 __user *data;
};

Note we still have a void* rather than __u64* in original above.

Also, why wouldn't we take the same data[] approach as the previous
patch, defining this as the data when the GET_DIRTY_BITMAP flag is set?

Previous patch would be updated to something like:

struct vfio_iommu_type1_dirty_bitmap_get {
	__u64 iova;
	__u64 size;
	struct vfio_bitmap bitmap;
}; 
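
and the unmap struct in this patch could then follow the same pattern,
e.g. (sketch):

struct vfio_iommu_type1_dma_unmap {
	__u32	argsz;
	__u32	flags;
#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
	__u64	iova;				/* IO virtual address */
	__u64	size;				/* Size of mapping (bytes) */
	__u8	data[];	/* struct vfio_bitmap when GET_DIRTY_BITMAP is set */
};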

>  };
>  
>  #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-12 17:53 ` [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
@ 2020-03-13 20:49   ` Alex Williamson
  2020-03-17 18:28     ` Kirti Wankhede
  0 siblings, 1 reply; 20+ messages in thread
From: Alex Williamson @ 2020-03-13 20:49 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Thu, 12 Mar 2020 23:23:27 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Added a check such that only singleton IOMMU groups can pin pages.
> From the point when vendor driver pins any pages, consider IOMMU group
> dirty page scope to be limited to pinned pages.
> 
> To optimize to avoid walking list often, added flag
> pinned_page_dirty_scope to indicate if all of the vfio_groups for each
> vfio_domain in the domain_list dirty page scope is limited to pinned
> pages. This flag is updated on first pinned pages request for that IOMMU
> group and on attaching/detaching group.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio.c             |  9 +++++-
>  drivers/vfio/vfio_iommu_type1.c | 72 +++++++++++++++++++++++++++++++++++++++--
>  include/linux/vfio.h            |  4 ++-
>  3 files changed, 80 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index c8482624ca34..79108c1245a5 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -85,6 +85,7 @@ struct vfio_group {
>  	atomic_t			opened;
>  	wait_queue_head_t		container_q;
>  	bool				noiommu;
> +	unsigned int			dev_counter;
>  	struct kvm			*kvm;
>  	struct blocking_notifier_head	notifier;
>  };
> @@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
>  
>  	mutex_lock(&group->device_lock);
>  	list_add(&device->group_next, &group->device_list);
> +	group->dev_counter++;
>  	mutex_unlock(&group->device_lock);
>  
>  	return device;
> @@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
>  	struct vfio_group *group = device->group;
>  
>  	list_del(&device->group_next);
> +	group->dev_counter--;
>  	mutex_unlock(&group->device_lock);
>  
>  	dev_set_drvdata(device->dev, NULL);
> @@ -1895,6 +1898,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>  	if (!group)
>  		return -ENODEV;
>  
> +	if (group->dev_counter > 1)
> +		return -EINVAL;
> +
>  	ret = vfio_group_add_container_user(group);
>  	if (ret)
>  		goto err_pin_pages;
> @@ -1902,7 +1908,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>  	container = group->container;
>  	driver = container->iommu_driver;
>  	if (likely(driver && driver->ops->pin_pages))
> -		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
> +		ret = driver->ops->pin_pages(container->iommu_data,
> +					     group->iommu_group, user_pfn,
>  					     npage, prot, phys_pfn);
>  	else
>  		ret = -ENOTTY;
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 4f1f116feabc..18a284b230c0 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -71,6 +71,7 @@ struct vfio_iommu {
>  	bool			v2;
>  	bool			nesting;
>  	bool			dirty_page_tracking;
> +	bool			pinned_page_dirty_scope;
>  };
>  
>  struct vfio_domain {
> @@ -98,6 +99,7 @@ struct vfio_group {
>  	struct iommu_group	*iommu_group;
>  	struct list_head	next;
>  	bool			mdev_group;	/* An mdev group */
> +	bool			has_pinned_pages;

I'm afraid over time this name will be confusing; should we simply
call it pinned_page_dirty_scope per vfio_group as well?  We might have
to adapt this over time as we get new ways to dirty pages, but each
group voting towards the same value being set on the vfio_iommu object
seems like a good starting point.

>  };
>  
>  struct vfio_iova {
> @@ -129,6 +131,10 @@ struct vfio_regions {
>  static int put_pfn(unsigned long pfn, int prot);
>  static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>  
> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> +					       struct iommu_group *iommu_group);
> +
> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
>  /*
>   * This code handles mapping and unmapping of user data buffers
>   * into DMA'ble space using the IOMMU
> @@ -579,11 +585,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
>  }
>  
>  static int vfio_iommu_type1_pin_pages(void *iommu_data,
> +				      struct iommu_group *iommu_group,
>  				      unsigned long *user_pfn,
>  				      int npage, int prot,
>  				      unsigned long *phys_pfn)
>  {
>  	struct vfio_iommu *iommu = iommu_data;
> +	struct vfio_group *group;
>  	int i, j, ret;
>  	unsigned long remote_vaddr;
>  	struct vfio_dma *dma;
> @@ -662,8 +670,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  				   (vpfn->iova - dma->iova) >> pgshift, 1);
>  		}
>  	}
> -
>  	ret = i;
> +
> +	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
> +	if (!group->has_pinned_pages) {
> +		group->has_pinned_pages = true;
> +		update_pinned_page_dirty_scope(iommu);
> +	}
> +
>  	goto pin_done;
>  
>  pin_unwind:
> @@ -946,8 +960,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
>  	npages = dma->size >> pgshift;
>  	bitmap_size = dirty_bitmap_bytes(npages);
>  
> -	/* mark all pages dirty if all pages are pinned and mapped. */
> -	if (dma->iommu_mapped)
> +	/*
> +	 * mark all pages dirty if any IOMMU capable device is not able
> +	 * to report dirty pages and all pages are pinned and mapped.
> +	 */
> +	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
>  		bitmap_set(dma->bitmap, 0, npages);
>  
>  	if (dma->bitmap) {
> @@ -1430,6 +1447,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
>  	return NULL;
>  }
>  
> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> +					       struct iommu_group *iommu_group)
> +{
> +	struct vfio_domain *domain;
> +	struct vfio_group *group = NULL;
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> +		group = find_iommu_group(domain, iommu_group);
> +		if (group)
> +			return group;
> +	}
> +
> +	if (iommu->external_domain)
> +		group = find_iommu_group(iommu->external_domain, iommu_group);
> +
> +	return group;
> +}
> +
> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
> +{
> +	struct vfio_domain *domain;
> +	struct vfio_group *group;
> +
> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> +		list_for_each_entry(group, &domain->group_list, next) {
> +			if (!group->has_pinned_pages) {
> +				iommu->pinned_page_dirty_scope = false;
> +				return;
> +			}
> +		}
> +	}
> +
> +	if (iommu->external_domain) {
> +		domain = iommu->external_domain;
> +		list_for_each_entry(group, &domain->group_list, next) {
> +			if (!group->has_pinned_pages) {
> +				iommu->pinned_page_dirty_scope = false;
> +				return;
> +			}
> +		}
> +	}
> +
> +	iommu->pinned_page_dirty_scope = true;
> +}
> +
>  static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
>  				  phys_addr_t *base)
>  {
> @@ -1836,6 +1898,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  
>  			list_add(&group->next,
>  				 &iommu->external_domain->group_list);
> +			update_pinned_page_dirty_scope(iommu);
>  			mutex_unlock(&iommu->lock);
>  
>  			return 0;
> @@ -1958,6 +2021,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  done:
>  	/* Delete the old one and insert new iova list */
>  	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
> +	update_pinned_page_dirty_scope(iommu);
>  	mutex_unlock(&iommu->lock);
>  	vfio_iommu_resv_free(&group_resv_regions);
>  

At this point we've added an iommu backed group that can't possibly
have pages pinned on behalf of this group yet, can't we just set
iommu->pinned_page_dirty_scope = false?

In the previous case, aren't we adding a non-iommu backed group, so
should we presume the scope is pinned pages even before we have any?
We could almost forego the iommu scope update, but it could be the
first group added if we're going to preemptively assume the scope of
the group.

> @@ -1972,6 +2036,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  out_free:
>  	kfree(domain);
>  	kfree(group);
> +	update_pinned_page_dirty_scope(iommu);

This one looks like paranoia given how late we update when the group is
added.

>  	mutex_unlock(&iommu->lock);
>  	return ret;
>  }
> @@ -2176,6 +2241,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  		vfio_iommu_iova_free(&iova_copy);
>  
>  detach_group_done:
> +	update_pinned_page_dirty_scope(iommu);

We only need to do this if the group we're removing does not have
pinned page dirty scope, right?  I think we have all the info here to
make that optimization.
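
e.g. (sketch, noting the departing group's state before it goes away):

	bool update_dirty_scope = false;
	...
	group = find_iommu_group(domain, iommu_group);
	if (group)
		update_dirty_scope = !group->has_pinned_pages;
	...
detach_group_done:
	if (update_dirty_scope)
		update_pinned_page_dirty_scope(iommu);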

>  	mutex_unlock(&iommu->lock);
>  }
>  
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index e42a711a2800..da29802d6276 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
>  					struct iommu_group *group);
>  	void		(*detach_group)(void *iommu_data,
>  					struct iommu_group *group);
> -	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
> +	int		(*pin_pages)(void *iommu_data,
> +				     struct iommu_group *group,
> +				     unsigned long *user_pfn,
>  				     int npage, int prot,
>  				     unsigned long *phys_pfn);
>  	int		(*unpin_pages)(void *iommu_data,


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to for dirty pages tracking.
  2020-03-13 18:13   ` Alex Williamson
@ 2020-03-16 18:49     ` Kirti Wankhede
  2020-03-16 19:25       ` Alex Williamson
  0 siblings, 1 reply; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-16 18:49 UTC (permalink / raw)
  To: Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On 3/13/2020 11:43 PM, Alex Williamson wrote:
> On Thu, 12 Mar 2020 23:23:24 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
> Subject: s/to //
> 
>> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
>> - Start pinned and unpinned pages tracking while migration is active
>> - Stop pinned and unpinned dirty pages tracking. This is also used to
>>    stop dirty pages tracking if migration failed or cancelled.
> 
> I'm not sure why we need to specify "pinned and unpinned" above, it
> works for iommu mapped pages too, and we expect it to later work other
> dirtying mechanisms, like Yan's dma_rw interface.
> 

iommu mapped pages are also pinned.
When there are CPU writes, like Yan's dma_rw, the KVM core takes care
of marking those pages dirty, so we don't need to track those pages
again here, right?


>> - Get dirty pages bitmap. This ioctl returns bitmap of dirty pages, its
>>    user space application responsibility to copy content of dirty pages
>>    from source to destination during migration.
>>
>> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
>> structure. Bitmap size is calculated considering smallest supported page
>> size. Bitmap is allocated when dirty logging is enabled for those
>> vfio_dmas whose vpfn list is not empty or whole range is mapped, in
>> case of pass-through device.
>>
>> Bitmap is populated for already pinned pages when bitmap is allocated for
>> a vfio_dma with the smallest supported page size. Update bitmap from
>> pinning and unpinning functions. When user application queries bitmap,
>> check if requested page size is same as page size used to populated
>> bitmap. If it is equal, copy bitmap, but if not equal, return error.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 243 +++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 237 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index d386461e5d11..435e84269a28 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -70,6 +70,7 @@ struct vfio_iommu {
>>   	unsigned int		dma_avail;
>>   	bool			v2;
>>   	bool			nesting;
>> +	bool			dirty_page_tracking;
>>   };
>>   
>>   struct vfio_domain {
>> @@ -90,6 +91,7 @@ struct vfio_dma {
>>   	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>>   	struct task_struct	*task;
>>   	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
>> +	unsigned long		*bitmap;
>>   };
>>   
>>   struct vfio_group {
>> @@ -125,6 +127,7 @@ struct vfio_regions {
>>   					(!list_empty(&iommu->domain_list))
>>   
>>   static int put_pfn(unsigned long pfn, int prot);
>> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>>   
>>   /*
>>    * This code handles mapping and unmapping of user data buffers
>> @@ -174,6 +177,76 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>>   	rb_erase(&old->node, &iommu->dma_list);
>>   }
>>   
>> +static inline unsigned long dirty_bitmap_bytes(unsigned int npages)
>> +{
>> +	if (!npages)
>> +		return 0;
>> +
>> +	return ALIGN(npages, BITS_PER_LONG) / sizeof(unsigned long);
> 
> Yes, but no.  The math works out here on LP64 systems because
> sizeof(unsigned long) == BITS_PER_BYTE, but sizeof(unsigned long) is
> irrelevant, we wants BITS_PER_BYTE.  Also, the UAPI defines the bitmap
> as an array of __u64, so rather than BITS_PER_LONG I think we want
> BITS_PER_TYPE(u64).
> 

Changing it to a macro:

#define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)


>> +}
>> +
>> +static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, unsigned long pgsize)
>> +{
>> +	if (!RB_EMPTY_ROOT(&dma->pfn_list) || dma->iommu_mapped) {
> 
> Yan's patch series allows vendor drivers to dirty pages regardless of
> either of these.  I think we should do away with this an
> unconditionally allocate a bitmap per vfio_dma.
>

Same as above - CPU writes are tracked by KVM core.

>> +		unsigned long npages = dma->size / pgsize;
>> +
>> +		dma->bitmap = kvzalloc(dirty_bitmap_bytes(npages), GFP_KERNEL);
>> +		if (!dma->bitmap)
>> +			return -ENOMEM;
>> +	}
>> +	return 0;
>> +}
>> +
>> +static int vfio_dma_all_bitmap_alloc(struct vfio_iommu *iommu,
>> +				     unsigned long pgsize)
>> +{
>> +	struct rb_node *n = rb_first(&iommu->dma_list);
>> +	int ret;
>> +
>> +	for (; n; n = rb_next(n)) {
>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> +		struct rb_node *n;
>> +
>> +		ret = vfio_dma_bitmap_alloc(dma, pgsize);
>> +		if (ret) {
>> +			struct rb_node *p = rb_prev(n);
>> +
>> +			for (; p; p = rb_prev(p)) {
>> +				struct vfio_dma *dma = rb_entry(n,
>> +							struct vfio_dma, node);
>> +
>> +				kfree(dma->bitmap);
>> +				dma->bitmap = NULL;
>> +			}
>> +			return ret;
>> +		}
>> +
>> +		if (!dma->bitmap)
>> +			continue;
>> +
>> +		for (n = rb_first(&dma->pfn_list); n; n = rb_next(n)) {
>> +			struct vfio_pfn *vpfn = rb_entry(n, struct vfio_pfn,
>> +							 node);
>> +
>> +			bitmap_set(dma->bitmap,
>> +				   (vpfn->iova - dma->iova) / pgsize, 1);
>> +		}
>> +	}
>> +	return 0;
>> +}
>> +
>> +static void vfio_dma_all_bitmap_free(struct vfio_iommu *iommu)
>> +{
>> +	struct rb_node *n = rb_first(&iommu->dma_list);
>> +
>> +	for (; n; n = rb_next(n)) {
>> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
>> +
>> +		kfree(dma->bitmap);
>> +		dma->bitmap = NULL;
>> +	}
>> +}
>> +
>>   /*
>>    * Helper Functions for host iova-pfn list
>>    */
>> @@ -254,12 +327,16 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
>>   	return vpfn;
>>   }
>>   
>> -static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
>> +static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn,
>> +				  bool do_tracking, unsigned long pgsize)
>>   {
>>   	int ret = 0;
>>   
>>   	vpfn->ref_count--;
>>   	if (!vpfn->ref_count) {
>> +		if (do_tracking && dma->bitmap)
>> +			bitmap_set(dma->bitmap,
>> +				   (vpfn->iova - dma->iova) / pgsize, 1);
> 
> This seems wrong, or at best redundant.  The dirty bitmap should always
> reflect all outstanding pinned pages.  When ref_count drops to zero we
> can stop tracking it, but it should already be set in the bitmap and we
> shouldn't need to do anything here.
> 

Yes, this is redundant. Removing it.

>>   		ret = put_pfn(vpfn->pfn, dma->prot);
>>   		vfio_remove_from_pfn_list(dma, vpfn);
>>   	}
>> @@ -484,7 +561,8 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
>>   }
>>   
>>   static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
>> -				    bool do_accounting)
>> +				    bool do_accounting, bool do_tracking,
>> +				    unsigned long pgsize)
>>   {
>>   	int unlocked;
>>   	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
>> @@ -492,7 +570,7 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
>>   	if (!vpfn)
>>   		return 0;
>>   
>> -	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn);
>> +	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn, do_tracking, pgsize);
> 
> This is the only use of do_tracking, which as discussed above shouldn't
> need to be done in this function, thus we shouldn't need to pass pgsize
> either.
> 

Right, changing all calls to vfio_iova_put_vfio_pfn().

>>   
>>   	if (do_accounting)
>>   		vfio_lock_acct(dma, -unlocked, true);
>> @@ -563,9 +641,26 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>>   
>>   		ret = vfio_add_to_pfn_list(dma, iova, phys_pfn[i]);
>>   		if (ret) {
>> -			vfio_unpin_page_external(dma, iova, do_accounting);
>> +			vfio_unpin_page_external(dma, iova, do_accounting,
>> +						 false, 0);
> 
> I might use PAGE_SIZE here rather than 0 just for sanity, but as above
> we don't need these extra args.
> 
>>   			goto pin_unwind;
>>   		}
>> +
>> +		if (iommu->dirty_page_tracking) {
> 
> If we unconditionally allocated the bitmap, we could just test
> dma->bitmap here and get rid of the allocation branch below (we'll need
> to allocate the bitmap with the vfio_dma if new mappings are created
> while dirty tracking is enabled though).
> 
>> +			unsigned long pgshift =
>> +					 __ffs(vfio_pgsize_bitmap(iommu));
> 
> Maybe a follow-up patch could cache this, it can only potentially
> change as each new group is added.  We'd probably want to deny adding a
> group if the minimum pagesize changed while dirty tracking is enabled.
> 
>> +
>> +			if (!dma->bitmap) {
>> +				ret = vfio_dma_bitmap_alloc(dma, 1 << pgshift);
>> +				if (ret) {
>> +					vfio_unpin_page_external(dma, iova,
>> +						 do_accounting, false, 0);
>> +					goto pin_unwind;
>> +				}
>> +			}
>> +			bitmap_set(dma->bitmap,
>> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
>> +		}
>>   	}
>>   
>>   	ret = i;
>> @@ -578,7 +673,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>>   
>>   		iova = user_pfn[j] << PAGE_SHIFT;
>>   		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
>> -		vfio_unpin_page_external(dma, iova, do_accounting);
>> +		vfio_unpin_page_external(dma, iova, do_accounting, false, 0);
> 
> Unneeded.
> 
>>   		phys_pfn[j] = 0;
>>   	}
>>   pin_done:
>> @@ -612,7 +707,8 @@ static int vfio_iommu_type1_unpin_pages(void *iommu_data,
>>   		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
>>   		if (!dma)
>>   			goto unpin_exit;
>> -		vfio_unpin_page_external(dma, iova, do_accounting);
>> +		vfio_unpin_page_external(dma, iova, do_accounting,
>> +					 iommu->dirty_page_tracking, PAGE_SIZE);
> 
> Same.
> 
>>   	}
>>   
>>   unpin_exit:
>> @@ -800,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>>   	vfio_unmap_unpin(iommu, dma, true);
>>   	vfio_unlink_dma(iommu, dma);
>>   	put_task_struct(dma->task);
>> +	kfree(dma->bitmap);
>>   	kfree(dma);
>>   	iommu->dma_avail++;
>>   }
>> @@ -830,6 +927,54 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>>   	return bitmap;
>>   }
>>   
>> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
>> +				  size_t size, uint64_t pgsize,
>> +				  unsigned char __user *bitmap)
>> +{
>> +	struct vfio_dma *dma;
>> +	unsigned long pgshift = __ffs(pgsize);
>> +	unsigned int npages, bitmap_size;
>> +
>> +	dma = vfio_find_dma(iommu, iova, 1);
>> +
>> +	if (!dma)
>> +		return -EINVAL;
>> +
>> +	if (dma->iova != iova || dma->size != size)
>> +		return -EINVAL;
>> +
>> +	npages = dma->size >> pgshift;
>> +	bitmap_size = dirty_bitmap_bytes(npages);
>> +
>> +	/* mark all pages dirty if all pages are pinned and mapped. */
>> +	if (dma->iommu_mapped)
>> +		bitmap_set(dma->bitmap, 0, npages);
> 
> Oops, we only test if dma->bitmap exists below.  Always allocating the
> bitmap would resolve this too.
> 
>> +
>> +	if (dma->bitmap) {
>> +		if (copy_to_user((void __user *)bitmap, dma->bitmap,
>> +				 bitmap_size))
>> +			return -EFAULT;
>> +
>> +		memset(dma->bitmap, 0, bitmap_size);
> 
> Nope, we need to reprocess the bitmap to set all pinned pages as dirty
> again.  Those need to be reported _every_ time the user asks for the
> bitmap overlapping them.  We consider them to be continuously dirtied.
> 
>> +	}
>> +	return 0;
>> +}
>> +
>> +static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
>> +{
>> +	long bsize;
>> +
>> +	if (!bitmap_size || bitmap_size > SIZE_MAX)
>> +		return -EINVAL;
> 
> bitmap_size should be a size_t if we're going to compare it to
> SIZE_MAX.  bsize and the return of dirty_bitmap_bytes() should probably
> also be a size_t.
> 

Changing all parameters to uint64_t to be consistent with the uapi and
comparing against UINT_MAX.

>> +
>> +	bsize = dirty_bitmap_bytes(npages);
>> +
>> +	if (bitmap_size < bsize)
>> +		return -EINVAL;
>> +
>> +	return 0;
>> +}
>> +
>>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>   			     struct vfio_iommu_type1_dma_unmap *unmap)
>>   {
>> @@ -2277,6 +2422,92 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>>   
>>   		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>>   			-EFAULT : 0;
>> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
>> +		struct vfio_iommu_type1_dirty_bitmap dirty;
>> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
>> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
>> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
>> +		int ret;
>> +
>> +		if (!iommu->v2)
>> +			return -EACCES;
>> +
>> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
>> +				    flags);
>> +
>> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
>> +			return -EFAULT;
>> +
>> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
>> +			return -EINVAL;
>> +
>> +		/* only one flag should be set at a time */
>> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
>> +			return -EINVAL;
>> +
>> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
>> +			unsigned long iommu_pgsize =
>> +					1 << __ffs(vfio_pgsize_bitmap(iommu));
>> +
>> +			mutex_lock(&iommu->lock);
>> +			ret = vfio_dma_all_bitmap_alloc(iommu, iommu_pgsize);
>> +			if (!ret)
>> +				iommu->dirty_page_tracking = true;
>> +			mutex_unlock(&iommu->lock);
>> +			return ret;
> 
> If a user called this more than once we'd re-allocate all the bitmaps
> and leak the previous ones.  Let's avoid that by testing
> dirty_page_tracking before we actually take any action.  Do we want to
> return error or success though if dirty tracking is already enabled?
> I'm inclined to say success because we've provided no means for the
> user to check the status.

Success seems better.

Thanks,
Kirti

> 
>> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
>> +			mutex_lock(&iommu->lock);
>> +			if (iommu->dirty_page_tracking) {
>> +				iommu->dirty_page_tracking = false;
>> +				vfio_dma_all_bitmap_free(iommu);
>> +			}
>> +			mutex_unlock(&iommu->lock);
>> +			return 0;
> 
> This one essentially does what I'm suggesting above already.  Repeated
> calls take no additional action and don't return an error.
> 
>> +		} else if (dirty.flags &
>> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
>> +			struct vfio_iommu_type1_dirty_bitmap_get range;
>> +			unsigned long pgshift;
>> +			size_t data_size = dirty.argsz - minsz;
>> +			uint64_t iommu_pgsize =
>> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
>> +
>> +			if (!data_size || data_size < sizeof(range))
>> +				return -EINVAL;
>> +
>> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
>> +					   sizeof(range)))
>> +				return -EFAULT;
>> +
>> +			// allow only min supported pgsize
> 
> Proper comment style please.
> 
>> +			if (range.pgsize != iommu_pgsize)
>> +				return -EINVAL;
>> +			if (range.iova & (iommu_pgsize - 1))
>> +				return -EINVAL;
>> +			if (!range.size || range.size & (iommu_pgsize - 1))
>> +				return -EINVAL;
>> +			if (range.iova + range.size < range.iova)
>> +				return -EINVAL;
>> +			if (!access_ok((void __user *)range.bitmap,
>> +				       range.bitmap_size))
>> +				return -EINVAL;
>> +
>> +			pgshift = __ffs(range.pgsize);
>> +			ret = verify_bitmap_size(range.size >> pgshift,
>> +						 range.bitmap_size);
>> +			if (ret)
>> +				return ret;
>> +
>> +			mutex_lock(&iommu->lock);
>> +			if (iommu->dirty_page_tracking)
>> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
>> +					 range.size, range.pgsize,
>> +					 (unsigned char __user *)range.bitmap);
>> +			else
>> +				ret = -EINVAL;
>> +			mutex_unlock(&iommu->lock);
>> +
>> +			return ret;
>> +		}
>>   	}
>>   
>>   	return -ENOTTY;
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to for dirty pages tracking.
  2020-03-16 18:49     ` Kirti Wankhede
@ 2020-03-16 19:25       ` Alex Williamson
  0 siblings, 0 replies; 20+ messages in thread
From: Alex Williamson @ 2020-03-16 19:25 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Tue, 17 Mar 2020 00:19:05 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:
> On 3/13/2020 11:43 PM, Alex Williamson wrote:
> > On Thu, 12 Mar 2020 23:23:24 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> > 
> > Subject: s/to //
> >   
> >> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> >> - Start pinned and unpinned pages tracking while migration is active
> >> - Stop pinned and unpinned dirty pages tracking. This is also used to
> >>    stop dirty pages tracking if migration failed or cancelled.  
> > 
> > I'm not sure why we need to specify "pinned and unpinned" above, it
> > works for iommu mapped pages too, and we expect it to later work other
> > dirtying mechanisms, like Yan's dma_rw interface.
> >   
> 
> iommu mapped pages are also pinned.
> When there are CPU writes, like Yan's dma_rw, then KVM core takes care 
> of marking those pages dirty, then we don't need to track those pages 
> again here, right?

No, KVM will mark vCPU writes as dirty; Yan's DMA r/w is specifically
providing an interface for the *host* CPU to dirty pages as if it were
device DMA.  We need to fill that gap.

Yes, IOMMU mapped pages are pinned, but as soon as we start talking
about pinned and unpinned, I drift towards explicit page pinning
through mdev.

> >> - Get dirty pages bitmap. This ioctl returns bitmap of dirty pages, its
> >>    user space application responsibility to copy content of dirty pages
> >>    from source to destination during migration.
> >>
> >> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> >> structure. Bitmap size is calculated considering smallest supported page
> >> size. Bitmap is allocated when dirty logging is enabled for those
> >> vfio_dmas whose vpfn list is not empty or whole range is mapped, in
> >> case of pass-through device.
> >>
> >> Bitmap is populated for already pinned pages when bitmap is allocated for
> >> a vfio_dma with the smallest supported page size. Update bitmap from
> >> pinning and unpinning functions. When user application queries bitmap,
> >> check if requested page size is same as page size used to populated
> >> bitmap. If it is equal, copy bitmap, but if not equal, return error.
> >>
> >> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> >> Reviewed-by: Neo Jia <cjia@nvidia.com>
> >> ---
> >>   drivers/vfio/vfio_iommu_type1.c | 243 +++++++++++++++++++++++++++++++++++++++-
> >>   1 file changed, 237 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index d386461e5d11..435e84269a28 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -70,6 +70,7 @@ struct vfio_iommu {
> >>   	unsigned int		dma_avail;
> >>   	bool			v2;
> >>   	bool			nesting;
> >> +	bool			dirty_page_tracking;
> >>   };
> >>   
> >>   struct vfio_domain {
> >> @@ -90,6 +91,7 @@ struct vfio_dma {
> >>   	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
> >>   	struct task_struct	*task;
> >>   	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> >> +	unsigned long		*bitmap;
> >>   };
> >>   
> >>   struct vfio_group {
> >> @@ -125,6 +127,7 @@ struct vfio_regions {
> >>   					(!list_empty(&iommu->domain_list))
> >>   
> >>   static int put_pfn(unsigned long pfn, int prot);
> >> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> >>   
> >>   /*
> >>    * This code handles mapping and unmapping of user data buffers
> >> @@ -174,6 +177,76 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> >>   	rb_erase(&old->node, &iommu->dma_list);
> >>   }
> >>   
> >> +static inline unsigned long dirty_bitmap_bytes(unsigned int npages)
> >> +{
> >> +	if (!npages)
> >> +		return 0;
> >> +
> >> +	return ALIGN(npages, BITS_PER_LONG) / sizeof(unsigned long);  
> > 
> > Yes, but no.  The math works out here on LP64 systems because
> > sizeof(unsigned long) == BITS_PER_BYTE, but sizeof(unsigned long) is
> > irrelevant, we wants BITS_PER_BYTE.  Also, the UAPI defines the bitmap
> > as an array of __u64, so rather than BITS_PER_LONG I think we want
> > BITS_PER_TYPE(u64).
> >   
> 
> Changing it to a macro:
> 
> #define DIRTY_BITMAP_BYTES(n)   (ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
> 
> 
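
FWIW, as a helper that math could read something like this (just a
sketch, assuming the usual kernel ALIGN()/BITS_PER_TYPE()/BITS_PER_BYTE
definitions):

static inline size_t dirty_bitmap_bytes(unsigned int npages)
{
	/* one bit per page, rounded up to a whole number of u64 words */
	return ALIGN(npages, BITS_PER_TYPE(u64)) / BITS_PER_BYTE;
}
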
> >> +}
> >> +
> >> +static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, unsigned long pgsize)
> >> +{
> >> +	if (!RB_EMPTY_ROOT(&dma->pfn_list) || dma->iommu_mapped) {  
> > 
> > Yan's patch series allows vendor drivers to dirty pages regardless of
> > either of these.  I think we should do away with this and
> > unconditionally allocate a bitmap per vfio_dma.
> >  
> 
> Same as above - CPU writes are tracked by KVM core.

As above, nope.
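
Roughly what I have in mind, reusing the helpers from this patch (a
sketch only, not tested):

static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, unsigned long pgsize)
{
	/*
	 * Allocate unconditionally so that every vfio_dma has a bitmap
	 * while dirty tracking is enabled, regardless of how it might
	 * get dirtied.
	 */
	dma->bitmap = kvzalloc(dirty_bitmap_bytes(dma->size / pgsize),
			       GFP_KERNEL);
	return dma->bitmap ? 0 : -ENOMEM;
}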

> >> +		unsigned long npages = dma->size / pgsize;
> >> +
> >> +		dma->bitmap = kvzalloc(dirty_bitmap_bytes(npages), GFP_KERNEL);
> >> +		if (!dma->bitmap)
> >> +			return -ENOMEM;
> >> +	}
> >> +	return 0;
> >> +}
> >> +
> >> +static int vfio_dma_all_bitmap_alloc(struct vfio_iommu *iommu,
> >> +				     unsigned long pgsize)
> >> +{
> >> +	struct rb_node *n = rb_first(&iommu->dma_list);
> >> +	int ret;
> >> +
> >> +	for (; n; n = rb_next(n)) {
> >> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >> +		struct rb_node *n;
> >> +
> >> +		ret = vfio_dma_bitmap_alloc(dma, pgsize);
> >> +		if (ret) {
> >> +			struct rb_node *p = rb_prev(n);
> >> +
> >> +			for (; p; p = rb_prev(p)) {
> >> +				struct vfio_dma *dma = rb_entry(n,
> >> +							struct vfio_dma, node);
> >> +
> >> +				kfree(dma->bitmap);
> >> +				dma->bitmap = NULL;
> >> +			}
> >> +			return ret;
> >> +		}
> >> +
> >> +		if (!dma->bitmap)
> >> +			continue;
> >> +
> >> +		for (n = rb_first(&dma->pfn_list); n; n = rb_next(n)) {
> >> +			struct vfio_pfn *vpfn = rb_entry(n, struct vfio_pfn,
> >> +							 node);
> >> +
> >> +			bitmap_set(dma->bitmap,
> >> +				   (vpfn->iova - dma->iova) / pgsize, 1);
> >> +		}
> >> +	}
> >> +	return 0;
> >> +}
> >> +
> >> +static void vfio_dma_all_bitmap_free(struct vfio_iommu *iommu)
> >> +{
> >> +	struct rb_node *n = rb_first(&iommu->dma_list);
> >> +
> >> +	for (; n; n = rb_next(n)) {
> >> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> >> +
> >> +		kfree(dma->bitmap);
> >> +		dma->bitmap = NULL;
> >> +	}
> >> +}
> >> +
> >>   /*
> >>    * Helper Functions for host iova-pfn list
> >>    */
> >> @@ -254,12 +327,16 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
> >>   	return vpfn;
> >>   }
> >>   
> >> -static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
> >> +static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn,
> >> +				  bool do_tracking, unsigned long pgsize)
> >>   {
> >>   	int ret = 0;
> >>   
> >>   	vpfn->ref_count--;
> >>   	if (!vpfn->ref_count) {
> >> +		if (do_tracking && dma->bitmap)
> >> +			bitmap_set(dma->bitmap,
> >> +				   (vpfn->iova - dma->iova) / pgsize, 1);  
> > 
> > This seems wrong, or at best redundant.  The dirty bitmap should always
> > reflect all outstanding pinned pages.  When ref_count drops to zero we
> > can stop tracking it, but it should already be set in the bitmap and we
> > shouldn't need to do anything here.
> >   
> 
> Yes, this is redundant. Removing it.
> 
> >>   		ret = put_pfn(vpfn->pfn, dma->prot);
> >>   		vfio_remove_from_pfn_list(dma, vpfn);
> >>   	}
> >> @@ -484,7 +561,8 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
> >>   }
> >>   
> >>   static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> >> -				    bool do_accounting)
> >> +				    bool do_accounting, bool do_tracking,
> >> +				    unsigned long pgsize)
> >>   {
> >>   	int unlocked;
> >>   	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
> >> @@ -492,7 +570,7 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> >>   	if (!vpfn)
> >>   		return 0;
> >>   
> >> -	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn);
> >> +	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn, do_tracking, pgsize);  
> > 
> > This is the only use of do_tracking, which as discussed above shouldn't
> > need to be done in this function, thus we shouldn't need to pass pgsize
> > either.
> >   
> 
> Right, changing all calls to vfio_iova_put_vfio_pfn().
> 
> >>   
> >>   	if (do_accounting)
> >>   		vfio_lock_acct(dma, -unlocked, true);
> >> @@ -563,9 +641,26 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >>   
> >>   		ret = vfio_add_to_pfn_list(dma, iova, phys_pfn[i]);
> >>   		if (ret) {
> >> -			vfio_unpin_page_external(dma, iova, do_accounting);
> >> +			vfio_unpin_page_external(dma, iova, do_accounting,
> >> +						 false, 0);  
> > 
> > I might use PAGE_SIZE here rather than 0 just for sanity, but as above
> > we don't need these extra args.
> >   
> >>   			goto pin_unwind;
> >>   		}
> >> +
> >> +		if (iommu->dirty_page_tracking) {  
> > 
> > If we unconditionally allocated the bitmap, we could just test
> > dma->bitmap here and get rid of the allocation branch below (we'll need
> > to allocate the bitmap with the vfio_dma if new mappings are created
> > while dirty tracking is enabled though).
> >   
> >> +			unsigned long pgshift =
> >> +					 __ffs(vfio_pgsize_bitmap(iommu));  
> > 
> > Maybe a follow-up patch could cache this, it can only potentially
> > change as each new group is added.  We'd probably want to deny adding a
> > group if the minimum pagesize changed while dirty tracking is enabled.
> >   
> >> +
> >> +			if (!dma->bitmap) {
> >> +				ret = vfio_dma_bitmap_alloc(dma, 1 << pgshift);
> >> +				if (ret) {
> >> +					vfio_unpin_page_external(dma, iova,
> >> +						 do_accounting, false, 0);
> >> +					goto pin_unwind;
> >> +				}
> >> +			}
> >> +			bitmap_set(dma->bitmap,
> >> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> >> +		}
> >>   	}
> >>   
> >>   	ret = i;
> >> @@ -578,7 +673,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >>   
> >>   		iova = user_pfn[j] << PAGE_SHIFT;
> >>   		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
> >> -		vfio_unpin_page_external(dma, iova, do_accounting);
> >> +		vfio_unpin_page_external(dma, iova, do_accounting, false, 0);  
> > 
> > Unneeded.
> >   
> >>   		phys_pfn[j] = 0;
> >>   	}
> >>   pin_done:
> >> @@ -612,7 +707,8 @@ static int vfio_iommu_type1_unpin_pages(void *iommu_data,
> >>   		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
> >>   		if (!dma)
> >>   			goto unpin_exit;
> >> -		vfio_unpin_page_external(dma, iova, do_accounting);
> >> +		vfio_unpin_page_external(dma, iova, do_accounting,
> >> +					 iommu->dirty_page_tracking, PAGE_SIZE);  
> > 
> > Same.
> >   
> >>   	}
> >>   
> >>   unpin_exit:
> >> @@ -800,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> >>   	vfio_unmap_unpin(iommu, dma, true);
> >>   	vfio_unlink_dma(iommu, dma);
> >>   	put_task_struct(dma->task);
> >> +	kfree(dma->bitmap);
> >>   	kfree(dma);
> >>   	iommu->dma_avail++;
> >>   }
> >> @@ -830,6 +927,54 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> >>   	return bitmap;
> >>   }
> >>   
> >> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> >> +				  size_t size, uint64_t pgsize,
> >> +				  unsigned char __user *bitmap)
> >> +{
> >> +	struct vfio_dma *dma;
> >> +	unsigned long pgshift = __ffs(pgsize);
> >> +	unsigned int npages, bitmap_size;
> >> +
> >> +	dma = vfio_find_dma(iommu, iova, 1);
> >> +
> >> +	if (!dma)
> >> +		return -EINVAL;
> >> +
> >> +	if (dma->iova != iova || dma->size != size)
> >> +		return -EINVAL;
> >> +
> >> +	npages = dma->size >> pgshift;
> >> +	bitmap_size = dirty_bitmap_bytes(npages);
> >> +
> >> +	/* mark all pages dirty if all pages are pinned and mapped. */
> >> +	if (dma->iommu_mapped)
> >> +		bitmap_set(dma->bitmap, 0, npages);  
> > 
> > Oops, we only test if dma->bitmap exists below.  Always allocating the
> > bitmap would resolve this too.
> >   
> >> +
> >> +	if (dma->bitmap) {
> >> +		if (copy_to_user((void __user *)bitmap, dma->bitmap,
> >> +				 bitmap_size))
> >> +			return -EFAULT;
> >> +
> >> +		memset(dma->bitmap, 0, bitmap_size);  
> > 
> > Nope, we need to reprocess the bitmap to set all pinned pages as dirty
> > again.  Those need to be reported _every_ time the user asks for the
> > bitmap overlapping them.  We consider them to be continuously dirtied.
> >   
> >> +	}
> >> +	return 0;
> >> +}
> >> +
> >> +static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
> >> +{
> >> +	long bsize;
> >> +
> >> +	if (!bitmap_size || bitmap_size > SIZE_MAX)
> >> +		return -EINVAL;  
> > 
> > bitmap_size should be a size_t if we're going to compare it to
> > SIZE_MAX.  bsize and the return of dirty_bitmap_bytes() should probably
> > also be a size_t.
> >   
> 
> changing all parameters to uint64_t to be consistent with uapi and 
> compare with UINT_MAX.

UINT_MAX seems arbitrary, is this specified in our API?  The size of a
vfio_dma is limited to what the user is able to pin, and therefore
their locked memory limit, but do we have an explicit limit elsewhere
that results in this limit here?  I think a 4GB bitmap would track
something like 2^47 bytes of memory, which is pretty excessive, but
still an arbitrary limit.
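
For reference, the arithmetic behind that estimate as a throwaway
userspace sketch (assumes one bit per 4KiB page, the usual minimum
IOMMU page size):

#include <stdio.h>

int main(void)
{
	unsigned long long bitmap_bytes = 1ULL << 32;	/* 4GB bitmap */
	unsigned long long pages = bitmap_bytes * 8;	/* one bit per page */
	unsigned long long bytes = pages << 12;		/* 4KiB per page */

	printf("pages tracked: %llu (2^35)\n", pages);
	printf("bytes tracked: %llu (2^47)\n", bytes);
	return 0;
}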

BTW, we probably do need to be careful about allocating dirty bitmaps
for non-page backed mappings, those aren't limited by the user's locked
memory.  Being sure that we can allocate the bitmap is another reason
to unconditionally create the bitmap when logging is enabled.  Thanks,

Alex

> >> +
> >> +	bsize = dirty_bitmap_bytes(npages);
> >> +
> >> +	if (bitmap_size < bsize)
> >> +		return -EINVAL;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >>   			     struct vfio_iommu_type1_dma_unmap *unmap)
> >>   {
> >> @@ -2277,6 +2422,92 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> >>   
> >>   		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> >>   			-EFAULT : 0;
> >> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> >> +		struct vfio_iommu_type1_dirty_bitmap dirty;
> >> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> >> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> >> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> >> +		int ret;
> >> +
> >> +		if (!iommu->v2)
> >> +			return -EACCES;
> >> +
> >> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> >> +				    flags);
> >> +
> >> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> >> +			return -EFAULT;
> >> +
> >> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> >> +			return -EINVAL;
> >> +
> >> +		/* only one flag should be set at a time */
> >> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> >> +			return -EINVAL;
> >> +
> >> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> >> +			unsigned long iommu_pgsize =
> >> +					1 << __ffs(vfio_pgsize_bitmap(iommu));
> >> +
> >> +			mutex_lock(&iommu->lock);
> >> +			ret = vfio_dma_all_bitmap_alloc(iommu, iommu_pgsize);
> >> +			if (!ret)
> >> +				iommu->dirty_page_tracking = true;
> >> +			mutex_unlock(&iommu->lock);
> >> +			return ret;  
> > 
> > If a user called this more than once we'd re-allocate all the bitmaps
> > and leak the previous ones.  Let's avoid that by testing
> > dirty_page_tracking before we actually take any action.  Do we want to
> > return error or success though if dirty tracking is already enabled?
> > I'm inclined to say success because we've provided no means for the
> > user to check the status.  
> 
> Success seems better.
> 
> Thanks,
> Kirti
> 
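For the START case, something along these lines then (sketch only,
slotting into the VFIO_IOMMU_DIRTY_PAGES block of this patch):

		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
			unsigned long pgsize =
				1 << __ffs(vfio_pgsize_bitmap(iommu));

			mutex_lock(&iommu->lock);
			if (!iommu->dirty_page_tracking) {
				/*
				 * Allocate bitmaps only on the first START;
				 * repeated calls succeed without reallocating
				 * (and leaking) the per-vfio_dma bitmaps.
				 */
				ret = vfio_dma_all_bitmap_alloc(iommu, pgsize);
				if (!ret)
					iommu->dirty_page_tracking = true;
			} else {
				ret = 0;
			}
			mutex_unlock(&iommu->lock);
			return ret;
		}
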
> >   
> >> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> >> +			mutex_lock(&iommu->lock);
> >> +			if (iommu->dirty_page_tracking) {
> >> +				iommu->dirty_page_tracking = false;
> >> +				vfio_dma_all_bitmap_free(iommu);
> >> +			}
> >> +			mutex_unlock(&iommu->lock);
> >> +			return 0;  
> > 
> > This one essentially does what I'm suggesting above already.  Repeated
> > calls take no additional action and don't return an error.
> >   
> >> +		} else if (dirty.flags &
> >> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> >> +			struct vfio_iommu_type1_dirty_bitmap_get range;
> >> +			unsigned long pgshift;
> >> +			size_t data_size = dirty.argsz - minsz;
> >> +			uint64_t iommu_pgsize =
> >> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> >> +
> >> +			if (!data_size || data_size < sizeof(range))
> >> +				return -EINVAL;
> >> +
> >> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> >> +					   sizeof(range)))
> >> +				return -EFAULT;
> >> +
> >> +			// allow only min supported pgsize  
> > 
> > Proper comment style please.
> >   
> >> +			if (range.pgsize != iommu_pgsize)
> >> +				return -EINVAL;
> >> +			if (range.iova & (iommu_pgsize - 1))
> >> +				return -EINVAL;
> >> +			if (!range.size || range.size & (iommu_pgsize - 1))
> >> +				return -EINVAL;
> >> +			if (range.iova + range.size < range.iova)
> >> +				return -EINVAL;
> >> +			if (!access_ok((void __user *)range.bitmap,
> >> +				       range.bitmap_size))
> >> +				return -EINVAL;
> >> +
> >> +			pgshift = __ffs(range.pgsize);
> >> +			ret = verify_bitmap_size(range.size >> pgshift,
> >> +						 range.bitmap_size);
> >> +			if (ret)
> >> +				return ret;
> >> +
> >> +			mutex_lock(&iommu->lock);
> >> +			if (iommu->dirty_page_tracking)
> >> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> >> +					 range.size, range.pgsize,
> >> +					 (unsigned char __user *)range.bitmap);
> >> +			else
> >> +				ret = -EINVAL;
> >> +			mutex_unlock(&iommu->lock);
> >> +
> >> +			return ret;
> >> +		}
> >>   	}
> >>   
> >>   	return -ENOTTY;  
> >   
> 



* Re: [PATCH v13 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap
  2020-03-13 18:45   ` Alex Williamson
@ 2020-03-17 18:28     ` Kirti Wankhede
  0 siblings, 0 replies; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-17 18:28 UTC (permalink / raw)
  To: Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm



On 3/14/2020 12:15 AM, Alex Williamson wrote:
> On Thu, 12 Mar 2020 23:23:25 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> Pages, pinned by external interface for requested IO virtual address
>> range,  might get unpinned  and unmapped while migration is active and
> 
> "DMA mapped pages, including those pinned by mdev vendor drivers, might
> get..."
> 
>> device is still running, that is, in pre-copy phase while guest driver
> 
> "...running.  For example, in pre-copy..."
> 
>> still could access those pages. Host device can write to these pages while
>> those were mapped.
> 
> "...those pages, host device or vendor driver can dirty these mapped
> pages."
> 
>> Such pages should be marked dirty so that after
>> migration guest driver should still be able to complete the operation.
> 
> Complete what operation?  

For whatever operation the guest driver was using that memory for.

> We need to report these dirty pages in order
> to maintain memory consistency for a user making use of dirty page
> tracking.
> 
>> To get bitmap during unmap, user should set flag
>> VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, bitmap memory should be allocated and
>> zeroed by user space application. Bitmap size and page size should be set
>> by user application.
> 
> It seems like zeroed pages are no longer strictly necessary now that we
> require requests to match existing mappings, right?
> 

Right.

>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 63 +++++++++++++++++++++++++++++++++++++----
>>   include/uapi/linux/vfio.h       | 12 ++++++++
>>   2 files changed, 70 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 435e84269a28..4037b82c6db0 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -976,7 +976,8 @@ static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
>>   }
>>   
>>   static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>> -			     struct vfio_iommu_type1_dma_unmap *unmap)
>> +			     struct vfio_iommu_type1_dma_unmap *unmap,
>> +			     unsigned long *bitmap)
>>   {
>>   	uint64_t mask;
>>   	struct vfio_dma *dma, *dma_last = NULL;
>> @@ -1027,6 +1028,10 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>   	 * will be returned if these conditions are not met.  The v2 interface
>>   	 * will only return success and a size of zero if there were no
>>   	 * mappings within the range.
>> +	 *
>> +	 * When VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag is set, unmap request
>> +	 * must be for single mapping. Multiple mappings with this flag set is
>> +	 * not supported.
>>   	 */
>>   	if (iommu->v2) {
>>   		dma = vfio_find_dma(iommu, unmap->iova, 1);
>> @@ -1034,6 +1039,13 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>   			ret = -EINVAL;
>>   			goto unlock;
>>   		}
>> +
>> +		if ((unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) &&
>> +		    (dma->iova != unmap->iova || dma->size != unmap->size)) {
>> +			ret = -EINVAL;
>> +			goto unlock;
>> +		}
>> +
>>   		dma = vfio_find_dma(iommu, unmap->iova + unmap->size - 1, 0);
>>   		if (dma && dma->iova + dma->size != unmap->iova + unmap->size) {
>>   			ret = -EINVAL;
>> @@ -1051,6 +1063,11 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>   		if (dma->task->mm != current->mm)
>>   			break;
>>   
>> +		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
>> +			vfio_iova_dirty_bitmap(iommu, dma->iova, dma->size,
>> +					       unmap->bitmap_pgsize,
>> +					      (unsigned char __user *) bitmap);
>> +
>>   		if (!RB_EMPTY_ROOT(&dma->pfn_list)) {
>>   			struct vfio_iommu_type1_dma_unmap nb_unmap;
>>   
>> @@ -1076,6 +1093,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>>   						    &nb_unmap);
>>   			goto again;
>>   		}
>> +
>>   		unmapped += dma->size;
>>   		vfio_remove_dma(iommu, dma);
>>   	}
> 
> Spurious white space.
> 
>> @@ -2406,22 +2424,57 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>>   
>>   	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
>>   		struct vfio_iommu_type1_dma_unmap unmap;
>> -		long ret;
>> +		unsigned long *bitmap = NULL;
> 
> Shouldn't this have a __user attribute?  Also long doesn't seem the
> right type. void would be ok here.
> 

Removed this with the use of vfio_bitmap structure.

>> +		long ret, bsize;
>>   
>>   		minsz = offsetofend(struct vfio_iommu_type1_dma_unmap, size);
>>   
>>   		if (copy_from_user(&unmap, (void __user *)arg, minsz))
>>   			return -EFAULT;
>>   
>> -		if (unmap.argsz < minsz || unmap.flags)
>> +		if (unmap.argsz < minsz ||
>> +		    unmap.flags & ~VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP)
>>   			return -EINVAL;
>>   
>> -		ret = vfio_dma_do_unmap(iommu, &unmap);
>> +		if (unmap.flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
>> +			unsigned long pgshift;
>> +			uint64_t iommu_pgsizes = vfio_pgsize_bitmap(iommu);
>> +			uint64_t iommu_pgmask =
>> +				 ((uint64_t)1 << __ffs(iommu_pgsizes)) - 1;
>> +
> 
> Need to test that unmap.argsz includes this.
> 

Added.
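
Roughly, just before the second copy_from_user() below (a sketch against
the struct layout in this version of the patch):

			/* argsz must cover the new bitmap fields as well */
			if (unmap.argsz <
			    offsetofend(struct vfio_iommu_type1_dma_unmap,
					bitmap))
				return -EINVAL;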

>> +			if (copy_from_user(&unmap, (void __user *)arg,
>> +					   sizeof(unmap)))
>> +				return -EFAULT;
>> +
>> +			pgshift = __ffs(unmap.bitmap_pgsize);
>> +
>> +			if (((unmap.bitmap_pgsize - 1) & iommu_pgmask) !=
>> +			     (unmap.bitmap_pgsize - 1))
>> +				return -EINVAL;
>> +
>> +			if ((unmap.bitmap_pgsize & iommu_pgsizes) !=
>> +			     unmap.bitmap_pgsize)
>> +				return -EINVAL;
>> +			if (unmap.iova + unmap.size < unmap.iova)
>> +				return -EINVAL;
>> +			if (!access_ok((void __user *)unmap.bitmap,
>> +				       unmap.bitmap_size))
>> +				return -EINVAL;
> 
> These tests should be identical to the previous patch.
>

Updating tests here. Removing redundant tests which are already in 
vfio_dma_do_unmap()

>> +
>> +			bsize = verify_bitmap_size(unmap.size >> pgshift,
>> +						   unmap.bitmap_size);
>> +			if (bsize < 0)
>> +				return bsize;
>> +			bitmap = unmap.bitmap;
>> +		}
>> +
>> +		ret = vfio_dma_do_unmap(iommu, &unmap, bitmap);
>>   		if (ret)
>>   			return ret;
>>   
>> -		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>> +		ret = copy_to_user((void __user *)arg, &unmap, minsz) ?
>>   			-EFAULT : 0;
>> +		return ret;
> 
> Why?  Leftover debugging?
> 

Yeah, keeping the original one.


>>   	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
>>   		struct vfio_iommu_type1_dirty_bitmap dirty;
>>   		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 02d555cc7036..12b2094f887e 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -1004,12 +1004,24 @@ struct vfio_iommu_type1_dma_map {
>>    * field.  No guarantee is made to the user that arbitrary unmaps of iova
>>    * or size different from those used in the original mapping call will
>>    * succeed.
>> + * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get dirty bitmap
>> + * before unmapping IO virtual addresses. When this flag is set, user should
>> + * allocate memory to get bitmap, clear the bitmap memory by setting zero and
>> + * should set size of allocated memory in bitmap_size field. One bit in bitmap
>> + * represents per page , page of user provided page size in 'bitmap_pgsize',
>> + * consecutively starting from iova offset. Bit set indicates page at that
>> + * offset from iova is dirty. Bitmap of pages in the range of unmapped size is
>> + * returned in bitmap.
>>    */
>>   struct vfio_iommu_type1_dma_unmap {
>>   	__u32	argsz;
>>   	__u32	flags;
>> +#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
>>   	__u64	iova;				/* IO virtual address */
>>   	__u64	size;				/* Size of mapping (bytes) */
>> +	__u64        bitmap_pgsize;		/* page size for bitmap */
>> +	__u64        bitmap_size;               /* in bytes */
>> +	void __user *bitmap;                    /* one bit per page */
> 
> This suggests to me that we should further split struct
> vfio_iommu_type1_dirty_bitmap_get so that we can use the same
> sub-structure here.  For example:
> 
> struct vfio_bitmap {
> 	__u64 pgsize;
> 	__u64 size;
> 	__u64 __user *data;
> };
> 
> Note we still have a void* rather than __u64* in original above.
> 
> Also, why wouldn't we take the same data[] approach as the previous
> patch, defining this as the data when the GET_DIRTY_BITMAP flag is set?
> 
> Previous patch would be updated to something like:
> 
> struct vfio_iommu_type1_dirty_bitmap_get {
> 	__u64 iova;
> 	__u64 size;
> 	struct vfio_bitmap bitmap;
> };
> 
>>   };
>>   
>>   #define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
> 

Ok. Updating.
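
Something along these lines, then (a sketch of the reworked uapi; exact
layout and comments may change in the next version):

struct vfio_bitmap {
	__u64        pgsize;	/* page size for bitmap */
	__u64        size;	/* in bytes */
	__u64 __user *data;	/* one bit per page */
};

struct vfio_iommu_type1_dirty_bitmap_get {
	__u64              iova;	/* IO virtual address */
	__u64              size;	/* size of iova range */
	struct vfio_bitmap bitmap;
};

struct vfio_iommu_type1_dma_unmap {
	__u32	argsz;
	__u32	flags;
#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
	__u64	iova;	/* IO virtual address */
	__u64	size;	/* Size of mapping (bytes) */
	__u8	data[];	/* struct vfio_bitmap when GET_DIRTY_BITMAP is set */
};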

Thanks,
Kirti


* Re: [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-13 20:49   ` Alex Williamson
@ 2020-03-17 18:28     ` Kirti Wankhede
  2020-03-17 19:00       ` Alex Williamson
  0 siblings, 1 reply; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-17 18:28 UTC (permalink / raw)
  To: Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm



On 3/14/2020 2:19 AM, Alex Williamson wrote:
> On Thu, 12 Mar 2020 23:23:27 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> Added a check such that only singleton IOMMU groups can pin pages.
>>  From the point when vendor driver pins any pages, consider IOMMU group
>> dirty page scope to be limited to pinned pages.
>>
>> To optimize to avoid walking list often, added flag
>> pinned_page_dirty_scope to indicate if all of the vfio_groups for each
>> vfio_domain in the domain_list dirty page scope is limited to pinned
>> pages. This flag is updated on first pinned pages request for that IOMMU
>> group and on attaching/detaching group.
>>
>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>> ---
>>   drivers/vfio/vfio.c             |  9 +++++-
>>   drivers/vfio/vfio_iommu_type1.c | 72 +++++++++++++++++++++++++++++++++++++++--
>>   include/linux/vfio.h            |  4 ++-
>>   3 files changed, 80 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
>> index c8482624ca34..79108c1245a5 100644
>> --- a/drivers/vfio/vfio.c
>> +++ b/drivers/vfio/vfio.c
>> @@ -85,6 +85,7 @@ struct vfio_group {
>>   	atomic_t			opened;
>>   	wait_queue_head_t		container_q;
>>   	bool				noiommu;
>> +	unsigned int			dev_counter;
>>   	struct kvm			*kvm;
>>   	struct blocking_notifier_head	notifier;
>>   };
>> @@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
>>   
>>   	mutex_lock(&group->device_lock);
>>   	list_add(&device->group_next, &group->device_list);
>> +	group->dev_counter++;
>>   	mutex_unlock(&group->device_lock);
>>   
>>   	return device;
>> @@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
>>   	struct vfio_group *group = device->group;
>>   
>>   	list_del(&device->group_next);
>> +	group->dev_counter--;
>>   	mutex_unlock(&group->device_lock);
>>   
>>   	dev_set_drvdata(device->dev, NULL);
>> @@ -1895,6 +1898,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>>   	if (!group)
>>   		return -ENODEV;
>>   
>> +	if (group->dev_counter > 1)
>> +		return -EINVAL;
>> +
>>   	ret = vfio_group_add_container_user(group);
>>   	if (ret)
>>   		goto err_pin_pages;
>> @@ -1902,7 +1908,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>>   	container = group->container;
>>   	driver = container->iommu_driver;
>>   	if (likely(driver && driver->ops->pin_pages))
>> -		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
>> +		ret = driver->ops->pin_pages(container->iommu_data,
>> +					     group->iommu_group, user_pfn,
>>   					     npage, prot, phys_pfn);
>>   	else
>>   		ret = -ENOTTY;
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 4f1f116feabc..18a284b230c0 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -71,6 +71,7 @@ struct vfio_iommu {
>>   	bool			v2;
>>   	bool			nesting;
>>   	bool			dirty_page_tracking;
>> +	bool			pinned_page_dirty_scope;
>>   };
>>   
>>   struct vfio_domain {
>> @@ -98,6 +99,7 @@ struct vfio_group {
>>   	struct iommu_group	*iommu_group;
>>   	struct list_head	next;
>>   	bool			mdev_group;	/* An mdev group */
>> +	bool			has_pinned_pages;
> 
> I'm afraid over time this name will be confusing, should we simply
> call it pinned_page_dirty_scope per vfio_group as well? 

Updating as you suggested, but I hope it doesn't look confusing.

>  We might have
> to adapt this over time as we get new ways to dirty pages, but each
> group voting towards the same value being set on the vfio_iommu object
> seems like a good starting point.
> 
>>   };
>>   
>>   struct vfio_iova {
>> @@ -129,6 +131,10 @@ struct vfio_regions {
>>   static int put_pfn(unsigned long pfn, int prot);
>>   static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>>   
>> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
>> +					       struct iommu_group *iommu_group);
>> +
>> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
>>   /*
>>    * This code handles mapping and unmapping of user data buffers
>>    * into DMA'ble space using the IOMMU
>> @@ -579,11 +585,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
>>   }
>>   
>>   static int vfio_iommu_type1_pin_pages(void *iommu_data,
>> +				      struct iommu_group *iommu_group,
>>   				      unsigned long *user_pfn,
>>   				      int npage, int prot,
>>   				      unsigned long *phys_pfn)
>>   {
>>   	struct vfio_iommu *iommu = iommu_data;
>> +	struct vfio_group *group;
>>   	int i, j, ret;
>>   	unsigned long remote_vaddr;
>>   	struct vfio_dma *dma;
>> @@ -662,8 +670,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>>   				   (vpfn->iova - dma->iova) >> pgshift, 1);
>>   		}
>>   	}
>> -
>>   	ret = i;
>> +
>> +	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
>> +	if (!group->has_pinned_pages) {
>> +		group->has_pinned_pages = true;
>> +		update_pinned_page_dirty_scope(iommu);
>> +	}
>> +
>>   	goto pin_done;
>>   
>>   pin_unwind:
>> @@ -946,8 +960,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
>>   	npages = dma->size >> pgshift;
>>   	bitmap_size = dirty_bitmap_bytes(npages);
>>   
>> -	/* mark all pages dirty if all pages are pinned and mapped. */
>> -	if (dma->iommu_mapped)
>> +	/*
>> +	 * mark all pages dirty if any IOMMU capable device is not able
>> +	 * to report dirty pages and all pages are pinned and mapped.
>> +	 */
>> +	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
>>   		bitmap_set(dma->bitmap, 0, npages);
>>   
>>   	if (dma->bitmap) {
>> @@ -1430,6 +1447,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
>>   	return NULL;
>>   }
>>   
>> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
>> +					       struct iommu_group *iommu_group)
>> +{
>> +	struct vfio_domain *domain;
>> +	struct vfio_group *group = NULL;
>> +
>> +	list_for_each_entry(domain, &iommu->domain_list, next) {
>> +		group = find_iommu_group(domain, iommu_group);
>> +		if (group)
>> +			return group;
>> +	}
>> +
>> +	if (iommu->external_domain)
>> +		group = find_iommu_group(iommu->external_domain, iommu_group);
>> +
>> +	return group;
>> +}
>> +
>> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
>> +{
>> +	struct vfio_domain *domain;
>> +	struct vfio_group *group;
>> +
>> +	list_for_each_entry(domain, &iommu->domain_list, next) {
>> +		list_for_each_entry(group, &domain->group_list, next) {
>> +			if (!group->has_pinned_pages) {
>> +				iommu->pinned_page_dirty_scope = false;
>> +				return;
>> +			}
>> +		}
>> +	}
>> +
>> +	if (iommu->external_domain) {
>> +		domain = iommu->external_domain;
>> +		list_for_each_entry(group, &domain->group_list, next) {
>> +			if (!group->has_pinned_pages) {
>> +				iommu->pinned_page_dirty_scope = false;
>> +				return;
>> +			}
>> +		}
>> +	}
>> +
>> +	iommu->pinned_page_dirty_scope = true;
>> +}
>> +
>>   static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
>>   				  phys_addr_t *base)
>>   {
>> @@ -1836,6 +1898,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>>   
>>   			list_add(&group->next,
>>   				 &iommu->external_domain->group_list);
>> +			update_pinned_page_dirty_scope(iommu);
>>   			mutex_unlock(&iommu->lock);
>>   
>>   			return 0;
>> @@ -1958,6 +2021,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>>   done:
>>   	/* Delete the old one and insert new iova list */
>>   	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
>> +	update_pinned_page_dirty_scope(iommu);
>>   	mutex_unlock(&iommu->lock);
>>   	vfio_iommu_resv_free(&group_resv_regions);
>>   
> 
> At this point we've added an iommu backed group that can't possibly
> have pages pinned on behalf of this group yet, can't we just set
> iommu->pinned_page_dirty_scope = false?
> 

Right, changing.
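
That is, in the attach path simply doing (sketch):

	/* a newly attached iommu-backed group cannot have pinned pages yet */
	iommu->pinned_page_dirty_scope = false;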

> In the previous case, aren't we adding a non-iommu backed group, so
> should we presume the scope is pinned pages even before we have any?

Anyway, we are updating it when pages are pinned; I think it's better
not to presume.

> We could almost forego the iommu scope update, but it could be the
> first group added if we're going to preemptively assume the scope of
> the group.
> 
>> @@ -1972,6 +2036,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>>   out_free:
>>   	kfree(domain);
>>   	kfree(group);
>> +	update_pinned_page_dirty_scope(iommu);
> 
> This one looks like paranoia given how late we update when the group is
> added.
> 
>>   	mutex_unlock(&iommu->lock);
>>   	return ret;
>>   }
>> @@ -2176,6 +2241,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>>   		vfio_iommu_iova_free(&iova_copy);
>>   
>>   detach_group_done:
>> +	update_pinned_page_dirty_scope(iommu);
> 
> We only need to do this if the group we're removing does not have
> pinned page dirty scope, right?  I think we have all the info here to
> make that optimization.
> 

There could be more than one group that doesn't have pinned page dirty
scope, so it's better to run through the update_pinned_page_dirty_scope()
function.

Thanks,
Kirti

>>   	mutex_unlock(&iommu->lock);
>>   }
>>   
>> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
>> index e42a711a2800..da29802d6276 100644
>> --- a/include/linux/vfio.h
>> +++ b/include/linux/vfio.h
>> @@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
>>   					struct iommu_group *group);
>>   	void		(*detach_group)(void *iommu_data,
>>   					struct iommu_group *group);
>> -	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
>> +	int		(*pin_pages)(void *iommu_data,
>> +				     struct iommu_group *group,
>> +				     unsigned long *user_pfn,
>>   				     int npage, int prot,
>>   				     unsigned long *phys_pfn);
>>   	int		(*unpin_pages)(void *iommu_data,
> 


* Re: [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-17 18:28     ` Kirti Wankhede
@ 2020-03-17 19:00       ` Alex Williamson
  2020-03-18 15:00         ` Kirti Wankhede
  0 siblings, 1 reply; 20+ messages in thread
From: Alex Williamson @ 2020-03-17 19:00 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Tue, 17 Mar 2020 23:58:38 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 3/14/2020 2:19 AM, Alex Williamson wrote:
> > On Thu, 12 Mar 2020 23:23:27 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >   
> >> Added a check such that only singleton IOMMU groups can pin pages.
> >>  From the point when vendor driver pins any pages, consider IOMMU group
> >> dirty page scope to be limited to pinned pages.
> >>
> >> To optimize to avoid walking list often, added flag
> >> pinned_page_dirty_scope to indicate if all of the vfio_groups for each
> >> vfio_domain in the domain_list dirty page scope is limited to pinned
> >> pages. This flag is updated on first pinned pages request for that IOMMU
> >> group and on attaching/detaching group.
> >>
> >> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> >> Reviewed-by: Neo Jia <cjia@nvidia.com>
> >> ---
> >>   drivers/vfio/vfio.c             |  9 +++++-
> >>   drivers/vfio/vfio_iommu_type1.c | 72 +++++++++++++++++++++++++++++++++++++++--
> >>   include/linux/vfio.h            |  4 ++-
> >>   3 files changed, 80 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> >> index c8482624ca34..79108c1245a5 100644
> >> --- a/drivers/vfio/vfio.c
> >> +++ b/drivers/vfio/vfio.c
> >> @@ -85,6 +85,7 @@ struct vfio_group {
> >>   	atomic_t			opened;
> >>   	wait_queue_head_t		container_q;
> >>   	bool				noiommu;
> >> +	unsigned int			dev_counter;
> >>   	struct kvm			*kvm;
> >>   	struct blocking_notifier_head	notifier;
> >>   };
> >> @@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
> >>   
> >>   	mutex_lock(&group->device_lock);
> >>   	list_add(&device->group_next, &group->device_list);
> >> +	group->dev_counter++;
> >>   	mutex_unlock(&group->device_lock);
> >>   
> >>   	return device;
> >> @@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
> >>   	struct vfio_group *group = device->group;
> >>   
> >>   	list_del(&device->group_next);
> >> +	group->dev_counter--;
> >>   	mutex_unlock(&group->device_lock);
> >>   
> >>   	dev_set_drvdata(device->dev, NULL);
> >> @@ -1895,6 +1898,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
> >>   	if (!group)
> >>   		return -ENODEV;
> >>   
> >> +	if (group->dev_counter > 1)
> >> +		return -EINVAL;
> >> +
> >>   	ret = vfio_group_add_container_user(group);
> >>   	if (ret)
> >>   		goto err_pin_pages;
> >> @@ -1902,7 +1908,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
> >>   	container = group->container;
> >>   	driver = container->iommu_driver;
> >>   	if (likely(driver && driver->ops->pin_pages))
> >> -		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
> >> +		ret = driver->ops->pin_pages(container->iommu_data,
> >> +					     group->iommu_group, user_pfn,
> >>   					     npage, prot, phys_pfn);
> >>   	else
> >>   		ret = -ENOTTY;
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index 4f1f116feabc..18a284b230c0 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -71,6 +71,7 @@ struct vfio_iommu {
> >>   	bool			v2;
> >>   	bool			nesting;
> >>   	bool			dirty_page_tracking;
> >> +	bool			pinned_page_dirty_scope;
> >>   };
> >>   
> >>   struct vfio_domain {
> >> @@ -98,6 +99,7 @@ struct vfio_group {
> >>   	struct iommu_group	*iommu_group;
> >>   	struct list_head	next;
> >>   	bool			mdev_group;	/* An mdev group */
> >> +	bool			has_pinned_pages;  
> > 
> > I'm afraid over time this name will be confusing, should we simply
> > call it pinned_page_dirty_scope per vfio_group as well?   
> 
> Updating as you suggested, but I hope it doesn't look confusing.
> 
> >  We might have
> > to adapt this over time as we get new ways to dirty pages, but each
> > group voting towards the same value being set on the vfio_iommu object
> > seems like a good starting point.
> >   
> >>   };
> >>   
> >>   struct vfio_iova {
> >> @@ -129,6 +131,10 @@ struct vfio_regions {
> >>   static int put_pfn(unsigned long pfn, int prot);
> >>   static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> >>   
> >> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> >> +					       struct iommu_group *iommu_group);
> >> +
> >> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
> >>   /*
> >>    * This code handles mapping and unmapping of user data buffers
> >>    * into DMA'ble space using the IOMMU
> >> @@ -579,11 +585,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> >>   }
> >>   
> >>   static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >> +				      struct iommu_group *iommu_group,
> >>   				      unsigned long *user_pfn,
> >>   				      int npage, int prot,
> >>   				      unsigned long *phys_pfn)
> >>   {
> >>   	struct vfio_iommu *iommu = iommu_data;
> >> +	struct vfio_group *group;
> >>   	int i, j, ret;
> >>   	unsigned long remote_vaddr;
> >>   	struct vfio_dma *dma;
> >> @@ -662,8 +670,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >>   				   (vpfn->iova - dma->iova) >> pgshift, 1);
> >>   		}
> >>   	}
> >> -
> >>   	ret = i;
> >> +
> >> +	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
> >> +	if (!group->has_pinned_pages) {
> >> +		group->has_pinned_pages = true;
> >> +		update_pinned_page_dirty_scope(iommu);
> >> +	}
> >> +
> >>   	goto pin_done;
> >>   
> >>   pin_unwind:
> >> @@ -946,8 +960,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> >>   	npages = dma->size >> pgshift;
> >>   	bitmap_size = dirty_bitmap_bytes(npages);
> >>   
> >> -	/* mark all pages dirty if all pages are pinned and mapped. */
> >> -	if (dma->iommu_mapped)
> >> +	/*
> >> +	 * mark all pages dirty if any IOMMU capable device is not able
> >> +	 * to report dirty pages and all pages are pinned and mapped.
> >> +	 */
> >> +	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
> >>   		bitmap_set(dma->bitmap, 0, npages);
> >>   
> >>   	if (dma->bitmap) {
> >> @@ -1430,6 +1447,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
> >>   	return NULL;
> >>   }
> >>   
> >> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> >> +					       struct iommu_group *iommu_group)
> >> +{
> >> +	struct vfio_domain *domain;
> >> +	struct vfio_group *group = NULL;
> >> +
> >> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> >> +		group = find_iommu_group(domain, iommu_group);
> >> +		if (group)
> >> +			return group;
> >> +	}
> >> +
> >> +	if (iommu->external_domain)
> >> +		group = find_iommu_group(iommu->external_domain, iommu_group);
> >> +
> >> +	return group;
> >> +}
> >> +
> >> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
> >> +{
> >> +	struct vfio_domain *domain;
> >> +	struct vfio_group *group;
> >> +
> >> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> >> +		list_for_each_entry(group, &domain->group_list, next) {
> >> +			if (!group->has_pinned_pages) {
> >> +				iommu->pinned_page_dirty_scope = false;
> >> +				return;
> >> +			}
> >> +		}
> >> +	}
> >> +
> >> +	if (iommu->external_domain) {
> >> +		domain = iommu->external_domain;
> >> +		list_for_each_entry(group, &domain->group_list, next) {
> >> +			if (!group->has_pinned_pages) {
> >> +				iommu->pinned_page_dirty_scope = false;
> >> +				return;
> >> +			}
> >> +		}
> >> +	}
> >> +
> >> +	iommu->pinned_page_dirty_scope = true;
> >> +}
> >> +
> >>   static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
> >>   				  phys_addr_t *base)
> >>   {
> >> @@ -1836,6 +1898,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >>   
> >>   			list_add(&group->next,
> >>   				 &iommu->external_domain->group_list);
> >> +			update_pinned_page_dirty_scope(iommu);
> >>   			mutex_unlock(&iommu->lock);
> >>   
> >>   			return 0;
> >> @@ -1958,6 +2021,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >>   done:
> >>   	/* Delete the old one and insert new iova list */
> >>   	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
> >> +	update_pinned_page_dirty_scope(iommu);
> >>   	mutex_unlock(&iommu->lock);
> >>   	vfio_iommu_resv_free(&group_resv_regions);
> >>     
> > 
> > At this point we've added an iommu backed group that can't possibly
> > have pages pinned on behalf of this group yet, can't we just set
> > iommu->pinned_page_dirty_scope = false?
> >   
> 
> Right, changing.
> 
> > In the previous case, aren't we adding a non-iommu backed group, so
> > should we presume the scope is pinned pages even before we have any?  
> 
> Anyway, we are updating it when pages are pinned; I think it's better
> not to presume.

If there's no iommu backing then the device doesn't have access to
dirty the pages itself, how else will they get dirty?  Perhaps I was a
little loose in using the word "presume"; I think there's a proof that
the pages must have limited dirty-scope.

> > We could almost forego the iommu scope update, but it could be the
> > first group added if we're going to preemptively assume the scope of
> > the group.
> >   
> >> @@ -1972,6 +2036,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >>   out_free:
> >>   	kfree(domain);
> >>   	kfree(group);
> >> +	update_pinned_page_dirty_scope(iommu);  
> > 
> > This one looks like paranoia given how late we update when the group is
> > added.
> >   
> >>   	mutex_unlock(&iommu->lock);
> >>   	return ret;
> >>   }
> >> @@ -2176,6 +2241,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
> >>   		vfio_iommu_iova_free(&iova_copy);
> >>   
> >>   detach_group_done:
> >> +	update_pinned_page_dirty_scope(iommu);  
> > 
> > We only need to do this if the group we're removing does not have
> > pinned page dirty scope, right?  I think we have all the info here to
> > make that optimization.
> >   
> 
> There could be more than one group that doesn't have pinned page dirty
> scope, so it's better to run through the update_pinned_page_dirty_scope()
> function.

Maybe I stated it wrong above, but I think we have this table:


iommu|group
-----+--------+---------+
XXXXX|   0    |    1    |
-----+--------+---------+
  0  |   A    |    B    |
-----+--------+---------+
  1  |   C    |    D    |
-----+--------+---------+

A: If we are NOT dirty-page-scope at the iommu and we remove a group
that is NOT dirty-page-scope, we need to check because that might have
been the group preventing the iommu from being dirty-page-scope.

B: If we are NOT dirty-page-scope at the iommu and we remove a group
that IS dirty-page-scope, we know that group wasn't limiting the scope
at the iommu.

C: If the iommu IS dirty-page-scope, we can't remove a group that is
NOT dirty page scope, this case is impossible.

D: If the iommu IS dirty-page-scope and we remove a group that IS dirty-
page-scope, nothing changes.

So I think we only need to update on A, or A+C since C cannot happen.
In B and D, removing a group with dirty-page-scope cannot change the
iommu scope.  Thanks,

Alex
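
In code that could be as simple as the following sketch (assuming the
per-group flag ends up named pinned_page_dirty_scope as discussed):

/*
 * Only case A above requires recomputing the iommu-level scope; removing
 * a group that already has pinned-page dirty scope (B, D) cannot change
 * the aggregate, and C cannot happen.
 */
static void vfio_iommu_detach_update_dirty_scope(struct vfio_iommu *iommu,
						 struct vfio_group *group)
{
	if (!group->pinned_page_dirty_scope)
		update_pinned_page_dirty_scope(iommu);
}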

> >>   	mutex_unlock(&iommu->lock);
> >>   }
> >>   
> >> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> >> index e42a711a2800..da29802d6276 100644
> >> --- a/include/linux/vfio.h
> >> +++ b/include/linux/vfio.h
> >> @@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
> >>   					struct iommu_group *group);
> >>   	void		(*detach_group)(void *iommu_data,
> >>   					struct iommu_group *group);
> >> -	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
> >> +	int		(*pin_pages)(void *iommu_data,
> >> +				     struct iommu_group *group,
> >> +				     unsigned long *user_pfn,
> >>   				     int npage, int prot,
> >>   				     unsigned long *phys_pfn);
> >>   	int		(*unpin_pages)(void *iommu_data,  
> >   
> 



* Re: [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to for dirty pages tracking.
  2020-03-12 17:53 ` [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to " Kirti Wankhede
  2020-03-13 18:13   ` Alex Williamson
@ 2020-03-18 12:13   ` Dr. David Alan Gilbert
  2020-03-18 13:32     ` Alex Williamson
  1 sibling, 1 reply; 20+ messages in thread
From: Dr. David Alan Gilbert @ 2020-03-18 12:13 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: alex.williamson, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, cohuck, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, yan.y.zhao, qemu-devel, kvm

* Kirti Wankhede (kwankhede@nvidia.com) wrote:
> VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> - Start pinned and unpinned pages tracking while migration is active
> - Stop pinned and unpinned dirty pages tracking. This is also used to
>   stop dirty pages tracking if migration failed or cancelled.
> - Get dirty pages bitmap. This ioctl returns bitmap of dirty pages; it is
>   the user space application's responsibility to copy content of dirty
>   pages from source to destination during migration.
> 
> To prevent DoS attack, memory for bitmap is allocated per vfio_dma
> structure. Bitmap size is calculated considering smallest supported page
> size. Bitmap is allocated when dirty logging is enabled for those
> vfio_dmas whose vpfn list is not empty or whole range is mapped, in
> case of pass-through device.
> 
> Bitmap is populated for already pinned pages when bitmap is allocated for
> a vfio_dma with the smallest supported page size. Update bitmap from
> pinning and unpinning functions. When user application queries bitmap,
> check if requested page size is same as page size used to populate the
> bitmap. If it is equal, copy bitmap, but if not equal, return error.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 243 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 237 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index d386461e5d11..435e84269a28 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -70,6 +70,7 @@ struct vfio_iommu {
>  	unsigned int		dma_avail;
>  	bool			v2;
>  	bool			nesting;
> +	bool			dirty_page_tracking;
>  };
>  
>  struct vfio_domain {
> @@ -90,6 +91,7 @@ struct vfio_dma {
>  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
>  	struct task_struct	*task;
>  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> +	unsigned long		*bitmap;
>  };
>  
>  struct vfio_group {
> @@ -125,6 +127,7 @@ struct vfio_regions {
>  					(!list_empty(&iommu->domain_list))
>  
>  static int put_pfn(unsigned long pfn, int prot);
> +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>  
>  /*
>   * This code handles mapping and unmapping of user data buffers
> @@ -174,6 +177,76 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
>  	rb_erase(&old->node, &iommu->dma_list);
>  }
>  
> +static inline unsigned long dirty_bitmap_bytes(unsigned int npages)
> +{
> +	if (!npages)
> +		return 0;
> +
> +	return ALIGN(npages, BITS_PER_LONG) / sizeof(unsigned long);
> +}
> +
> +static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, unsigned long pgsize)
> +{
> +	if (!RB_EMPTY_ROOT(&dma->pfn_list) || dma->iommu_mapped) {
> +		unsigned long npages = dma->size / pgsize;
> +
> +		dma->bitmap = kvzalloc(dirty_bitmap_bytes(npages), GFP_KERNEL);
> +		if (!dma->bitmap)
> +			return -ENOMEM;
> +	}
> +	return 0;
> +}
> +
> +static int vfio_dma_all_bitmap_alloc(struct vfio_iommu *iommu,
> +				     unsigned long pgsize)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +	int ret;
> +
> +	for (; n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +		struct rb_node *n;
> +
> +		ret = vfio_dma_bitmap_alloc(dma, pgsize);
> +		if (ret) {
> +			struct rb_node *p = rb_prev(n);
> +
> +			for (; p; p = rb_prev(p)) {
> +				struct vfio_dma *dma = rb_entry(n,
> +							struct vfio_dma, node);
> +
> +				kfree(dma->bitmap);
> +				dma->bitmap = NULL;
> +			}
> +			return ret;
> +		}
> +
> +		if (!dma->bitmap)
> +			continue;
> +
> +		for (n = rb_first(&dma->pfn_list); n; n = rb_next(n)) {
> +			struct vfio_pfn *vpfn = rb_entry(n, struct vfio_pfn,
> +							 node);
> +
> +			bitmap_set(dma->bitmap,
> +				   (vpfn->iova - dma->iova) / pgsize, 1);
> +		}
> +	}
> +	return 0;
> +}
> +
> +static void vfio_dma_all_bitmap_free(struct vfio_iommu *iommu)
> +{
> +	struct rb_node *n = rb_first(&iommu->dma_list);
> +
> +	for (; n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> +		kfree(dma->bitmap);
> +		dma->bitmap = NULL;
> +	}
> +}
> +
>  /*
>   * Helper Functions for host iova-pfn list
>   */
> @@ -254,12 +327,16 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
>  	return vpfn;
>  }
>  
> -static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
> +static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn,
> +				  bool do_tracking, unsigned long pgsize)
>  {
>  	int ret = 0;
>  
>  	vpfn->ref_count--;
>  	if (!vpfn->ref_count) {
> +		if (do_tracking && dma->bitmap)
> +			bitmap_set(dma->bitmap,
> +				   (vpfn->iova - dma->iova) / pgsize, 1);
>  		ret = put_pfn(vpfn->pfn, dma->prot);
>  		vfio_remove_from_pfn_list(dma, vpfn);
>  	}
> @@ -484,7 +561,8 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
>  }
>  
>  static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> -				    bool do_accounting)
> +				    bool do_accounting, bool do_tracking,
> +				    unsigned long pgsize)
>  {
>  	int unlocked;
>  	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
> @@ -492,7 +570,7 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
>  	if (!vpfn)
>  		return 0;
>  
> -	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn);
> +	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn, do_tracking, pgsize);
>  
>  	if (do_accounting)
>  		vfio_lock_acct(dma, -unlocked, true);
> @@ -563,9 +641,26 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  
>  		ret = vfio_add_to_pfn_list(dma, iova, phys_pfn[i]);
>  		if (ret) {
> -			vfio_unpin_page_external(dma, iova, do_accounting);
> +			vfio_unpin_page_external(dma, iova, do_accounting,
> +						 false, 0);
>  			goto pin_unwind;
>  		}
> +
> +		if (iommu->dirty_page_tracking) {
> +			unsigned long pgshift =
> +					 __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			if (!dma->bitmap) {
> +				ret = vfio_dma_bitmap_alloc(dma, 1 << pgshift);
> +				if (ret) {
> +					vfio_unpin_page_external(dma, iova,
> +						 do_accounting, false, 0);
> +					goto pin_unwind;
> +				}
> +			}
> +			bitmap_set(dma->bitmap,
> +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> +		}
>  	}
>  
>  	ret = i;
> @@ -578,7 +673,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>  
>  		iova = user_pfn[j] << PAGE_SHIFT;
>  		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
> -		vfio_unpin_page_external(dma, iova, do_accounting);
> +		vfio_unpin_page_external(dma, iova, do_accounting, false, 0);
>  		phys_pfn[j] = 0;
>  	}
>  pin_done:
> @@ -612,7 +707,8 @@ static int vfio_iommu_type1_unpin_pages(void *iommu_data,
>  		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
>  		if (!dma)
>  			goto unpin_exit;
> -		vfio_unpin_page_external(dma, iova, do_accounting);
> +		vfio_unpin_page_external(dma, iova, do_accounting,
> +					 iommu->dirty_page_tracking, PAGE_SIZE);
>  	}
>  
>  unpin_exit:
> @@ -800,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>  	vfio_unmap_unpin(iommu, dma, true);
>  	vfio_unlink_dma(iommu, dma);
>  	put_task_struct(dma->task);
> +	kfree(dma->bitmap);
>  	kfree(dma);
>  	iommu->dma_avail++;
>  }
> @@ -830,6 +927,54 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
>  	return bitmap;
>  }
>  
> +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> +				  size_t size, uint64_t pgsize,
> +				  unsigned char __user *bitmap)
> +{
> +	struct vfio_dma *dma;
> +	unsigned long pgshift = __ffs(pgsize);
> +	unsigned int npages, bitmap_size;
> +
> +	dma = vfio_find_dma(iommu, iova, 1);
> +
> +	if (!dma)
> +		return -EINVAL;
> +
> +	if (dma->iova != iova || dma->size != size)
> +		return -EINVAL;
> +
> +	npages = dma->size >> pgshift;
> +	bitmap_size = dirty_bitmap_bytes(npages);
> +
> +	/* mark all pages dirty if all pages are pinned and mapped. */
> +	if (dma->iommu_mapped)
> +		bitmap_set(dma->bitmap, 0, npages);
> +
> +	if (dma->bitmap) {
> +		if (copy_to_user((void __user *)bitmap, dma->bitmap,
> +				 bitmap_size))
> +			return -EFAULT;
> +
> +		memset(dma->bitmap, 0, bitmap_size);
> +	}
> +	return 0;
> +}
> +
> +static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
> +{
> +	long bsize;
> +
> +	if (!bitmap_size || bitmap_size > SIZE_MAX)
> +		return -EINVAL;
> +
> +	bsize = dirty_bitmap_bytes(npages);
> +
> +	if (bitmap_size < bsize)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
>  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
>  			     struct vfio_iommu_type1_dma_unmap *unmap)
>  {
> @@ -2277,6 +2422,92 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>  
>  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>  			-EFAULT : 0;
> +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> +		struct vfio_iommu_type1_dirty_bitmap dirty;
> +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> +		int ret;
> +
> +		if (!iommu->v2)
> +			return -EACCES;
> +
> +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> +				    flags);
> +
> +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> +			return -EFAULT;
> +
> +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> +			return -EINVAL;
> +
> +		/* only one flag should be set at a time */
> +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> +			return -EINVAL;

It seems a bit odd to use a set of ORable flags when only one can be set
at a time.

> +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> +			unsigned long iommu_pgsize =
> +					1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			mutex_lock(&iommu->lock);
> +			ret = vfio_dma_all_bitmap_alloc(iommu, iommu_pgsize);
> +			if (!ret)
> +				iommu->dirty_page_tracking = true;
> +			mutex_unlock(&iommu->lock);
> +			return ret;
> +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> +			mutex_lock(&iommu->lock);
> +			if (iommu->dirty_page_tracking) {
> +				iommu->dirty_page_tracking = false;
> +				vfio_dma_all_bitmap_free(iommu);
> +			}
> +			mutex_unlock(&iommu->lock);
> +			return 0;
> +		} else if (dirty.flags &
> +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> +			struct vfio_iommu_type1_dirty_bitmap_get range;
> +			unsigned long pgshift;
> +			size_t data_size = dirty.argsz - minsz;
> +			uint64_t iommu_pgsize =
> +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> +
> +			if (!data_size || data_size < sizeof(range))
> +				return -EINVAL;
> +
> +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> +					   sizeof(range)))
> +				return -EFAULT;
> +
> +			// allow only min supported pgsize
> +			if (range.pgsize != iommu_pgsize)
> +				return -EINVAL;
> +			if (range.iova & (iommu_pgsize - 1))
> +				return -EINVAL;
> +			if (!range.size || range.size & (iommu_pgsize - 1))
> +				return -EINVAL;
> +			if (range.iova + range.size < range.iova)
> +				return -EINVAL;
> +			if (!access_ok((void __user *)range.bitmap,
> +				       range.bitmap_size))
> +				return -EINVAL;
> +
> +			pgshift = __ffs(range.pgsize);
> +			ret = verify_bitmap_size(range.size >> pgshift,
> +						 range.bitmap_size);
> +			if (ret)
> +				return ret;
> +
> +			mutex_lock(&iommu->lock);
> +			if (iommu->dirty_page_tracking)
> +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> +					 range.size, range.pgsize,
> +					 (unsigned char __user *)range.bitmap);
> +			else
> +				ret = -EINVAL;
> +			mutex_unlock(&iommu->lock);
> +
> +			return ret;
> +		}
>  	}
>  
>  	return -ENOTTY;
> -- 
> 2.7.0
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to for dirty pages tracking.
  2020-03-18 12:13   ` Dr. David Alan Gilbert
@ 2020-03-18 13:32     ` Alex Williamson
  0 siblings, 0 replies; 20+ messages in thread
From: Alex Williamson @ 2020-03-18 13:32 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Kirti Wankhede, cjia, kevin.tian, ziye.yang, changpeng.liu,
	yi.l.liu, mlevitsk, eskultet, cohuck, jonathan.davies, eauger,
	aik, pasic, felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue,
	zhi.a.wang, yan.y.zhao, qemu-devel, kvm

On Wed, 18 Mar 2020 12:13:12 +0000
"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:

> * Kirti Wankhede (kwankhede@nvidia.com) wrote:
> > VFIO_IOMMU_DIRTY_PAGES ioctl performs three operations:
> > - Start pinned and unpinned pages tracking while migration is active
> > - Stop pinned and unpinned dirty pages tracking. This is also used to
> >   stop dirty pages tracking if migration failed or cancelled.
> > - Get dirty pages bitmap. This ioctl returns the bitmap of dirty pages;
> >   it is the user space application's responsibility to copy the content
> >   of dirty pages from source to destination during migration.
> > 
> > To prevent a DoS attack, memory for the bitmap is allocated per vfio_dma
> > structure. The bitmap size is calculated considering the smallest supported
> > page size. The bitmap is allocated when dirty logging is enabled for those
> > vfio_dmas whose vpfn list is not empty or whose whole range is mapped, as in
> > the case of a pass-through device.
> > 
> > Bitmap is populated for already pinned pages when bitmap is allocated for
> > a vfio_dma with the smallest supported page size. Update bitmap from
> > pinning and unpinning functions. When the user application queries the
> > bitmap, check if the requested page size is the same as the page size used
> > to populate the bitmap. If it is equal, copy the bitmap; if not, return an
> > error.
> > 
> > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > ---
> >  drivers/vfio/vfio_iommu_type1.c | 243 +++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 237 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > index d386461e5d11..435e84269a28 100644
> > --- a/drivers/vfio/vfio_iommu_type1.c
> > +++ b/drivers/vfio/vfio_iommu_type1.c
> > @@ -70,6 +70,7 @@ struct vfio_iommu {
> >  	unsigned int		dma_avail;
> >  	bool			v2;
> >  	bool			nesting;
> > +	bool			dirty_page_tracking;
> >  };
> >  
> >  struct vfio_domain {
> > @@ -90,6 +91,7 @@ struct vfio_dma {
> >  	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
> >  	struct task_struct	*task;
> >  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> > +	unsigned long		*bitmap;
> >  };
> >  
> >  struct vfio_group {
> > @@ -125,6 +127,7 @@ struct vfio_regions {
> >  					(!list_empty(&iommu->domain_list))
> >  
> >  static int put_pfn(unsigned long pfn, int prot);
> > +static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> >  
> >  /*
> >   * This code handles mapping and unmapping of user data buffers
> > @@ -174,6 +177,76 @@ static void vfio_unlink_dma(struct vfio_iommu *iommu, struct vfio_dma *old)
> >  	rb_erase(&old->node, &iommu->dma_list);
> >  }
> >  
> > +static inline unsigned long dirty_bitmap_bytes(unsigned int npages)
> > +{
> > +	if (!npages)
> > +		return 0;
> > +
> > +	return ALIGN(npages, BITS_PER_LONG) / sizeof(unsigned long);
> > +}
> > +
> > +static int vfio_dma_bitmap_alloc(struct vfio_dma *dma, unsigned long pgsize)
> > +{
> > +	if (!RB_EMPTY_ROOT(&dma->pfn_list) || dma->iommu_mapped) {
> > +		unsigned long npages = dma->size / pgsize;
> > +
> > +		dma->bitmap = kvzalloc(dirty_bitmap_bytes(npages), GFP_KERNEL);
> > +		if (!dma->bitmap)
> > +			return -ENOMEM;
> > +	}
> > +	return 0;
> > +}
> > +
> > +static int vfio_dma_all_bitmap_alloc(struct vfio_iommu *iommu,
> > +				     unsigned long pgsize)
> > +{
> > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > +	int ret;
> > +
> > +	for (; n; n = rb_next(n)) {
> > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > +		struct rb_node *n;
> > +
> > +		ret = vfio_dma_bitmap_alloc(dma, pgsize);
> > +		if (ret) {
> > +			struct rb_node *p = rb_prev(n);
> > +
> > +			for (; p; p = rb_prev(p)) {
> > +				struct vfio_dma *dma = rb_entry(n,
> > +							struct vfio_dma, node);
> > +
> > +				kfree(dma->bitmap);
> > +				dma->bitmap = NULL;
> > +			}
> > +			return ret;
> > +		}
> > +
> > +		if (!dma->bitmap)
> > +			continue;
> > +
> > +		for (n = rb_first(&dma->pfn_list); n; n = rb_next(n)) {
> > +			struct vfio_pfn *vpfn = rb_entry(n, struct vfio_pfn,
> > +							 node);
> > +
> > +			bitmap_set(dma->bitmap,
> > +				   (vpfn->iova - dma->iova) / pgsize, 1);
> > +		}
> > +	}
> > +	return 0;
> > +}
> > +
> > +static void vfio_dma_all_bitmap_free(struct vfio_iommu *iommu)
> > +{
> > +	struct rb_node *n = rb_first(&iommu->dma_list);
> > +
> > +	for (; n; n = rb_next(n)) {
> > +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> > +
> > +		kfree(dma->bitmap);
> > +		dma->bitmap = NULL;
> > +	}
> > +}
> > +
> >  /*
> >   * Helper Functions for host iova-pfn list
> >   */
> > @@ -254,12 +327,16 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
> >  	return vpfn;
> >  }
> >  
> > -static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
> > +static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn,
> > +				  bool do_tracking, unsigned long pgsize)
> >  {
> >  	int ret = 0;
> >  
> >  	vpfn->ref_count--;
> >  	if (!vpfn->ref_count) {
> > +		if (do_tracking && dma->bitmap)
> > +			bitmap_set(dma->bitmap,
> > +				   (vpfn->iova - dma->iova) / pgsize, 1);
> >  		ret = put_pfn(vpfn->pfn, dma->prot);
> >  		vfio_remove_from_pfn_list(dma, vpfn);
> >  	}
> > @@ -484,7 +561,8 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
> >  }
> >  
> >  static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> > -				    bool do_accounting)
> > +				    bool do_accounting, bool do_tracking,
> > +				    unsigned long pgsize)
> >  {
> >  	int unlocked;
> >  	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
> > @@ -492,7 +570,7 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> >  	if (!vpfn)
> >  		return 0;
> >  
> > -	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn);
> > +	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn, do_tracking, pgsize);
> >  
> >  	if (do_accounting)
> >  		vfio_lock_acct(dma, -unlocked, true);
> > @@ -563,9 +641,26 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >  
> >  		ret = vfio_add_to_pfn_list(dma, iova, phys_pfn[i]);
> >  		if (ret) {
> > -			vfio_unpin_page_external(dma, iova, do_accounting);
> > +			vfio_unpin_page_external(dma, iova, do_accounting,
> > +						 false, 0);
> >  			goto pin_unwind;
> >  		}
> > +
> > +		if (iommu->dirty_page_tracking) {
> > +			unsigned long pgshift =
> > +					 __ffs(vfio_pgsize_bitmap(iommu));
> > +
> > +			if (!dma->bitmap) {
> > +				ret = vfio_dma_bitmap_alloc(dma, 1 << pgshift);
> > +				if (ret) {
> > +					vfio_unpin_page_external(dma, iova,
> > +						 do_accounting, false, 0);
> > +					goto pin_unwind;
> > +				}
> > +			}
> > +			bitmap_set(dma->bitmap,
> > +				   (vpfn->iova - dma->iova) >> pgshift, 1);
> > +		}
> >  	}
> >  
> >  	ret = i;
> > @@ -578,7 +673,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >  
> >  		iova = user_pfn[j] << PAGE_SHIFT;
> >  		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
> > -		vfio_unpin_page_external(dma, iova, do_accounting);
> > +		vfio_unpin_page_external(dma, iova, do_accounting, false, 0);
> >  		phys_pfn[j] = 0;
> >  	}
> >  pin_done:
> > @@ -612,7 +707,8 @@ static int vfio_iommu_type1_unpin_pages(void *iommu_data,
> >  		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
> >  		if (!dma)
> >  			goto unpin_exit;
> > -		vfio_unpin_page_external(dma, iova, do_accounting);
> > +		vfio_unpin_page_external(dma, iova, do_accounting,
> > +					 iommu->dirty_page_tracking, PAGE_SIZE);
> >  	}
> >  
> >  unpin_exit:
> > @@ -800,6 +896,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
> >  	vfio_unmap_unpin(iommu, dma, true);
> >  	vfio_unlink_dma(iommu, dma);
> >  	put_task_struct(dma->task);
> > +	kfree(dma->bitmap);
> >  	kfree(dma);
> >  	iommu->dma_avail++;
> >  }
> > @@ -830,6 +927,54 @@ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> >  	return bitmap;
> >  }
> >  
> > +static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> > +				  size_t size, uint64_t pgsize,
> > +				  unsigned char __user *bitmap)
> > +{
> > +	struct vfio_dma *dma;
> > +	unsigned long pgshift = __ffs(pgsize);
> > +	unsigned int npages, bitmap_size;
> > +
> > +	dma = vfio_find_dma(iommu, iova, 1);
> > +
> > +	if (!dma)
> > +		return -EINVAL;
> > +
> > +	if (dma->iova != iova || dma->size != size)
> > +		return -EINVAL;
> > +
> > +	npages = dma->size >> pgshift;
> > +	bitmap_size = dirty_bitmap_bytes(npages);
> > +
> > +	/* mark all pages dirty if all pages are pinned and mapped. */
> > +	if (dma->iommu_mapped)
> > +		bitmap_set(dma->bitmap, 0, npages);
> > +
> > +	if (dma->bitmap) {
> > +		if (copy_to_user((void __user *)bitmap, dma->bitmap,
> > +				 bitmap_size))
> > +			return -EFAULT;
> > +
> > +		memset(dma->bitmap, 0, bitmap_size);
> > +	}
> > +	return 0;
> > +}
> > +
> > +static int verify_bitmap_size(unsigned long npages, unsigned long bitmap_size)
> > +{
> > +	long bsize;
> > +
> > +	if (!bitmap_size || bitmap_size > SIZE_MAX)
> > +		return -EINVAL;
> > +
> > +	bsize = dirty_bitmap_bytes(npages);
> > +
> > +	if (bitmap_size < bsize)
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> > +
> >  static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
> >  			     struct vfio_iommu_type1_dma_unmap *unmap)
> >  {
> > @@ -2277,6 +2422,92 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> >  
> >  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
> >  			-EFAULT : 0;
> > +	} else if (cmd == VFIO_IOMMU_DIRTY_PAGES) {
> > +		struct vfio_iommu_type1_dirty_bitmap dirty;
> > +		uint32_t mask = VFIO_IOMMU_DIRTY_PAGES_FLAG_START |
> > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP |
> > +				VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
> > +		int ret;
> > +
> > +		if (!iommu->v2)
> > +			return -EACCES;
> > +
> > +		minsz = offsetofend(struct vfio_iommu_type1_dirty_bitmap,
> > +				    flags);
> > +
> > +		if (copy_from_user(&dirty, (void __user *)arg, minsz))
> > +			return -EFAULT;
> > +
> > +		if (dirty.argsz < minsz || dirty.flags & ~mask)
> > +			return -EINVAL;
> > +
> > +		/* only one flag should be set at a time */
> > +		if (__ffs(dirty.flags) != __fls(dirty.flags))
> > +			return -EINVAL;  
> 
> It seems a bit odd to use a set of ORable flags when only one can be set
> at a time.

Is it really?  It might in the future be true that there are some
combinations of flags that are valid, but for the currently defined
flags, they're mutually exclusive.  There's no combination of START |
STOP | GET that makes sense.  Don't read into the current
implementation that this is fixed in the API except as it relates to
the currently defined flags.  Thanks,

Alex
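
For concreteness, here is a minimal user-space sketch of the one-flag-at-a-time
usage discussed above, written against the uAPI proposed in patches 3/7 and 4/7
of this series. The data[] member name, the bitmap field being a 64-bit value,
and <linux/vfio.h> already carrying these definitions are assumptions taken
from the kernel-side code quoted in this thread, so treat this as an
illustration rather than a tested example.

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>		/* assumed to carry the proposed definitions */

/* Start dirty page tracking, read the bitmap for one mapping, then stop. */
static int dirty_pages_example(int container_fd, uint64_t iova,
			       uint64_t size, uint64_t pgsize)
{
	struct vfio_iommu_type1_dirty_bitmap start = {
		.argsz = sizeof(start),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
	};
	struct vfio_iommu_type1_dirty_bitmap stop = {
		.argsz = sizeof(stop),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP,
	};
	size_t argsz = sizeof(struct vfio_iommu_type1_dirty_bitmap) +
		       sizeof(struct vfio_iommu_type1_dirty_bitmap_get);
	struct vfio_iommu_type1_dirty_bitmap *get = calloc(1, argsz);
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	uint64_t npages = size / pgsize;
	uint64_t bitmap_bytes = ((npages + 63) / 64) * 8;	/* u64-aligned bytes */
	void *bitmap = calloc(1, bitmap_bytes);
	int ret = -1;

	if (!get || !bitmap)
		goto out;

	/* exactly one flag per call: START, then GET_BITMAP, then STOP */
	ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &start);
	if (ret)
		goto out;

	get->argsz = argsz;
	get->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;
	range = (struct vfio_iommu_type1_dirty_bitmap_get *)get->data;
	range->iova = iova;		/* must match an existing vfio_dma exactly */
	range->size = size;
	range->pgsize = pgsize;		/* minimum supported IOMMU page size only */
	range->bitmap_size = bitmap_bytes;
	range->bitmap = (uintptr_t)bitmap;	/* assumed to be a __u64 */
	ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, get);

	ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &stop);
out:
	free(get);
	free(bitmap);
	return ret;
}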
 
> > +		if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
> > +			unsigned long iommu_pgsize =
> > +					1 << __ffs(vfio_pgsize_bitmap(iommu));
> > +
> > +			mutex_lock(&iommu->lock);
> > +			ret = vfio_dma_all_bitmap_alloc(iommu, iommu_pgsize);
> > +			if (!ret)
> > +				iommu->dirty_page_tracking = true;
> > +			mutex_unlock(&iommu->lock);
> > +			return ret;
> > +		} else if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP) {
> > +			mutex_lock(&iommu->lock);
> > +			if (iommu->dirty_page_tracking) {
> > +				iommu->dirty_page_tracking = false;
> > +				vfio_dma_all_bitmap_free(iommu);
> > +			}
> > +			mutex_unlock(&iommu->lock);
> > +			return 0;
> > +		} else if (dirty.flags &
> > +				 VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP) {
> > +			struct vfio_iommu_type1_dirty_bitmap_get range;
> > +			unsigned long pgshift;
> > +			size_t data_size = dirty.argsz - minsz;
> > +			uint64_t iommu_pgsize =
> > +					 1 << __ffs(vfio_pgsize_bitmap(iommu));
> > +
> > +			if (!data_size || data_size < sizeof(range))
> > +				return -EINVAL;
> > +
> > +			if (copy_from_user(&range, (void __user *)(arg + minsz),
> > +					   sizeof(range)))
> > +				return -EFAULT;
> > +
> > +			// allow only min supported pgsize
> > +			if (range.pgsize != iommu_pgsize)
> > +				return -EINVAL;
> > +			if (range.iova & (iommu_pgsize - 1))
> > +				return -EINVAL;
> > +			if (!range.size || range.size & (iommu_pgsize - 1))
> > +				return -EINVAL;
> > +			if (range.iova + range.size < range.iova)
> > +				return -EINVAL;
> > +			if (!access_ok((void __user *)range.bitmap,
> > +				       range.bitmap_size))
> > +				return -EINVAL;
> > +
> > +			pgshift = __ffs(range.pgsize);
> > +			ret = verify_bitmap_size(range.size >> pgshift,
> > +						 range.bitmap_size);
> > +			if (ret)
> > +				return ret;
> > +
> > +			mutex_lock(&iommu->lock);
> > +			if (iommu->dirty_page_tracking)
> > +				ret = vfio_iova_dirty_bitmap(iommu, range.iova,
> > +					 range.size, range.pgsize,
> > +					 (unsigned char __user *)range.bitmap);
> > +			else
> > +				ret = -EINVAL;
> > +			mutex_unlock(&iommu->lock);
> > +
> > +			return ret;
> > +		}
> >  	}
> >  
> >  	return -ENOTTY;
> > -- 
> > 2.7.0
> >   
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-17 19:00       ` Alex Williamson
@ 2020-03-18 15:00         ` Kirti Wankhede
  2020-03-18 17:01           ` Alex Williamson
  0 siblings, 1 reply; 20+ messages in thread
From: Kirti Wankhede @ 2020-03-18 15:00 UTC (permalink / raw)
  To: Alex Williamson
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm



On 3/18/2020 12:30 AM, Alex Williamson wrote:
> On Tue, 17 Mar 2020 23:58:38 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> 
>> On 3/14/2020 2:19 AM, Alex Williamson wrote:
>>> On Thu, 12 Mar 2020 23:23:27 +0530
>>> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>>>    
>>>> Added a check such that only singleton IOMMU groups can pin pages.
>>>>   From the point when vendor driver pins any pages, consider IOMMU group
>>>> dirty page scope to be limited to pinned pages.
>>>>
>>>> To avoid walking the list often, added a flag,
>>>> pinned_page_dirty_scope, to indicate whether the dirty page scope of all
>>>> of the vfio_groups for each vfio_domain in the domain_list is limited to
>>>> pinned pages. This flag is updated on the first pinned pages request for
>>>> that IOMMU group and on attaching/detaching a group.
>>>>
>>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
>>>> Reviewed-by: Neo Jia <cjia@nvidia.com>
>>>> ---
>>>>    drivers/vfio/vfio.c             |  9 +++++-
>>>>    drivers/vfio/vfio_iommu_type1.c | 72 +++++++++++++++++++++++++++++++++++++++--
>>>>    include/linux/vfio.h            |  4 ++-
>>>>    3 files changed, 80 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
>>>> index c8482624ca34..79108c1245a5 100644
>>>> --- a/drivers/vfio/vfio.c
>>>> +++ b/drivers/vfio/vfio.c
>>>> @@ -85,6 +85,7 @@ struct vfio_group {
>>>>    	atomic_t			opened;
>>>>    	wait_queue_head_t		container_q;
>>>>    	bool				noiommu;
>>>> +	unsigned int			dev_counter;
>>>>    	struct kvm			*kvm;
>>>>    	struct blocking_notifier_head	notifier;
>>>>    };
>>>> @@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
>>>>    
>>>>    	mutex_lock(&group->device_lock);
>>>>    	list_add(&device->group_next, &group->device_list);
>>>> +	group->dev_counter++;
>>>>    	mutex_unlock(&group->device_lock);
>>>>    
>>>>    	return device;
>>>> @@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
>>>>    	struct vfio_group *group = device->group;
>>>>    
>>>>    	list_del(&device->group_next);
>>>> +	group->dev_counter--;
>>>>    	mutex_unlock(&group->device_lock);
>>>>    
>>>>    	dev_set_drvdata(device->dev, NULL);
>>>> @@ -1895,6 +1898,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>>>>    	if (!group)
>>>>    		return -ENODEV;
>>>>    
>>>> +	if (group->dev_counter > 1)
>>>> +		return -EINVAL;
>>>> +
>>>>    	ret = vfio_group_add_container_user(group);
>>>>    	if (ret)
>>>>    		goto err_pin_pages;
>>>> @@ -1902,7 +1908,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
>>>>    	container = group->container;
>>>>    	driver = container->iommu_driver;
>>>>    	if (likely(driver && driver->ops->pin_pages))
>>>> -		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
>>>> +		ret = driver->ops->pin_pages(container->iommu_data,
>>>> +					     group->iommu_group, user_pfn,
>>>>    					     npage, prot, phys_pfn);
>>>>    	else
>>>>    		ret = -ENOTTY;
>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>>> index 4f1f116feabc..18a284b230c0 100644
>>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>>> @@ -71,6 +71,7 @@ struct vfio_iommu {
>>>>    	bool			v2;
>>>>    	bool			nesting;
>>>>    	bool			dirty_page_tracking;
>>>> +	bool			pinned_page_dirty_scope;
>>>>    };
>>>>    
>>>>    struct vfio_domain {
>>>> @@ -98,6 +99,7 @@ struct vfio_group {
>>>>    	struct iommu_group	*iommu_group;
>>>>    	struct list_head	next;
>>>>    	bool			mdev_group;	/* An mdev group */
>>>> +	bool			has_pinned_pages;
>>>
>>> I'm afraid over time this name will be confusing, should we simply
>>> call it pinned_page_dirty_scope per vfio_group as well?
>>
>> Updating as you suggested, but I hope it doesn't look confusing.
>>
>>>   We might have
>>> to adapt this over time as we get new ways to dirty pages, but each
>>> group voting towards the same value being set on the vfio_iommu object
>>> seems like a good starting point.
>>>    
>>>>    };
>>>>    
>>>>    struct vfio_iova {
>>>> @@ -129,6 +131,10 @@ struct vfio_regions {
>>>>    static int put_pfn(unsigned long pfn, int prot);
>>>>    static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
>>>>    
>>>> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
>>>> +					       struct iommu_group *iommu_group);
>>>> +
>>>> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
>>>>    /*
>>>>     * This code handles mapping and unmapping of user data buffers
>>>>     * into DMA'ble space using the IOMMU
>>>> @@ -579,11 +585,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
>>>>    }
>>>>    
>>>>    static int vfio_iommu_type1_pin_pages(void *iommu_data,
>>>> +				      struct iommu_group *iommu_group,
>>>>    				      unsigned long *user_pfn,
>>>>    				      int npage, int prot,
>>>>    				      unsigned long *phys_pfn)
>>>>    {
>>>>    	struct vfio_iommu *iommu = iommu_data;
>>>> +	struct vfio_group *group;
>>>>    	int i, j, ret;
>>>>    	unsigned long remote_vaddr;
>>>>    	struct vfio_dma *dma;
>>>> @@ -662,8 +670,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
>>>>    				   (vpfn->iova - dma->iova) >> pgshift, 1);
>>>>    		}
>>>>    	}
>>>> -
>>>>    	ret = i;
>>>> +
>>>> +	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
>>>> +	if (!group->has_pinned_pages) {
>>>> +		group->has_pinned_pages = true;
>>>> +		update_pinned_page_dirty_scope(iommu);
>>>> +	}
>>>> +
>>>>    	goto pin_done;
>>>>    
>>>>    pin_unwind:
>>>> @@ -946,8 +960,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
>>>>    	npages = dma->size >> pgshift;
>>>>    	bitmap_size = dirty_bitmap_bytes(npages);
>>>>    
>>>> -	/* mark all pages dirty if all pages are pinned and mapped. */
>>>> -	if (dma->iommu_mapped)
>>>> +	/*
>>>> +	 * mark all pages dirty if any IOMMU capable device is not able
>>>> +	 * to report dirty pages and all pages are pinned and mapped.
>>>> +	 */
>>>> +	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
>>>>    		bitmap_set(dma->bitmap, 0, npages);
>>>>    
>>>>    	if (dma->bitmap) {
>>>> @@ -1430,6 +1447,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
>>>>    	return NULL;
>>>>    }
>>>>    
>>>> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
>>>> +					       struct iommu_group *iommu_group)
>>>> +{
>>>> +	struct vfio_domain *domain;
>>>> +	struct vfio_group *group = NULL;
>>>> +
>>>> +	list_for_each_entry(domain, &iommu->domain_list, next) {
>>>> +		group = find_iommu_group(domain, iommu_group);
>>>> +		if (group)
>>>> +			return group;
>>>> +	}
>>>> +
>>>> +	if (iommu->external_domain)
>>>> +		group = find_iommu_group(iommu->external_domain, iommu_group);
>>>> +
>>>> +	return group;
>>>> +}
>>>> +
>>>> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
>>>> +{
>>>> +	struct vfio_domain *domain;
>>>> +	struct vfio_group *group;
>>>> +
>>>> +	list_for_each_entry(domain, &iommu->domain_list, next) {
>>>> +		list_for_each_entry(group, &domain->group_list, next) {
>>>> +			if (!group->has_pinned_pages) {
>>>> +				iommu->pinned_page_dirty_scope = false;
>>>> +				return;
>>>> +			}
>>>> +		}
>>>> +	}
>>>> +
>>>> +	if (iommu->external_domain) {
>>>> +		domain = iommu->external_domain;
>>>> +		list_for_each_entry(group, &domain->group_list, next) {
>>>> +			if (!group->has_pinned_pages) {
>>>> +				iommu->pinned_page_dirty_scope = false;
>>>> +				return;
>>>> +			}
>>>> +		}
>>>> +	}
>>>> +
>>>> +	iommu->pinned_page_dirty_scope = true;
>>>> +}
>>>> +
>>>>    static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
>>>>    				  phys_addr_t *base)
>>>>    {
>>>> @@ -1836,6 +1898,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>>>>    
>>>>    			list_add(&group->next,
>>>>    				 &iommu->external_domain->group_list);
>>>> +			update_pinned_page_dirty_scope(iommu);
>>>>    			mutex_unlock(&iommu->lock);
>>>>    
>>>>    			return 0;
>>>> @@ -1958,6 +2021,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>>>>    done:
>>>>    	/* Delete the old one and insert new iova list */
>>>>    	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
>>>> +	update_pinned_page_dirty_scope(iommu);
>>>>    	mutex_unlock(&iommu->lock);
>>>>    	vfio_iommu_resv_free(&group_resv_regions);
>>>>      
>>>
>>> At this point we've added an iommu backed group that can't possibly
>>> have pages pinned on behalf of this group yet, can't we just set
>>> iommu->pinned_page_dirty_scope = false?
>>>    
>>
>> Right, changing.
>>
>>> In the previous case, aren't we adding a non-iommu backed group, so
>>> should we presume the scope is pinned pages even before we have any?
>>
>> Anyway, we are updating it when pages are pinned; I think it's better not to
>> presume.
> 
> If there's no iommu backing then the device doesn't have access to
> dirty the pages itself, how else will they get dirty?  Perhaps I was a
> little loose in using the word "presume"; I think there's a proof that
> the pages must have limited dirty-scope.
> 

We need to handle the cases below with a non-iommu backed device:
1. Only non-iommu mdev device
group->pinned_page_dirty_scope = true;
update_pinned_page_dirty_scope()=>iommu->pinned_page_dirty_scope=true

2. First non-iommu mdev is attached then iommu backed device attached.
1st non-iommu mdev device attached
group->pinned_page_dirty_scope = true;
update_pinned_page_dirty_scope()=>iommu->pinned_page_dirty_scope=true

2nd iommu backed device attached:
iommu->pinned_page_dirty_scope = false

3. First iommu backed devices are attached then non-iommu backed devices 
attached
For iommu backed device attached
iommu->pinned_page_dirty_scope = false

Last non-iommu mdev device attached
group->pinned_page_dirty_scope = true;
update_pinned_page_dirty_scope()=>iommu->pinned_page_dirty_scope=false

I think we can set group->pinned_page_dirty_scope = true, but not the 
iommu->pinned_page_dirty_scope.

Then if iommu backed device's driver pins pages through vfio_pin_pages(): 	
group->pinned_page_dirty_scope = true;
update_pinned_page_dirty_scope() will change 
iommu->pinned_page_dirty_scope based on current group list - if any 
group in the list doesn't support dirty scope - set false
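
As a compact illustration of the three cases above, the handling could look
like the sketch below. The structures and update_pinned_page_dirty_scope()
are the ones from the v13 patch quoted in this thread; the helper names and
the exact call sites are assumptions about the next revision, not code from
the posted series.

/* non-IOMMU-backed (mdev) group attach: it can only dirty what it pins */
static void vfio_attach_mdev_group_scope(struct vfio_iommu *iommu,
					 struct vfio_group *group)
{
	group->pinned_page_dirty_scope = true;
	/* cases 1 and 2: iommu flag becomes true only if every group votes true */
	update_pinned_page_dirty_scope(iommu);
}

/* IOMMU-backed group attach: nothing is pinned on its behalf yet */
static void vfio_attach_iommu_group_scope(struct vfio_iommu *iommu,
					  struct vfio_group *group)
{
	group->pinned_page_dirty_scope = false;
	iommu->pinned_page_dirty_scope = false;		/* cases 2 and 3 */
}

/* first vfio_pin_pages() routed through an IOMMU-backed group's vendor driver */
static void vfio_group_pin_scope(struct vfio_iommu *iommu,
				 struct vfio_group *group)
{
	if (!group->pinned_page_dirty_scope) {
		group->pinned_page_dirty_scope = true;
		update_pinned_page_dirty_scope(iommu);	/* re-vote across all groups */
	}
}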

>>> We could almost forego the iommu scope update, but it could be the
>>> first group added if we're going to preemptively assume the scope of
>>> the group.
>>>    
>>>> @@ -1972,6 +2036,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>>>>    out_free:
>>>>    	kfree(domain);
>>>>    	kfree(group);
>>>> +	update_pinned_page_dirty_scope(iommu);
>>>
>>> This one looks like paranoia given how late we update when the group is
>>> added.
>>>    
>>>>    	mutex_unlock(&iommu->lock);
>>>>    	return ret;
>>>>    }
>>>> @@ -2176,6 +2241,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>>>>    		vfio_iommu_iova_free(&iova_copy);
>>>>    
>>>>    detach_group_done:
>>>> +	update_pinned_page_dirty_scope(iommu);
>>>
>>> We only need to do this if the group we're removing does not have
>>> pinned page dirty scope, right?  I think we have all the info here to
>>> make that optimization.
>>>    
>>
>> There could be more than one group that doesn't have pinned page dirty
>> scope, better to run through update_pinned_page_dirty_scope() function.
> 
> Maybe I stated it wrong above, but I think we have this table:
> 
> 
> iommu|group
> -----+--------+---------+
> XXXXX|   0    |    1    |
> -----+--------+---------+
>    0  |   A    |    B    |
> -----+--------+---------+
>    1  |   C    |    D    |
> -----+--------+---------+
> 
> A: If we are NOT dirty-page-scope at the iommu and we remove a group
> that is NOT dirty-page-scope, we need to check because that might have
> been the group preventing the iommu from being dirty-page-scope.
> 
> B: If we are NOT dirty-page-scope at the iommu and we remove a group
> that IS dirty-page-scope, we know that group wasn't limiting the scope
> at the iommu.
> 
> C: If the iommu IS dirty-page-scope, we can't remove a group that is
> NOT dirty page scope, this case is impossible.
> 
> D: If the iommu IS dirty-page-scope and we remove a group that IS dirty-
> page-scope, nothing changes.
> 
> So I think we only need to update on A, or A+C since C cannot happen.
> In B and D removing a group with dirty-page-scope cannot change the
> iommu scope.  Thanks,
> 

Ok. Updating iommu->pinned_page_dirty_scope only when removing a group 
that is NOT dirty-page-scope.
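
For concreteness, the detach-side change agreed here amounts to something like
the sketch below (the helper name is hypothetical; update_pinned_page_dirty_scope()
is from the v13 patch quoted above).

/*
 * Case A in the table above: only a departing group that was NOT limited to
 * pinned pages can have been holding the iommu-wide scope down, so the
 * re-walk can be skipped when the removed group was pinned-page scoped.
 */
static void vfio_detach_update_dirty_scope(struct vfio_iommu *iommu,
					   struct vfio_group *group)
{
	if (!group->pinned_page_dirty_scope)
		update_pinned_page_dirty_scope(iommu);
}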

Thanks,
Kirti

> Alex
> 
>>>>    	mutex_unlock(&iommu->lock);
>>>>    }
>>>>    
>>>> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
>>>> index e42a711a2800..da29802d6276 100644
>>>> --- a/include/linux/vfio.h
>>>> +++ b/include/linux/vfio.h
>>>> @@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
>>>>    					struct iommu_group *group);
>>>>    	void		(*detach_group)(void *iommu_data,
>>>>    					struct iommu_group *group);
>>>> -	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
>>>> +	int		(*pin_pages)(void *iommu_data,
>>>> +				     struct iommu_group *group,
>>>> +				     unsigned long *user_pfn,
>>>>    				     int npage, int prot,
>>>>    				     unsigned long *phys_pfn);
>>>>    	int		(*unpin_pages)(void *iommu_data,
>>>    
>>
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages
  2020-03-18 15:00         ` Kirti Wankhede
@ 2020-03-18 17:01           ` Alex Williamson
  0 siblings, 0 replies; 20+ messages in thread
From: Alex Williamson @ 2020-03-18 17:01 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: cjia, kevin.tian, ziye.yang, changpeng.liu, yi.l.liu, mlevitsk,
	eskultet, cohuck, dgilbert, jonathan.davies, eauger, aik, pasic,
	felipe, Zhengxiao.zx, shuangtai.tst, Ken.Xue, zhi.a.wang,
	yan.y.zhao, qemu-devel, kvm

On Wed, 18 Mar 2020 20:30:19 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 3/18/2020 12:30 AM, Alex Williamson wrote:
> > On Tue, 17 Mar 2020 23:58:38 +0530
> > Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >   
> >> On 3/14/2020 2:19 AM, Alex Williamson wrote:  
> >>> On Thu, 12 Mar 2020 23:23:27 +0530
> >>> Kirti Wankhede <kwankhede@nvidia.com> wrote:
> >>>      
> >>>> Added a check such that only singleton IOMMU groups can pin pages.
> >>>>   From the point when vendor driver pins any pages, consider IOMMU group
> >>>> dirty page scope to be limited to pinned pages.
> >>>>
> >>>> To avoid walking the list often, added a flag,
> >>>> pinned_page_dirty_scope, to indicate whether the dirty page scope of all
> >>>> of the vfio_groups for each vfio_domain in the domain_list is limited to
> >>>> pinned pages. This flag is updated on the first pinned pages request for
> >>>> that IOMMU group and on attaching/detaching a group.
> >>>>
> >>>> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> >>>> Reviewed-by: Neo Jia <cjia@nvidia.com>
> >>>> ---
> >>>>    drivers/vfio/vfio.c             |  9 +++++-
> >>>>    drivers/vfio/vfio_iommu_type1.c | 72 +++++++++++++++++++++++++++++++++++++++--
> >>>>    include/linux/vfio.h            |  4 ++-
> >>>>    3 files changed, 80 insertions(+), 5 deletions(-)
> >>>>
> >>>> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> >>>> index c8482624ca34..79108c1245a5 100644
> >>>> --- a/drivers/vfio/vfio.c
> >>>> +++ b/drivers/vfio/vfio.c
> >>>> @@ -85,6 +85,7 @@ struct vfio_group {
> >>>>    	atomic_t			opened;
> >>>>    	wait_queue_head_t		container_q;
> >>>>    	bool				noiommu;
> >>>> +	unsigned int			dev_counter;
> >>>>    	struct kvm			*kvm;
> >>>>    	struct blocking_notifier_head	notifier;
> >>>>    };
> >>>> @@ -555,6 +556,7 @@ struct vfio_device *vfio_group_create_device(struct vfio_group *group,
> >>>>    
> >>>>    	mutex_lock(&group->device_lock);
> >>>>    	list_add(&device->group_next, &group->device_list);
> >>>> +	group->dev_counter++;
> >>>>    	mutex_unlock(&group->device_lock);
> >>>>    
> >>>>    	return device;
> >>>> @@ -567,6 +569,7 @@ static void vfio_device_release(struct kref *kref)
> >>>>    	struct vfio_group *group = device->group;
> >>>>    
> >>>>    	list_del(&device->group_next);
> >>>> +	group->dev_counter--;
> >>>>    	mutex_unlock(&group->device_lock);
> >>>>    
> >>>>    	dev_set_drvdata(device->dev, NULL);
> >>>> @@ -1895,6 +1898,9 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
> >>>>    	if (!group)
> >>>>    		return -ENODEV;
> >>>>    
> >>>> +	if (group->dev_counter > 1)
> >>>> +		return -EINVAL;
> >>>> +
> >>>>    	ret = vfio_group_add_container_user(group);
> >>>>    	if (ret)
> >>>>    		goto err_pin_pages;
> >>>> @@ -1902,7 +1908,8 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
> >>>>    	container = group->container;
> >>>>    	driver = container->iommu_driver;
> >>>>    	if (likely(driver && driver->ops->pin_pages))
> >>>> -		ret = driver->ops->pin_pages(container->iommu_data, user_pfn,
> >>>> +		ret = driver->ops->pin_pages(container->iommu_data,
> >>>> +					     group->iommu_group, user_pfn,
> >>>>    					     npage, prot, phys_pfn);
> >>>>    	else
> >>>>    		ret = -ENOTTY;
> >>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >>>> index 4f1f116feabc..18a284b230c0 100644
> >>>> --- a/drivers/vfio/vfio_iommu_type1.c
> >>>> +++ b/drivers/vfio/vfio_iommu_type1.c
> >>>> @@ -71,6 +71,7 @@ struct vfio_iommu {
> >>>>    	bool			v2;
> >>>>    	bool			nesting;
> >>>>    	bool			dirty_page_tracking;
> >>>> +	bool			pinned_page_dirty_scope;
> >>>>    };
> >>>>    
> >>>>    struct vfio_domain {
> >>>> @@ -98,6 +99,7 @@ struct vfio_group {
> >>>>    	struct iommu_group	*iommu_group;
> >>>>    	struct list_head	next;
> >>>>    	bool			mdev_group;	/* An mdev group */
> >>>> +	bool			has_pinned_pages;  
> >>>
> >>> I'm afraid over time this name will be confusing, should we simply
> >>> call it pinned_page_dirty_scope per vfio_group as well?  
> >>
> >> Updating as you suggested, but I hope it doesn't look confusing.
> >>  
> >>>   We might have
> >>> to adapt this over time as we get new ways to dirty pages, but each
> >>> group voting towards the same value being set on the vfio_iommu object
> >>> seems like a good starting point.
> >>>      
> >>>>    };
> >>>>    
> >>>>    struct vfio_iova {
> >>>> @@ -129,6 +131,10 @@ struct vfio_regions {
> >>>>    static int put_pfn(unsigned long pfn, int prot);
> >>>>    static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu);
> >>>>    
> >>>> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> >>>> +					       struct iommu_group *iommu_group);
> >>>> +
> >>>> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
> >>>>    /*
> >>>>     * This code handles mapping and unmapping of user data buffers
> >>>>     * into DMA'ble space using the IOMMU
> >>>> @@ -579,11 +585,13 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
> >>>>    }
> >>>>    
> >>>>    static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >>>> +				      struct iommu_group *iommu_group,
> >>>>    				      unsigned long *user_pfn,
> >>>>    				      int npage, int prot,
> >>>>    				      unsigned long *phys_pfn)
> >>>>    {
> >>>>    	struct vfio_iommu *iommu = iommu_data;
> >>>> +	struct vfio_group *group;
> >>>>    	int i, j, ret;
> >>>>    	unsigned long remote_vaddr;
> >>>>    	struct vfio_dma *dma;
> >>>> @@ -662,8 +670,14 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
> >>>>    				   (vpfn->iova - dma->iova) >> pgshift, 1);
> >>>>    		}
> >>>>    	}
> >>>> -
> >>>>    	ret = i;
> >>>> +
> >>>> +	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
> >>>> +	if (!group->has_pinned_pages) {
> >>>> +		group->has_pinned_pages = true;
> >>>> +		update_pinned_page_dirty_scope(iommu);
> >>>> +	}
> >>>> +
> >>>>    	goto pin_done;
> >>>>    
> >>>>    pin_unwind:
> >>>> @@ -946,8 +960,11 @@ static int vfio_iova_dirty_bitmap(struct vfio_iommu *iommu, dma_addr_t iova,
> >>>>    	npages = dma->size >> pgshift;
> >>>>    	bitmap_size = dirty_bitmap_bytes(npages);
> >>>>    
> >>>> -	/* mark all pages dirty if all pages are pinned and mapped. */
> >>>> -	if (dma->iommu_mapped)
> >>>> +	/*
> >>>> +	 * mark all pages dirty if any IOMMU capable device is not able
> >>>> +	 * to report dirty pages and all pages are pinned and mapped.
> >>>> +	 */
> >>>> +	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
> >>>>    		bitmap_set(dma->bitmap, 0, npages);
> >>>>    
> >>>>    	if (dma->bitmap) {
> >>>> @@ -1430,6 +1447,51 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
> >>>>    	return NULL;
> >>>>    }
> >>>>    
> >>>> +static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
> >>>> +					       struct iommu_group *iommu_group)
> >>>> +{
> >>>> +	struct vfio_domain *domain;
> >>>> +	struct vfio_group *group = NULL;
> >>>> +
> >>>> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> >>>> +		group = find_iommu_group(domain, iommu_group);
> >>>> +		if (group)
> >>>> +			return group;
> >>>> +	}
> >>>> +
> >>>> +	if (iommu->external_domain)
> >>>> +		group = find_iommu_group(iommu->external_domain, iommu_group);
> >>>> +
> >>>> +	return group;
> >>>> +}
> >>>> +
> >>>> +static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
> >>>> +{
> >>>> +	struct vfio_domain *domain;
> >>>> +	struct vfio_group *group;
> >>>> +
> >>>> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> >>>> +		list_for_each_entry(group, &domain->group_list, next) {
> >>>> +			if (!group->has_pinned_pages) {
> >>>> +				iommu->pinned_page_dirty_scope = false;
> >>>> +				return;
> >>>> +			}
> >>>> +		}
> >>>> +	}
> >>>> +
> >>>> +	if (iommu->external_domain) {
> >>>> +		domain = iommu->external_domain;
> >>>> +		list_for_each_entry(group, &domain->group_list, next) {
> >>>> +			if (!group->has_pinned_pages) {
> >>>> +				iommu->pinned_page_dirty_scope = false;
> >>>> +				return;
> >>>> +			}
> >>>> +		}
> >>>> +	}
> >>>> +
> >>>> +	iommu->pinned_page_dirty_scope = true;
> >>>> +}
> >>>> +
> >>>>    static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
> >>>>    				  phys_addr_t *base)
> >>>>    {
> >>>> @@ -1836,6 +1898,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >>>>    
> >>>>    			list_add(&group->next,
> >>>>    				 &iommu->external_domain->group_list);
> >>>> +			update_pinned_page_dirty_scope(iommu);
> >>>>    			mutex_unlock(&iommu->lock);
> >>>>    
> >>>>    			return 0;
> >>>> @@ -1958,6 +2021,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >>>>    done:
> >>>>    	/* Delete the old one and insert new iova list */
> >>>>    	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
> >>>> +	update_pinned_page_dirty_scope(iommu);
> >>>>    	mutex_unlock(&iommu->lock);
> >>>>    	vfio_iommu_resv_free(&group_resv_regions);
> >>>>        
> >>>
> >>> At this point we've added an iommu backed group that can't possibly
> >>> have pages pinned on behalf of this group yet, can't we just set
> >>> iommu->pinned_page_dirty_scope = false?
> >>>      
> >>
> >> Right, changing.
> >>  
> >>> In the previous case, aren't we adding a non-iommu backed group, so
> >>> should we presume the scope is pinned pages even before we have any?  
> >>
> >> Anyway, we are updating it when pages are pinned; I think it's better not to
> >> presume.
> > 
> > If there's no iommu backing then the device doesn't have access to
> > dirty the pages itself, how else will they get dirty?  Perhaps I was a
> > little loose in using the word "presume"; I think there's a proof that
> > the pages must have limited dirty-scope.
> >   
> 
> We need to handle the cases below with a non-iommu backed device:
> 1. Only non-iommu mdev device
> group->pinned_page_dirty_scope = true;
> update_pinned_page_dirty_scope()=>iommu->pinned_page_dirty_scope=true  
> 
> 2. First non-iommu mdev is attached then iommu backed device attached.
> 1st non-iommu mdev device attached
> group->pinned_page_dirty_scope = true;
> update_pinned_page_dirty_scope()=>iommu->pinned_page_dirty_scope=true  
> 
> 2nd iommu backed device attached:
> iommu->pinned_page_dirty_scope = false
> 
> 3. First iommu backed devices are attached then non-iommu backed devices 
> attached
> For iommu backed device attached
> iommu->pinned_page_dirty_scope = false
> 
> Last non-iommu mdev device attached
> group->pinned_page_dirty_scope = true;
> update_pinned_page_dirty_scope()=>iommu->pinned_page_dirty_scope=false  
> 
> I think we can set group->pinned_page_dirty_scope = true, but not the 
> iommu->pinned_page_dirty_scope.

Yes, agreed.  This is what I was trying to say, but I wasn't clear about
which object.  Furthermore, if iommu->pinned_page_dirty_scope=true and
we add a group with group->pinned_page_dirty_scope=true, we don't need
to update.  If iommu->pinned_page_dirty_scope=false and we add a group
with group->pinned_page_dirty_scope=true, we only need to update if this is
the first group, though I'd concede that by the time we check for that
we might as well have just called the update function.  Thanks,

Alex
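
A short sketch of that attach-time shortcut, building on the handling sketched
earlier in the thread (the helper name is hypothetical and the placement is an
assumption about the next revision):

static void vfio_attach_pinned_scope_group(struct vfio_iommu *iommu,
					   struct vfio_group *group)
{
	group->pinned_page_dirty_scope = true;

	/* already limited to pinned pages: another such group changes nothing */
	if (iommu->pinned_page_dirty_scope)
		return;

	/*
	 * Not limited yet: only the first group could flip the iommu-wide
	 * flag, but checking that costs about as much as re-running the vote.
	 */
	update_pinned_page_dirty_scope(iommu);
}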
 
> Then if iommu backed device's driver pins pages through vfio_pin_pages(): 	
> group->pinned_page_dirty_scope = true;
> update_pinned_page_dirty_scope() will change 
> iommu->pinned_page_dirty_scope based on current group list - if any 
> group in the list doesn't support dirty scope - set false
> 
> >>> We could almost forego the iommu scope update, but it could be the
> >>> first group added if we're going to preemptively assume the scope of
> >>> the group.
> >>>      
> >>>> @@ -1972,6 +2036,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
> >>>>    out_free:
> >>>>    	kfree(domain);
> >>>>    	kfree(group);
> >>>> +	update_pinned_page_dirty_scope(iommu);  
> >>>
> >>> This one looks like paranoia given how late we update when the group is
> >>> added.
> >>>      
> >>>>    	mutex_unlock(&iommu->lock);
> >>>>    	return ret;
> >>>>    }
> >>>> @@ -2176,6 +2241,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
> >>>>    		vfio_iommu_iova_free(&iova_copy);
> >>>>    
> >>>>    detach_group_done:
> >>>> +	update_pinned_page_dirty_scope(iommu);  
> >>>
> >>> We only need to do this if the group we're removing does not have
> >>> pinned page dirty scope, right?  I think we have all the info here to
> >>> make that optimization.
> >>>      
> >>
> >> There could be more than one group that doesn't have pinned page dirty
> >> scope, better to run through update_pinned_page_dirty_scope() function.  
> > 
> > Maybe I stated it wrong above, but I think we have this table:
> > 
> > 
> > iommu|group
> > -----+--------+---------+
> > XXXXX|   0    |    1    |
> > -----+--------+---------+
> >    0  |   A    |    B    |
> > -----+--------+---------+
> >    1  |   C    |    D    |
> > -----+--------+---------+
> > 
> > A: If we are NOT dirty-page-scope at the iommu and we remove a group
> > that is NOT dirty-page-scope, we need to check because that might have
> > been the group preventing the iommu from being dirty-page-scope.
> > 
> > B: If we are NOT dirty-page-scope at the iommu and we remove a group
> > that IS dirty-page-scope, we know that group wasn't limiting the scope
> > at the iommu.
> > 
> > C: If the iommu IS dirty-page-scope, we can't remove a group that is
> > NOT dirty page scope, this case is impossible.
> > 
> > D: If the iommu IS dirty-page-scope and we remove a group that IS dirty-
> > page-scope, nothing changes.
> > 
> > So I think we only need to update on A, or A+C since C cannot happen.
> > In B and D removing a group with dirty-page-scope cannot change the
> > iommu scope.  Thanks,
> >   
> 
> Ok. Updating iommu->pinned_page_dirty_scope only when removing a group 
> that is NOT dirty-page-scope.
> 
> Thanks,
> Kirti
> 
> > Alex
> >   
> >>>>    	mutex_unlock(&iommu->lock);
> >>>>    }
> >>>>    
> >>>> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> >>>> index e42a711a2800..da29802d6276 100644
> >>>> --- a/include/linux/vfio.h
> >>>> +++ b/include/linux/vfio.h
> >>>> @@ -72,7 +72,9 @@ struct vfio_iommu_driver_ops {
> >>>>    					struct iommu_group *group);
> >>>>    	void		(*detach_group)(void *iommu_data,
> >>>>    					struct iommu_group *group);
> >>>> -	int		(*pin_pages)(void *iommu_data, unsigned long *user_pfn,
> >>>> +	int		(*pin_pages)(void *iommu_data,
> >>>> +				     struct iommu_group *group,
> >>>> +				     unsigned long *user_pfn,
> >>>>    				     int npage, int prot,
> >>>>    				     unsigned long *phys_pfn);
> >>>>    	int		(*unpin_pages)(void *iommu_data,  
> >>>      
> >>  
> >   
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2020-03-18 17:01 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
2020-03-12 17:53 [PATCH v13 Kernel 0/7] KABIs to support migration for VFIO devices Kirti Wankhede
2020-03-12 17:53 ` [PATCH v13 Kernel 1/7] vfio: KABI for migration interface for device state Kirti Wankhede
2020-03-12 17:53 ` [PATCH v13 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages Kirti Wankhede
2020-03-12 17:53 ` [PATCH v13 Kernel 3/7] vfio iommu: Add ioctl definition for dirty pages tracking Kirti Wankhede
2020-03-12 17:53 ` [PATCH v13 Kernel 4/7] vfio iommu: Implementation of ioctl to " Kirti Wankhede
2020-03-13 18:13   ` Alex Williamson
2020-03-16 18:49     ` Kirti Wankhede
2020-03-16 19:25       ` Alex Williamson
2020-03-18 12:13   ` Dr. David Alan Gilbert
2020-03-18 13:32     ` Alex Williamson
2020-03-12 17:53 ` [PATCH v13 Kernel 5/7] vfio iommu: Update UNMAP_DMA ioctl to get dirty bitmap before unmap Kirti Wankhede
2020-03-13 18:45   ` Alex Williamson
2020-03-17 18:28     ` Kirti Wankhede
2020-03-12 17:53 ` [PATCH v13 Kernel 6/7] vfio iommu: Adds flag to indicate dirty pages tracking capability support Kirti Wankhede
2020-03-12 17:53 ` [PATCH v13 Kernel 7/7] vfio: Selective dirty page tracking if IOMMU backed device pins pages Kirti Wankhede
2020-03-13 20:49   ` Alex Williamson
2020-03-17 18:28     ` Kirti Wankhede
2020-03-17 19:00       ` Alex Williamson
2020-03-18 15:00         ` Kirti Wankhede
2020-03-18 17:01           ` Alex Williamson
