* [PATCH 0/5] vfio/iommu_type1: Some fixes about dirty tracking
@ 2021-01-07  9:28 Keqian Zhu
  2021-01-07  9:28 ` [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose Keqian Zhu
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Keqian Zhu @ 2021-01-07  9:28 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
	Marc Zyngier, Catalin Marinas
  Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

Hi,

In the patch series at [1], I put forward some fixes and optimizations for
vfio_iommu_type1. This series extracts and improves the fix part of it.

[1] https://lore.kernel.org/linux-iommu/20201210073425.25960-1-zhukeqian1@huawei.com/T/#t

Thanks,
Keqian

Keqian Zhu (5):
  vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
  vfio/iommu_type1: Populate dirty bitmap for new vfio_dma
  vfio/iommu_type1: Populate dirty bitmap when attach group
  vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking
  vfio/iommu_type1: Move sanity_check_pfn_list to unmap_unpin_all

 drivers/vfio/vfio_iommu_type1.c | 107 +++++++++++++++++++++-----------
 1 file changed, 72 insertions(+), 35 deletions(-)

-- 
2.19.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
  2021-01-07  9:28 [PATCH 0/5] vfio/iommu_type1: Some fixes about dirty tracking Keqian Zhu
@ 2021-01-07  9:28 ` Keqian Zhu
  2021-01-12 21:20   ` Alex Williamson
  2021-01-07  9:28 ` [PATCH 2/5] vfio/iommu_type1: Populate dirty bitmap for new vfio_dma Keqian Zhu
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 16+ messages in thread
From: Keqian Zhu @ 2021-01-07  9:28 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
	Marc Zyngier, Catalin Marinas
  Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

Deferring the check of whether a vfio_dma is fully dirty until
update_user_bitmap makes it easy to lose the dirty log. For example,
after the pinned_scope of the vfio_iommu is promoted, the vfio_dma is no
longer considered fully dirty, so we may lose dirty log that was
generated before the vfio_iommu was promoted.

The key point is that pinned-dirty is not a real dirty tracking method:
it can't continuously track dirty pages, it only restricts the dirty
scope. It is essentially the same as fully-dirty; fully-dirty covers the
full scope and pinned-dirty covers the pinned scope.

So we must mark pages pinned-dirty or fully-dirty right after we start
dirty tracking or clear the dirty bitmap, to ensure the dirty log is
recorded right away.

Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index bceda5e8baaa..b0a26e8e0adf 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma)
 	dma->bitmap = NULL;
 }
 
-static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
+static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize)
 {
 	struct rb_node *p;
 	unsigned long pgshift = __ffs(pgsize);
@@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
 	}
 }
 
+static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize)
+{
+	unsigned long pgshift = __ffs(pgsize);
+	unsigned long nbits = dma->size >> pgshift;
+
+	bitmap_set(dma->bitmap, 0, nbits);
+}
+
+static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
+				     struct vfio_dma *dma)
+{
+	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
+
+	if (iommu->pinned_page_dirty_scope)
+		vfio_dma_populate_bitmap_pinned(dma, pgsize);
+	else if (dma->iommu_mapped)
+		vfio_dma_populate_bitmap_full(dma, pgsize);
+}
+
 static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
 {
 	struct rb_node *n;
@@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
 			}
 			return ret;
 		}
-		vfio_dma_populate_bitmap(dma, pgsize);
+		vfio_dma_populate_bitmap(iommu, dma);
 	}
 	return 0;
 }
@@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 	unsigned long shift = bit_offset % BITS_PER_LONG;
 	unsigned long leftover;
 
-	/*
-	 * mark all pages dirty if any IOMMU capable device is not able
-	 * to report dirty pages and all pages are pinned and mapped.
-	 */
-	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
-		bitmap_set(dma->bitmap, 0, nbits);
-
 	if (shift) {
 		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
 				  nbits + shift);
@@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 	struct vfio_dma *dma;
 	struct rb_node *n;
 	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
-	size_t pgsize = (size_t)1 << pgshift;
 	int ret;
 
 	/*
@@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 		 * pages which are marked dirty by vfio_dma_rw()
 		 */
 		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
-		vfio_dma_populate_bitmap(dma, pgsize);
+		vfio_dma_populate_bitmap(iommu, dma);
 	}
 	return 0;
 }
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 2/5] vfio/iommu_type1: Populate dirty bitmap for new vfio_dma
  2021-01-07  9:28 [PATCH 0/5] vfio/iommu_type1: Some fixes about dirty tracking Keqian Zhu
  2021-01-07  9:28 ` [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose Keqian Zhu
@ 2021-01-07  9:28 ` Keqian Zhu
  2021-01-07  9:28 ` [PATCH 3/5] vfio/iommu_type1: Populate dirty bitmap when attach group Keqian Zhu
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Keqian Zhu @ 2021-01-07  9:28 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
	Marc Zyngier, Catalin Marinas
  Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

We should populate the dirty bitmap of a newly added vfio_dma, as it can
be accessed arbitrarily if the vfio_iommu is not promoted to
pinned_scope. vfio_dma_populate_bitmap can handle this properly.

Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 drivers/vfio/vfio_iommu_type1.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index b0a26e8e0adf..29c8702c3b6e 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1413,7 +1413,9 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 
 	if (!ret && iommu->dirty_page_tracking) {
 		ret = vfio_dma_bitmap_alloc(dma, pgsize);
-		if (ret)
+		if (!ret)
+			vfio_dma_populate_bitmap(iommu, dma);
+		else
 			vfio_remove_dma(iommu, dma);
 	}
 
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 3/5] vfio/iommu_type1: Populate dirty bitmap when attach group
  2021-01-07  9:28 [PATCH 0/5] vfio/iommu_type1: Some fixes about dirty tracking Keqian Zhu
  2021-01-07  9:28 ` [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose Keqian Zhu
  2021-01-07  9:28 ` [PATCH 2/5] vfio/iommu_type1: Populate dirty bitmap for new vfio_dma Keqian Zhu
@ 2021-01-07  9:28 ` Keqian Zhu
  2021-01-07  9:29 ` [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking Keqian Zhu
  2021-01-07  9:29 ` [PATCH 5/5] vfio/iommu_type1: Move sanity_check_pfn_list to unmap_unpin_all Keqian Zhu
  4 siblings, 0 replies; 16+ messages in thread
From: Keqian Zhu @ 2021-01-07  9:28 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
	Marc Zyngier, Catalin Marinas
  Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

A newly attached iommu-backed group can potentially access all dma
ranges, so we should traverse all dma ranges and mark them dirty.

Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 drivers/vfio/vfio_iommu_type1.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 29c8702c3b6e..26b7eb2a5cfc 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -255,6 +255,17 @@ static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
 		vfio_dma_populate_bitmap_full(dma, pgsize);
 }
 
+static void vfio_iommu_populate_bitmap(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+	struct vfio_dma *dma;
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		dma = rb_entry(n, struct vfio_dma, node);
+		vfio_dma_populate_bitmap(iommu, dma);
+	}
+}
+
 static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
 {
 	struct rb_node *n;
@@ -2190,7 +2201,12 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	 * demotes the iommu scope until it declares itself dirty tracking
 	 * capable via the page pinning interface.
 	 */
-	iommu->pinned_page_dirty_scope = false;
+	if (iommu->pinned_page_dirty_scope) {
+		iommu->pinned_page_dirty_scope = false;
+		if (iommu->dirty_page_tracking)
+			vfio_iommu_populate_bitmap(iommu);
+	}
+
 	mutex_unlock(&iommu->lock);
 	vfio_iommu_resv_free(&group_resv_regions);
 
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking
  2021-01-07  9:28 [PATCH 0/5] vfio/iommu_type1: Some fixes about dirty tracking Keqian Zhu
                   ` (2 preceding siblings ...)
  2021-01-07  9:28 ` [PATCH 3/5] vfio/iommu_type1: Populate dirty bitmap when attach group Keqian Zhu
@ 2021-01-07  9:29 ` Keqian Zhu
  2021-01-11 21:49   ` Alex Williamson
  2021-01-07  9:29 ` [PATCH 5/5] vfio/iommu_type1: Move sanity_check_pfn_list to unmap_unpin_all Keqian Zhu
  4 siblings, 1 reply; 16+ messages in thread
From: Keqian Zhu @ 2021-01-07  9:29 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
	Marc Zyngier, Catalin Marinas
  Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

If we detach a group during dirty page tracking, we shouldn't remove the
vfio_dma, because the dirty log would be lost.

But we don't prevent unmap_unpin_all in vfio_iommu_release, because
under the normal procedure dirty tracking has already been stopped.

Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 drivers/vfio/vfio_iommu_type1.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 26b7eb2a5cfc..9776a059904d 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2373,7 +2373,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			if (list_empty(&iommu->external_domain->group_list)) {
 				vfio_sanity_check_pfn_list(iommu);
 
-				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
+				/*
+				 * During dirty page tracking, we can't remove
+				 * vfio_dma because dirty log will lose.
+				 */
+				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) &&
+				    !iommu->dirty_page_tracking)
 					vfio_iommu_unmap_unpin_all(iommu);
 
 				kfree(iommu->external_domain);
@@ -2406,10 +2411,15 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		 * iommu and external domain doesn't exist, then all the
 		 * mappings go away too. If it's the last domain with iommu and
 		 * external domain exist, update accounting
+		 *
+		 * Note: During dirty page tracking, we can't remove vfio_dma
+		 * because dirty log will lose. Just update accounting is a good
+		 * choice.
 		 */
 		if (list_empty(&domain->group_list)) {
 			if (list_is_singular(&iommu->domain_list)) {
-				if (!iommu->external_domain)
+				if (!iommu->external_domain &&
+				    !iommu->dirty_page_tracking)
 					vfio_iommu_unmap_unpin_all(iommu);
 				else
 					vfio_iommu_unmap_unpin_reaccount(iommu);
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 5/5] vfio/iommu_type1: Move sanity_check_pfn_list to unmap_unpin_all
  2021-01-07  9:28 [PATCH 0/5] vfio/iommu_type1: Some fixes about dirty tracking Keqian Zhu
                   ` (3 preceding siblings ...)
  2021-01-07  9:29 ` [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking Keqian Zhu
@ 2021-01-07  9:29 ` Keqian Zhu
  4 siblings, 0 replies; 16+ messages in thread
From: Keqian Zhu @ 2021-01-07  9:29 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
	Marc Zyngier, Catalin Marinas
  Cc: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

Both the external domain and the iommu-backed domain can use the pinning
interface, thus both can add pfns to the dma pfn_list. Moving
sanity_check_pfn_list into unmap_unpin_all applies it to all types of
domain.

Fixes: a54eb55045ae ("vfio iommu type1: Add support for mediated devices")
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 drivers/vfio/vfio_iommu_type1.c | 38 ++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 9776a059904d..d796be8bcbc5 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2225,10 +2225,28 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	return ret;
 }
 
+static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+
+	n = rb_first(&iommu->dma_list);
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma;
+
+		dma = rb_entry(n, struct vfio_dma, node);
+
+		if (WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list)))
+			break;
+	}
+	/* mdev vendor driver must unregister notifier */
+	WARN_ON(iommu->notifier.head);
+}
+
 static void vfio_iommu_unmap_unpin_all(struct vfio_iommu *iommu)
 {
 	struct rb_node *node;
 
+	vfio_sanity_check_pfn_list(iommu);
 	while ((node = rb_first(&iommu->dma_list)))
 		vfio_remove_dma(iommu, rb_entry(node, struct vfio_dma, node));
 }
@@ -2256,23 +2274,6 @@ static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu)
 	}
 }
 
-static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
-{
-	struct rb_node *n;
-
-	n = rb_first(&iommu->dma_list);
-	for (; n; n = rb_next(n)) {
-		struct vfio_dma *dma;
-
-		dma = rb_entry(n, struct vfio_dma, node);
-
-		if (WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list)))
-			break;
-	}
-	/* mdev vendor driver must unregister notifier */
-	WARN_ON(iommu->notifier.head);
-}
-
 /*
  * Called when a domain is removed in detach. It is possible that
  * the removed domain decided the iova aperture window. Modify the
@@ -2371,8 +2372,6 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			kfree(group);
 
 			if (list_empty(&iommu->external_domain->group_list)) {
-				vfio_sanity_check_pfn_list(iommu);
-
 				/*
 				 * During dirty page tracking, we can't remove
 				 * vfio_dma because dirty log will lose.
@@ -2503,7 +2502,6 @@ static void vfio_iommu_type1_release(void *iommu_data)
 
 	if (iommu->external_domain) {
 		vfio_release_domain(iommu->external_domain, true);
-		vfio_sanity_check_pfn_list(iommu);
 		kfree(iommu->external_domain);
 	}
 
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking
  2021-01-07  9:29 ` [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking Keqian Zhu
@ 2021-01-11 21:49   ` Alex Williamson
  2021-01-12 12:04     ` Keqian Zhu
  0 siblings, 1 reply; 16+ messages in thread
From: Alex Williamson @ 2021-01-11 21:49 UTC (permalink / raw)
  To: Keqian Zhu
  Cc: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Kirti Wankhede, Cornelia Huck, Will Deacon, Marc Zyngier,
	Catalin Marinas, Mark Rutland, James Morse, Robin Murphy,
	Joerg Roedel, Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

On Thu, 7 Jan 2021 17:29:00 +0800
Keqian Zhu <zhukeqian1@huawei.com> wrote:

> If we detach group during dirty page tracking, we shouldn't remove
> vfio_dma, because dirty log will lose.
> 
> But we don't prevent unmap_unpin_all in vfio_iommu_release, because
> under normal procedure, dirty tracking has been stopped.

This looks like it's creating a larger problem than it's fixing, it's
not our job to maintain the dirty bitmap regardless of what the user
does.  If the user detaches the last group in a container causing the
mappings within that container to be deconstructed before the user has
collected dirty pages, that sounds like a user error.  A container with
no groups is de-privileged and therefore loses all state.  Thanks,

Alex

> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 26b7eb2a5cfc..9776a059904d 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -2373,7 +2373,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  			if (list_empty(&iommu->external_domain->group_list)) {
>  				vfio_sanity_check_pfn_list(iommu);
>  
> -				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
> +				/*
> +				 * During dirty page tracking, we can't remove
> +				 * vfio_dma because dirty log will lose.
> +				 */
> +				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) &&
> +				    !iommu->dirty_page_tracking)
>  					vfio_iommu_unmap_unpin_all(iommu);
>  
>  				kfree(iommu->external_domain);
> @@ -2406,10 +2411,15 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>  		 * iommu and external domain doesn't exist, then all the
>  		 * mappings go away too. If it's the last domain with iommu and
>  		 * external domain exist, update accounting
> +		 *
> +		 * Note: During dirty page tracking, we can't remove vfio_dma
> +		 * because dirty log will lose. Just update accounting is a good
> +		 * choice.
>  		 */
>  		if (list_empty(&domain->group_list)) {
>  			if (list_is_singular(&iommu->domain_list)) {
> -				if (!iommu->external_domain)
> +				if (!iommu->external_domain &&
> +				    !iommu->dirty_page_tracking)
>  					vfio_iommu_unmap_unpin_all(iommu);
>  				else
>  					vfio_iommu_unmap_unpin_reaccount(iommu);


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking
  2021-01-11 21:49   ` Alex Williamson
@ 2021-01-12 12:04     ` Keqian Zhu
  2021-01-12 19:53       ` Alex Williamson
  0 siblings, 1 reply; 16+ messages in thread
From: Keqian Zhu @ 2021-01-12 12:04 UTC (permalink / raw)
  To: Alex Williamson
  Cc: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Kirti Wankhede, Cornelia Huck, Will Deacon, Marc Zyngier,
	Catalin Marinas, Mark Rutland, James Morse, Robin Murphy,
	Joerg Roedel, Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun



On 2021/1/12 5:49, Alex Williamson wrote:
> On Thu, 7 Jan 2021 17:29:00 +0800
> Keqian Zhu <zhukeqian1@huawei.com> wrote:
> 
>> If we detach group during dirty page tracking, we shouldn't remove
>> vfio_dma, because dirty log will lose.
>>
>> But we don't prevent unmap_unpin_all in vfio_iommu_release, because
>> under normal procedure, dirty tracking has been stopped.
> 
> This looks like it's creating a larger problem than it's fixing, it's
> not our job to maintain the dirty bitmap regardless of what the user
> does.  If the user detaches the last group in a container causing the
> mappings within that container to be deconstructed before the user has
> collected dirty pages, that sounds like a user error.  A container with
> no groups is de-privileged and therefore loses all state.  Thanks,
> 
> Alex

Hi Alex,

This looks good to me ;-). That's a reasonable constraint for user behavior.

What about replacing this patch with an addition to the uapi
documentation of VFIO_GROUP_UNSET_CONTAINER? Users should pay attention
to this when calling this ioctl during dirty tracking.
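
Something along these lines, for example (the wording is just a draft;
the context lines are from the existing comment in
include/uapi/linux/vfio.h):

--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
  * prior to calling this interface.  When removing the last group
  * from a container, the IOMMU will be disabled and all state lost,
  * effectively also returning the VFIO file descriptor to an initial
  * state.
+ * Note that "all state" includes any dirty page bitmap that has not
+ * been collected yet, so stop dirty tracking and retrieve the bitmap
+ * before unsetting the container.
  * Return: 0 on success, -errno on failure.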

And any comments on the other patches?

Thanks.
Keqian

> 
>> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>> ---
>>  drivers/vfio/vfio_iommu_type1.c | 14 ++++++++++++--
>>  1 file changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 26b7eb2a5cfc..9776a059904d 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -2373,7 +2373,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>>  			if (list_empty(&iommu->external_domain->group_list)) {
>>  				vfio_sanity_check_pfn_list(iommu);
>>  
>> -				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
>> +				/*
>> +				 * During dirty page tracking, we can't remove
>> +				 * vfio_dma because dirty log will lose.
>> +				 */
>> +				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) &&
>> +				    !iommu->dirty_page_tracking)
>>  					vfio_iommu_unmap_unpin_all(iommu);
>>  
>>  				kfree(iommu->external_domain);
>> @@ -2406,10 +2411,15 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>>  		 * iommu and external domain doesn't exist, then all the
>>  		 * mappings go away too. If it's the last domain with iommu and
>>  		 * external domain exist, update accounting
>> +		 *
>> +		 * Note: During dirty page tracking, we can't remove vfio_dma
>> +		 * because dirty log will lose. Just update accounting is a good
>> +		 * choice.
>>  		 */
>>  		if (list_empty(&domain->group_list)) {
>>  			if (list_is_singular(&iommu->domain_list)) {
>> -				if (!iommu->external_domain)
>> +				if (!iommu->external_domain &&
>> +				    !iommu->dirty_page_tracking)
>>  					vfio_iommu_unmap_unpin_all(iommu);
>>  				else
>>  					vfio_iommu_unmap_unpin_reaccount(iommu);
> 
> .
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking
  2021-01-12 12:04     ` Keqian Zhu
@ 2021-01-12 19:53       ` Alex Williamson
  2021-01-14 12:11         ` Keqian Zhu
  0 siblings, 1 reply; 16+ messages in thread
From: Alex Williamson @ 2021-01-12 19:53 UTC (permalink / raw)
  To: Keqian Zhu
  Cc: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Kirti Wankhede, Cornelia Huck, Will Deacon, Marc Zyngier,
	Catalin Marinas, Mark Rutland, James Morse, Robin Murphy,
	Joerg Roedel, Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

On Tue, 12 Jan 2021 20:04:38 +0800
Keqian Zhu <zhukeqian1@huawei.com> wrote:

> On 2021/1/12 5:49, Alex Williamson wrote:
> > On Thu, 7 Jan 2021 17:29:00 +0800
> > Keqian Zhu <zhukeqian1@huawei.com> wrote:
> >   
> >> If we detach group during dirty page tracking, we shouldn't remove
> >> vfio_dma, because dirty log will lose.
> >>
> >> But we don't prevent unmap_unpin_all in vfio_iommu_release, because
> >> under normal procedure, dirty tracking has been stopped.  
> > 
> > This looks like it's creating a larger problem than it's fixing, it's
> > not our job to maintain the dirty bitmap regardless of what the user
> > does.  If the user detaches the last group in a container causing the
> > mappings within that container to be deconstructed before the user has
> > collected dirty pages, that sounds like a user error.  A container with
> > no groups is de-privileged and therefore loses all state.  Thanks,
> > 
> > Alex  
> 
> Hi Alex,
> 
> This looks good to me ;-). That's a reasonable constraint for user behavior.
> 
> What about replacing this patch with an addition to the uapi document of
> VFIO_GROUP_UNSET_CONTAINER? User should pay attention to this when call this
> ioctl during dirty tracking.

Here's the current uapi comment:

/**
 * VFIO_GROUP_UNSET_CONTAINER - _IO(VFIO_TYPE, VFIO_BASE + 5)
 *
 * Remove the group from the attached container.  This is the
 * opposite of the SET_CONTAINER call and returns the group to
 * an initial state.  All device file descriptors must be released
 * prior to calling this interface.  When removing the last group
 * from a container, the IOMMU will be disabled and all state lost,
 * effectively also returning the VFIO file descriptor to an initial
 * state.
 * Return: 0 on success, -errno on failure.
 * Availability: When attached to container
 */

So we already indicate that "all state" of the container is lost when
removing the last group; I don't see that it's necessary to
explicitly include dirty bitmap state beyond that statement.  Without
mappings there can be no dirty bitmap to track.

 > And any comments on other patches? thanks.

I had a difficult time mapping the commit log to the actual code
change; I'll likely have some wording suggestions.  Is patch 5/5 still
necessary if this patch is dropped?  Thanks,

Alex

> >> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
> >> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> >> ---
> >>  drivers/vfio/vfio_iommu_type1.c | 14 ++++++++++++--
> >>  1 file changed, 12 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index 26b7eb2a5cfc..9776a059904d 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -2373,7 +2373,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
> >>  			if (list_empty(&iommu->external_domain->group_list)) {
> >>  				vfio_sanity_check_pfn_list(iommu);
> >>  
> >> -				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
> >> +				/*
> >> +				 * During dirty page tracking, we can't remove
> >> +				 * vfio_dma because dirty log will lose.
> >> +				 */
> >> +				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) &&
> >> +				    !iommu->dirty_page_tracking)
> >>  					vfio_iommu_unmap_unpin_all(iommu);
> >>  
> >>  				kfree(iommu->external_domain);
> >> @@ -2406,10 +2411,15 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
> >>  		 * iommu and external domain doesn't exist, then all the
> >>  		 * mappings go away too. If it's the last domain with iommu and
> >>  		 * external domain exist, update accounting
> >> +		 *
> >> +		 * Note: During dirty page tracking, we can't remove vfio_dma
> >> +		 * because dirty log will lose. Just update accounting is a good
> >> +		 * choice.
> >>  		 */
> >>  		if (list_empty(&domain->group_list)) {
> >>  			if (list_is_singular(&iommu->domain_list)) {
> >> -				if (!iommu->external_domain)
> >> +				if (!iommu->external_domain &&
> >> +				    !iommu->dirty_page_tracking)
> >>  					vfio_iommu_unmap_unpin_all(iommu);
> >>  				else
> >>  					vfio_iommu_unmap_unpin_reaccount(iommu);  
> > 
> > .
> >   
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
  2021-01-07  9:28 ` [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose Keqian Zhu
@ 2021-01-12 21:20   ` Alex Williamson
  2021-01-13 12:35     ` Kirti Wankhede
  2021-01-14 13:05     ` Keqian Zhu
  0 siblings, 2 replies; 16+ messages in thread
From: Alex Williamson @ 2021-01-12 21:20 UTC (permalink / raw)
  To: Keqian Zhu
  Cc: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Kirti Wankhede, Cornelia Huck, Will Deacon, Marc Zyngier,
	Catalin Marinas, Mark Rutland, James Morse, Robin Murphy,
	Joerg Roedel, Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

On Thu, 7 Jan 2021 17:28:57 +0800
Keqian Zhu <zhukeqian1@huawei.com> wrote:

> Defer checking whether vfio_dma is of fully-dirty in update_user_bitmap
> is easy to lose dirty log. For example, after promoting pinned_scope of
> vfio_iommu, vfio_dma is not considered as fully-dirty, then we may lose
> dirty log that occurs before vfio_iommu is promoted.
>
> The key point is that pinned-dirty is not a real dirty tracking way, it
> can't continuously track dirty pages, but just restrict dirty scope. It
> is essentially the same as fully-dirty. Fully-dirty is of full-scope and
> pinned-dirty is of pinned-scope.
> 
> So we must mark pinned-dirty or fully-dirty after we start dirty tracking
> or clear dirty bitmap, to ensure that dirty log is marked right away.

I was initially convinced by these first three patches, but upon
further review, I think the premise is wrong.  AIUI, the concern across
these patches is that our dirty bitmap is only populated with pages
dirtied by pinning and we only take into account the pinned page dirty
scope at the time the bitmap is retrieved by the user.  You suppose
this presents a gap where if a vendor driver has not yet identified
with a page pinning scope that the entire bitmap should be considered
dirty regardless of whether that driver later pins pages prior to the
user retrieving the dirty bitmap.

I don't think this is how we intended the cooperation between the iommu
driver and vendor driver to work.  By pinning pages a vendor driver is
not declaring that only their future dirty page scope is limited to
pinned pages, instead they're declaring themselves as a participant in
dirty page tracking and take responsibility for pinning any necessary
pages.  For example we might extend VFIO_IOMMU_DIRTY_PAGES_FLAG_START
to trigger a blocking notification to groups to not only begin dirty
tracking, but also to synchronously register their current device DMA
footprint.  This patch would require a vendor driver to possibly perform
a gratuitous page pinning in order to set the scope prior to dirty
logging being enabled, or else the initial bitmap will be fully dirty.
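
As a rough sketch of what that participation looks like from the vendor
driver side (my_mdev_state and my_mdev_register_dma_footprint() are made
up for illustration; vfio_pin_pages() and its batching limit are the
existing interface): pinning the device's current DMA footprint is both
the dirty report and the declaration of pinned-page scope.

#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/iommu.h>
#include <linux/vfio.h>

/* Hypothetical per-device state, for illustration only. */
struct my_mdev_state {
        struct device *dev;             /* the mdev device */
        unsigned long *dma_pfns;        /* user PFNs the device may DMA to */
        int nr_dma_pfns;
};

/*
 * Declare participation in dirty tracking by pinning the pages the
 * device can currently DMA to.  vfio_iommu_type1 marks pinned pages
 * dirty and, once every group has pinned through this interface,
 * promotes the container to pinned-page dirty scope.
 */
static int my_mdev_register_dma_footprint(struct my_mdev_state *s)
{
        unsigned long phys_pfn[32];
        int i, ret;

        for (i = 0; i < s->nr_dma_pfns; i += ret) {
                int batch = min_t(int, s->nr_dma_pfns - i,
                                  ARRAY_SIZE(phys_pfn));

                ret = vfio_pin_pages(s->dev, &s->dma_pfns[i], batch,
                                     IOMMU_READ | IOMMU_WRITE, phys_pfn);
                if (ret <= 0)
                        return ret ? ret : -EFAULT;
        }

        return 0;
}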

Therefore, I don't see that this series is necessary or correct.  Kirti,
does this match your thinking?

Thinking about these semantics, it seems there might still be an issue
if a group with non-pinned-page dirty scope is detached with dirty
logging enabled.  It seems this should in fact fully populate the dirty
bitmaps at the time it's removed since we don't know the extent of its
previous DMA, nor will the group be present to trigger the full bitmap
when the user retrieves the dirty bitmap.  Creating fully populated
bitmaps at the time tracking is enabled negates our ability to take
advantage of later enlightenment though.  Thanks,

Alex

> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
>  1 file changed, 22 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index bceda5e8baaa..b0a26e8e0adf 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma)
>  	dma->bitmap = NULL;
>  }
>  
> -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> +static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize)
>  {
>  	struct rb_node *p;
>  	unsigned long pgshift = __ffs(pgsize);
> @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
>  	}
>  }
>  
> +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize)
> +{
> +	unsigned long pgshift = __ffs(pgsize);
> +	unsigned long nbits = dma->size >> pgshift;
> +
> +	bitmap_set(dma->bitmap, 0, nbits);
> +}
> +
> +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
> +				     struct vfio_dma *dma)
> +{
> +	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
> +
> +	if (iommu->pinned_page_dirty_scope)
> +		vfio_dma_populate_bitmap_pinned(dma, pgsize);
> +	else if (dma->iommu_mapped)
> +		vfio_dma_populate_bitmap_full(dma, pgsize);
> +}
> +
>  static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
>  {
>  	struct rb_node *n;
> @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
>  			}
>  			return ret;
>  		}
> -		vfio_dma_populate_bitmap(dma, pgsize);
> +		vfio_dma_populate_bitmap(iommu, dma);
>  	}
>  	return 0;
>  }
> @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>  	unsigned long shift = bit_offset % BITS_PER_LONG;
>  	unsigned long leftover;
>  
> -	/*
> -	 * mark all pages dirty if any IOMMU capable device is not able
> -	 * to report dirty pages and all pages are pinned and mapped.
> -	 */
> -	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
> -		bitmap_set(dma->bitmap, 0, nbits);
> -
>  	if (shift) {
>  		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
>  				  nbits + shift);
> @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>  	struct vfio_dma *dma;
>  	struct rb_node *n;
>  	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> -	size_t pgsize = (size_t)1 << pgshift;
>  	int ret;
>  
>  	/*
> @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>  		 * pages which are marked dirty by vfio_dma_rw()
>  		 */
>  		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
> -		vfio_dma_populate_bitmap(dma, pgsize);
> +		vfio_dma_populate_bitmap(iommu, dma);
>  	}
>  	return 0;
>  }


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
  2021-01-12 21:20   ` Alex Williamson
@ 2021-01-13 12:35     ` Kirti Wankhede
  2021-01-13 15:13       ` Alex Williamson
  2021-01-14 13:05     ` Keqian Zhu
  1 sibling, 1 reply; 16+ messages in thread
From: Kirti Wankhede @ 2021-01-13 12:35 UTC (permalink / raw)
  To: Alex Williamson, Keqian Zhu
  Cc: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas,
	Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun



On 1/13/2021 2:50 AM, Alex Williamson wrote:
> On Thu, 7 Jan 2021 17:28:57 +0800
> Keqian Zhu <zhukeqian1@huawei.com> wrote:
> 
>> Defer checking whether vfio_dma is of fully-dirty in update_user_bitmap
>> is easy to lose dirty log. For example, after promoting pinned_scope of
>> vfio_iommu, vfio_dma is not considered as fully-dirty, then we may lose
>> dirty log that occurs before vfio_iommu is promoted.
>>
>> The key point is that pinned-dirty is not a real dirty tracking way, it
>> can't continuously track dirty pages, but just restrict dirty scope. It
>> is essentially the same as fully-dirty. Fully-dirty is of full-scope and
>> pinned-dirty is of pinned-scope.
>>
>> So we must mark pinned-dirty or fully-dirty after we start dirty tracking
>> or clear dirty bitmap, to ensure that dirty log is marked right away.
> 
> I was initially convinced by these first three patches, but upon
> further review, I think the premise is wrong.  AIUI, the concern across
> these patches is that our dirty bitmap is only populated with pages
> dirtied by pinning and we only take into account the pinned page dirty
> scope at the time the bitmap is retrieved by the user.  You suppose
> this presents a gap where if a vendor driver has not yet identified
> with a page pinning scope that the entire bitmap should be considered
> dirty regardless of whether that driver later pins pages prior to the
> user retrieving the dirty bitmap.
> 
> I don't think this is how we intended the cooperation between the iommu
> driver and vendor driver to work.  By pinning pages a vendor driver is
> not declaring that only their future dirty page scope is limited to
> pinned pages, instead they're declaring themselves as a participant in
> dirty page tracking and take responsibility for pinning any necessary
> pages.  For example we might extend VFIO_IOMMU_DIRTY_PAGES_FLAG_START
> to trigger a blocking notification to groups to not only begin dirty
> tracking, but also to synchronously register their current device DMA
> footprint.  This patch would require a vendor driver to possibly perform
> a gratuitous page pinning in order to set the scope prior to dirty
> logging being enabled, or else the initial bitmap will be fully dirty.
> 
> Therefore, I don't see that this series is necessary or correct.  Kirti,
> does this match your thinking?
> 

That's correct, Alex, and I agree with you.

> Thinking about these semantics, it seems there might still be an issue
> if a group with non-pinned-page dirty scope is detached with dirty
> logging enabled.  

Hot-unplugging a device while the migration process has started - is 
this scenario supported?

Thanks,
Kirti

> It seems this should in fact fully populate the dirty
> bitmaps at the time it's removed since we don't know the extent of its
> previous DMA, nor will the group be present to trigger the full bitmap
> when the user retrieves the dirty bitmap.  Creating fully populated
> bitmaps at the time tracking is enabled negates our ability to take
> advantage of later enlightenment though.  Thanks,
> 
> Alex
> 
>> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
>>   1 file changed, 22 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index bceda5e8baaa..b0a26e8e0adf 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma)
>>   	dma->bitmap = NULL;
>>   }
>>   
>> -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
>> +static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize)
>>   {
>>   	struct rb_node *p;
>>   	unsigned long pgshift = __ffs(pgsize);
>> @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
>>   	}
>>   }
>>   
>> +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize)
>> +{
>> +	unsigned long pgshift = __ffs(pgsize);
>> +	unsigned long nbits = dma->size >> pgshift;
>> +
>> +	bitmap_set(dma->bitmap, 0, nbits);
>> +}
>> +
>> +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
>> +				     struct vfio_dma *dma)
>> +{
>> +	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
>> +
>> +	if (iommu->pinned_page_dirty_scope)
>> +		vfio_dma_populate_bitmap_pinned(dma, pgsize);
>> +	else if (dma->iommu_mapped)
>> +		vfio_dma_populate_bitmap_full(dma, pgsize);
>> +}
>> +
>>   static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
>>   {
>>   	struct rb_node *n;
>> @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
>>   			}
>>   			return ret;
>>   		}
>> -		vfio_dma_populate_bitmap(dma, pgsize);
>> +		vfio_dma_populate_bitmap(iommu, dma);
>>   	}
>>   	return 0;
>>   }
>> @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>>   	unsigned long shift = bit_offset % BITS_PER_LONG;
>>   	unsigned long leftover;
>>   
>> -	/*
>> -	 * mark all pages dirty if any IOMMU capable device is not able
>> -	 * to report dirty pages and all pages are pinned and mapped.
>> -	 */
>> -	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
>> -		bitmap_set(dma->bitmap, 0, nbits);
>> -
>>   	if (shift) {
>>   		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
>>   				  nbits + shift);
>> @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>>   	struct vfio_dma *dma;
>>   	struct rb_node *n;
>>   	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
>> -	size_t pgsize = (size_t)1 << pgshift;
>>   	int ret;
>>   
>>   	/*
>> @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>>   		 * pages which are marked dirty by vfio_dma_rw()
>>   		 */
>>   		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
>> -		vfio_dma_populate_bitmap(dma, pgsize);
>> +		vfio_dma_populate_bitmap(iommu, dma);
>>   	}
>>   	return 0;
>>   }
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
  2021-01-13 12:35     ` Kirti Wankhede
@ 2021-01-13 15:13       ` Alex Williamson
  0 siblings, 0 replies; 16+ messages in thread
From: Alex Williamson @ 2021-01-13 15:13 UTC (permalink / raw)
  To: Kirti Wankhede
  Cc: Keqian Zhu, linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas,
	Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

On Wed, 13 Jan 2021 18:05:43 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> On 1/13/2021 2:50 AM, Alex Williamson wrote:
> > On Thu, 7 Jan 2021 17:28:57 +0800
> > Keqian Zhu <zhukeqian1@huawei.com> wrote:
> >   
> >> Defer checking whether vfio_dma is of fully-dirty in update_user_bitmap
> >> is easy to lose dirty log. For example, after promoting pinned_scope of
> >> vfio_iommu, vfio_dma is not considered as fully-dirty, then we may lose
> >> dirty log that occurs before vfio_iommu is promoted.
> >>
> >> The key point is that pinned-dirty is not a real dirty tracking way, it
> >> can't continuously track dirty pages, but just restrict dirty scope. It
> >> is essentially the same as fully-dirty. Fully-dirty is of full-scope and
> >> pinned-dirty is of pinned-scope.
> >>
> >> So we must mark pinned-dirty or fully-dirty after we start dirty tracking
> >> or clear dirty bitmap, to ensure that dirty log is marked right away.  
> > 
> > I was initially convinced by these first three patches, but upon
> > further review, I think the premise is wrong.  AIUI, the concern across
> > these patches is that our dirty bitmap is only populated with pages
> > dirtied by pinning and we only take into account the pinned page dirty
> > scope at the time the bitmap is retrieved by the user.  You suppose
> > this presents a gap where if a vendor driver has not yet identified
> > with a page pinning scope that the entire bitmap should be considered
> > dirty regardless of whether that driver later pins pages prior to the
> > user retrieving the dirty bitmap.
> > 
> > I don't think this is how we intended the cooperation between the iommu
> > driver and vendor driver to work.  By pinning pages a vendor driver is
> > not declaring that only their future dirty page scope is limited to
> > pinned pages, instead they're declaring themselves as a participant in
> > dirty page tracking and take responsibility for pinning any necessary
> > pages.  For example we might extend VFIO_IOMMU_DIRTY_PAGES_FLAG_START
> > to trigger a blocking notification to groups to not only begin dirty
> > tracking, but also to synchronously register their current device DMA
> > footprint.  This patch would require a vendor driver to possibly perform
> > a gratuitous page pinning in order to set the scope prior to dirty
> > logging being enabled, or else the initial bitmap will be fully dirty.
> > 
> > Therefore, I don't see that this series is necessary or correct.  Kirti,
> > does this match your thinking?
> >   
> 
> That's correct Alex and I agree with you.
> 
> > Thinking about these semantics, it seems there might still be an issue
> > if a group with non-pinned-page dirty scope is detached with dirty
> > logging enabled.    
> 
> Hot-unplug a device while migration process has started - is this 
> scenario supported?

It's not prevented; it would rely on a userspace policy, right?  The
kernel should do the right thing regardless.  Thanks,

Alex

> > It seems this should in fact fully populate the dirty
> > bitmaps at the time it's removed since we don't know the extent of its
> > previous DMA, nor will the group be present to trigger the full bitmap
> > when the user retrieves the dirty bitmap.  Creating fully populated
> > bitmaps at the time tracking is enabled negates our ability to take
> > advantage of later enlightenment though.  Thanks,
> > 
> > Alex
> >   
> >> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
> >> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> >> ---
> >>   drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
> >>   1 file changed, 22 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index bceda5e8baaa..b0a26e8e0adf 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma)
> >>   	dma->bitmap = NULL;
> >>   }
> >>   
> >> -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> >> +static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize)
> >>   {
> >>   	struct rb_node *p;
> >>   	unsigned long pgshift = __ffs(pgsize);
> >> @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
> >>   	}
> >>   }
> >>   
> >> +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize)
> >> +{
> >> +	unsigned long pgshift = __ffs(pgsize);
> >> +	unsigned long nbits = dma->size >> pgshift;
> >> +
> >> +	bitmap_set(dma->bitmap, 0, nbits);
> >> +}
> >> +
> >> +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
> >> +				     struct vfio_dma *dma)
> >> +{
> >> +	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
> >> +
> >> +	if (iommu->pinned_page_dirty_scope)
> >> +		vfio_dma_populate_bitmap_pinned(dma, pgsize);
> >> +	else if (dma->iommu_mapped)
> >> +		vfio_dma_populate_bitmap_full(dma, pgsize);
> >> +}
> >> +
> >>   static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
> >>   {
> >>   	struct rb_node *n;
> >> @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
> >>   			}
> >>   			return ret;
> >>   		}
> >> -		vfio_dma_populate_bitmap(dma, pgsize);
> >> +		vfio_dma_populate_bitmap(iommu, dma);
> >>   	}
> >>   	return 0;
> >>   }
> >> @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> >>   	unsigned long shift = bit_offset % BITS_PER_LONG;
> >>   	unsigned long leftover;
> >>   
> >> -	/*
> >> -	 * mark all pages dirty if any IOMMU capable device is not able
> >> -	 * to report dirty pages and all pages are pinned and mapped.
> >> -	 */
> >> -	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
> >> -		bitmap_set(dma->bitmap, 0, nbits);
> >> -
> >>   	if (shift) {
> >>   		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
> >>   				  nbits + shift);
> >> @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> >>   	struct vfio_dma *dma;
> >>   	struct rb_node *n;
> >>   	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> >> -	size_t pgsize = (size_t)1 << pgshift;
> >>   	int ret;
> >>   
> >>   	/*
> >> @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> >>   		 * pages which are marked dirty by vfio_dma_rw()
> >>   		 */
> >>   		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
> >> -		vfio_dma_populate_bitmap(dma, pgsize);
> >> +		vfio_dma_populate_bitmap(iommu, dma);
> >>   	}
> >>   	return 0;
> >>   }  
> >   
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking
  2021-01-12 19:53       ` Alex Williamson
@ 2021-01-14 12:11         ` Keqian Zhu
  0 siblings, 0 replies; 16+ messages in thread
From: Keqian Zhu @ 2021-01-14 12:11 UTC (permalink / raw)
  To: Alex Williamson
  Cc: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Kirti Wankhede, Cornelia Huck, Will Deacon, Marc Zyngier,
	Catalin Marinas, Mark Rutland, James Morse, Robin Murphy,
	Joerg Roedel, Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun



On 2021/1/13 3:53, Alex Williamson wrote:
> On Tue, 12 Jan 2021 20:04:38 +0800
> Keqian Zhu <zhukeqian1@huawei.com> wrote:
> 
>> On 2021/1/12 5:49, Alex Williamson wrote:
>>> On Thu, 7 Jan 2021 17:29:00 +0800
>>> Keqian Zhu <zhukeqian1@huawei.com> wrote:
>>>   
>>>> If we detach group during dirty page tracking, we shouldn't remove
>>>> vfio_dma, because dirty log will lose.
>>>>
>>>> But we don't prevent unmap_unpin_all in vfio_iommu_release, because
>>>> under normal procedure, dirty tracking has been stopped.  
>>>
>>> This looks like it's creating a larger problem than it's fixing, it's
>>> not our job to maintain the dirty bitmap regardless of what the user
>>> does.  If the user detaches the last group in a container causing the
>>> mappings within that container to be deconstructed before the user has
>>> collected dirty pages, that sounds like a user error.  A container with
>>> no groups is de-privileged and therefore loses all state.  Thanks,
>>>
>>> Alex  
>>
>> Hi Alex,
>>
>> This looks good to me ;-). That's a reasonable constraint for user behavior.
>>
>> What about replacing this patch with an addition to the uapi document of
>> VFIO_GROUP_UNSET_CONTAINER? User should pay attention to this when call this
>> ioctl during dirty tracking.
> 
> Here's the current uapi comment:
> 
> /**
>  * VFIO_GROUP_UNSET_CONTAINER - _IO(VFIO_TYPE, VFIO_BASE + 5)
>  *
>  * Remove the group from the attached container.  This is the
>  * opposite of the SET_CONTAINER call and returns the group to
>  * an initial state.  All device file descriptors must be released
>  * prior to calling this interface.  When removing the last group
>  * from a container, the IOMMU will be disabled and all state lost,
>  * effectively also returning the VFIO file descriptor to an initial
>  * state.
>  * Return: 0 on success, -errno on failure.
>  * Availability: When attached to container
>  */
> 
> So we already indicate that "all state" of the container is lost when
> removing the last group, I don't see that it's necessarily to
> explicitly include dirty bitmap state beyond that statement.  Without
> mappings there can be no dirty bitmap to track.
OK :-) .

> 
>  > And any comments on other patches? thanks.
> 
> I had a difficult time mapping the commit log to the actual code
> change, I'll likely have some wording suggestions.  Is patch 5/5 still
> necessary if this patch is dropped?  Thanks,
> 
I think the 5th patch is still necessary. vfio_sanity_check_pfn_list() is used to check
whether the pfn_list of each vfio_dma is empty, but we apply this check just for the
external domain. If an iommu-backed domain also pins some pages, then this check fails.
So I think we should use this check only when all domains are about to be removed.

Besides, this patch should keep the "WARN_ON(iommu->notifier.head);" check for the
external domain only.
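
Roughly the shape I have in mind for v2 (untested sketch): keep the
pfn_list walk in vfio_iommu_unmap_unpin_all() as this patch does, but do
the notifier check only where the external (mdev) domain goes away, e.g.
in vfio_iommu_type1_detach_group():

	if (list_empty(&iommu->external_domain->group_list)) {
		/* mdev vendor driver must unregister notifier */
		WARN_ON(iommu->notifier.head);

		if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
			vfio_iommu_unmap_unpin_all(iommu);

		kfree(iommu->external_domain);
		iommu->external_domain = NULL;
	}

and similarly before freeing the external domain in
vfio_iommu_type1_release().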

Thanks,
Keqian

> Alex
> 
>>>> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
>>>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>>>> ---
>>>>  drivers/vfio/vfio_iommu_type1.c | 14 ++++++++++++--
>>>>  1 file changed, 12 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>>> index 26b7eb2a5cfc..9776a059904d 100644
>>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>>> @@ -2373,7 +2373,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>>>>  			if (list_empty(&iommu->external_domain->group_list)) {
>>>>  				vfio_sanity_check_pfn_list(iommu);
>>>>  
>>>> -				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu))
>>>> +				/*
>>>> +				 * During dirty page tracking, we can't remove
>>>> +				 * vfio_dma because dirty log will lose.
>>>> +				 */
>>>> +				if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) &&
>>>> +				    !iommu->dirty_page_tracking)
>>>>  					vfio_iommu_unmap_unpin_all(iommu);
>>>>  
>>>>  				kfree(iommu->external_domain);
>>>> @@ -2406,10 +2411,15 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>>>>  		 * iommu and external domain doesn't exist, then all the
>>>>  		 * mappings go away too. If it's the last domain with iommu and
>>>>  		 * external domain exist, update accounting
>>>> +		 *
>>>> +		 * Note: During dirty page tracking, we can't remove vfio_dma
>>>> +		 * because dirty log will lose. Just update accounting is a good
>>>> +		 * choice.
>>>>  		 */
>>>>  		if (list_empty(&domain->group_list)) {
>>>>  			if (list_is_singular(&iommu->domain_list)) {
>>>> -				if (!iommu->external_domain)
>>>> +				if (!iommu->external_domain &&
>>>> +				    !iommu->dirty_page_tracking)
>>>>  					vfio_iommu_unmap_unpin_all(iommu);
>>>>  				else
>>>>  					vfio_iommu_unmap_unpin_reaccount(iommu);  
>>>
>>> .
>>>   
>>
> 
> .
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
  2021-01-12 21:20   ` Alex Williamson
  2021-01-13 12:35     ` Kirti Wankhede
@ 2021-01-14 13:05     ` Keqian Zhu
  2021-01-14 17:14       ` Alex Williamson
  1 sibling, 1 reply; 16+ messages in thread
From: Keqian Zhu @ 2021-01-14 13:05 UTC (permalink / raw)
  To: Alex Williamson, Kirti Wankhede
  Cc: linux-kernel, linux-arm-kernel, iommu, kvm, kvmarm,
	Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas,
	Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

Hi Alex,

On 2021/1/13 5:20, Alex Williamson wrote:
> On Thu, 7 Jan 2021 17:28:57 +0800
> Keqian Zhu <zhukeqian1@huawei.com> wrote:
> 
>> Defer checking whether vfio_dma is of fully-dirty in update_user_bitmap
>> is easy to lose dirty log. For example, after promoting pinned_scope of
>> vfio_iommu, vfio_dma is not considered as fully-dirty, then we may lose
>> dirty log that occurs before vfio_iommu is promoted.
>>
>> The key point is that pinned-dirty is not a real dirty tracking way, it
>> can't continuously track dirty pages, but just restrict dirty scope. It
>> is essentially the same as fully-dirty. Fully-dirty is of full-scope and
>> pinned-dirty is of pinned-scope.
>>
>> So we must mark pinned-dirty or fully-dirty after we start dirty tracking
>> or clear dirty bitmap, to ensure that dirty log is marked right away.
> 
> I was initially convinced by these first three patches, but upon
> further review, I think the premise is wrong.  AIUI, the concern across
> these patches is that our dirty bitmap is only populated with pages
> dirtied by pinning and we only take into account the pinned page dirty
> scope at the time the bitmap is retrieved by the user.  You suppose
> this presents a gap where if a vendor driver has not yet identified
> with a page pinning scope that the entire bitmap should be considered
> dirty regardless of whether that driver later pins pages prior to the
> user retrieving the dirty bitmap.
Yes, this is my concern. And there are some other scenarios.

1. If a non-pinned iommu-backed domain is detached between starting dirty logging and
retrieving the dirty bitmap, then the dirty log is lost (as you said in the last paragraph).

2. If all vendor drivers have pinned pages (i.e. the iommu pinned_page_dirty_scope is true),
and a vendor driver does a pin/unpin pair between starting dirty logging and retrieving the
dirty bitmap, then the dirty log of those unpinned pages is lost.

> 
> I don't think this is how we intended the cooperation between the iommu
> driver and vendor driver to work.  By pinning pages a vendor driver is
> not declaring that only their future dirty page scope is limited to
> pinned pages, instead they're declaring themselves as a participant in
> dirty page tracking and take responsibility for pinning any necessary
> pages.  For example we might extend VFIO_IOMMU_DIRTY_PAGES_FLAG_START
> to trigger a blocking notification to groups to not only begin dirty
> tracking, but also to synchronously register their current device DMA
> footprint.  This patch would require a vendor driver to possibly perform
> a gratuitous page pinning in order to set the scope prior to dirty
> logging being enabled, or else the initial bitmap will be fully dirty.
I get what you mean ;-). You are saying that there is a time gap between attaching a device
and that device declaring itself dirty traceable.

However, this makes dirty log tracking difficult to manage (I listed two bugs above).
If vfio devices could declare themselves dirty traceable when they attach to the container,
the logic of dirty tracking would be much clearer ;-).
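
For example (a purely hypothetical interface, nothing like this exists today, just to
illustrate the idea):

	/*
	 * Hypothetical: the vendor driver declares at attach time that it
	 * will report dirty pages by pinning, instead of vfio_iommu only
	 * learning this from its first vfio_pin_pages() call.
	 */
	int vfio_group_set_dirty_traceable(struct vfio_group *group,
					   bool traceable);

	/*
	 * vfio_iommu_type1_attach_group() could then update
	 * iommu->pinned_page_dirty_scope right away, so there is no window
	 * where the dirty scope is unknown while dirty logging runs.
	 */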

> 
> Therefore, I don't see that this series is necessary or correct.  Kirti,
> does this match your thinking?
> 
> Thinking about these semantics, it seems there might still be an issue
> if a group with non-pinned-page dirty scope is detached with dirty
> logging enabled.  It seems this should in fact fully populate the dirty
> bitmaps at the time it's removed since we don't know the extent of its
> previous DMA, nor will the group be present to trigger the full bitmap
> when the user retrieves the dirty bitmap.  Creating fully populated
> bitmaps at the time tracking is enabled negates our ability to take
> advantage of later enlightenment though.  Thanks,
> 
Since you want to allow the time gap between attaching a device and the device
declaring itself dirty traceable, I am going to fix these two bugs in patch v2. Do you
agree?

Thanks,
Keqian

> Alex
> 
>> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>> ---
>>  drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
>>  1 file changed, 22 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index bceda5e8baaa..b0a26e8e0adf 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct vfio_dma *dma)
>>  	dma->bitmap = NULL;
>>  }
>>  
>> -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
>> +static void vfio_dma_populate_bitmap_pinned(struct vfio_dma *dma, size_t pgsize)
>>  {
>>  	struct rb_node *p;
>>  	unsigned long pgshift = __ffs(pgsize);
>> @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
>>  	}
>>  }
>>  
>> +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma, size_t pgsize)
>> +{
>> +	unsigned long pgshift = __ffs(pgsize);
>> +	unsigned long nbits = dma->size >> pgshift;
>> +
>> +	bitmap_set(dma->bitmap, 0, nbits);
>> +}
>> +
>> +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
>> +				     struct vfio_dma *dma)
>> +{
>> +	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
>> +
>> +	if (iommu->pinned_page_dirty_scope)
>> +		vfio_dma_populate_bitmap_pinned(dma, pgsize);
>> +	else if (dma->iommu_mapped)
>> +		vfio_dma_populate_bitmap_full(dma, pgsize);
>> +}
>> +
>>  static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
>>  {
>>  	struct rb_node *n;
>> @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
>>  			}
>>  			return ret;
>>  		}
>> -		vfio_dma_populate_bitmap(dma, pgsize);
>> +		vfio_dma_populate_bitmap(iommu, dma);
>>  	}
>>  	return 0;
>>  }
>> @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>>  	unsigned long shift = bit_offset % BITS_PER_LONG;
>>  	unsigned long leftover;
>>  
>> -	/*
>> -	 * mark all pages dirty if any IOMMU capable device is not able
>> -	 * to report dirty pages and all pages are pinned and mapped.
>> -	 */
>> -	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
>> -		bitmap_set(dma->bitmap, 0, nbits);
>> -
>>  	if (shift) {
>>  		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
>>  				  nbits + shift);
>> @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>>  	struct vfio_dma *dma;
>>  	struct rb_node *n;
>>  	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
>> -	size_t pgsize = (size_t)1 << pgshift;
>>  	int ret;
>>  
>>  	/*
>> @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
>>  		 * pages which are marked dirty by vfio_dma_rw()
>>  		 */
>>  		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
>> -		vfio_dma_populate_bitmap(dma, pgsize);
>> +		vfio_dma_populate_bitmap(iommu, dma);
>>  	}
>>  	return 0;
>>  }
> 
> .
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
  2021-01-14 13:05     ` Keqian Zhu
@ 2021-01-14 17:14       ` Alex Williamson
  2021-01-15  9:41         ` Keqian Zhu
  0 siblings, 1 reply; 16+ messages in thread
From: Alex Williamson @ 2021-01-14 17:14 UTC (permalink / raw)
  To: Keqian Zhu
  Cc: Kirti Wankhede, linux-kernel, linux-arm-kernel, iommu, kvm,
	kvmarm, Cornelia Huck, Will Deacon, Marc Zyngier,
	Catalin Marinas, Mark Rutland, James Morse, Robin Murphy,
	Joerg Roedel, Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun

On Thu, 14 Jan 2021 21:05:23 +0800
Keqian Zhu <zhukeqian1@huawei.com> wrote:

> Hi Alex,
> 
> On 2021/1/13 5:20, Alex Williamson wrote:
> > On Thu, 7 Jan 2021 17:28:57 +0800
> > Keqian Zhu <zhukeqian1@huawei.com> wrote:
> >   
> >> Defer checking whether vfio_dma is of fully-dirty in update_user_bitmap
> >> is easy to lose dirty log. For example, after promoting pinned_scope of
> >> vfio_iommu, vfio_dma is not considered as fully-dirty, then we may lose
> >> dirty log that occurs before vfio_iommu is promoted.
> >>
> >> The key point is that pinned-dirty is not a real dirty tracking way, it
> >> can't continuously track dirty pages, but just restrict dirty scope. It
> >> is essentially the same as fully-dirty. Fully-dirty is of full-scope and
> >> pinned-dirty is of pinned-scope.
> >>
> >> So we must mark pinned-dirty or fully-dirty after we start dirty tracking
> >> or clear dirty bitmap, to ensure that dirty log is marked right away.  
> > 
> > I was initially convinced by these first three patches, but upon
> > further review, I think the premise is wrong.  AIUI, the concern across
> > these patches is that our dirty bitmap is only populated with pages
> > dirtied by pinning and we only take into account the pinned page dirty
> > scope at the time the bitmap is retrieved by the user.  You suppose
> > this presents a gap where if a vendor driver has not yet identified
> > with a page pinning scope that the entire bitmap should be considered
> > dirty regardless of whether that driver later pins pages prior to the
> > user retrieving the dirty bitmap.  
> Yes, this is my concern. And there are some other scenarios.
> 
> 1. If a non-pinned iommu-backed domain is detached between starting
> dirty log and retrieving dirty bitmap, then the dirty log is missed
> (As you said in the last paragraph).
> 
> 2. If all vendor drivers pinned (means iommu pinned_scope is true),
> and an vendor driver do pin/unpin pair between starting dirty log and
> retrieving dirty bitmap, then the dirty log of these unpinned pages
> are missed.

Can you identify where this happens?  I believe this assertion is
incorrect.  When dirty logging is enabled vfio_dma_populate_bitmap()
marks all existing pinned pages as dirty.  In the scenario you
describe, the iommu pinned page dirty scope is irrelevant.  We only
track pinned or external DMA access pages for exactly this reason.
Unpinning a page never clears the dirty bitmap, only the user
retrieving the bitmap or disabling dirty logging clears the bitmap.  A
page that has been unpinned is transiently dirty, it is not repopulated
in the bitmap after the user has retrieved the bitmap because the
device no longer has access to it to consider it perpetually dirty.
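
In code terms the relevant piece is roughly the following (simplified,
locking and error handling left out):

	/*
	 * Called when dirty logging is enabled and again after each bitmap
	 * retrieval: every page currently pinned for this vfio_dma gets
	 * (re)marked dirty.
	 */
	static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
	{
		struct rb_node *p;
		unsigned long pgshift = __ffs(pgsize);

		for (p = rb_first(&dma->pfn_list); p; p = rb_next(p)) {
			struct vfio_pfn *vpfn = rb_entry(p, struct vfio_pfn, node);

			bitmap_set(dma->bitmap,
				   (vpfn->iova - dma->iova) >> pgshift, 1);
		}
	}

	/*
	 * In vfio_iova_dirty_bitmap(), after the bitmap is copied to the
	 * user: clear it, then immediately re-mark whatever is still pinned.
	 */
	bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
	vfio_dma_populate_bitmap(dma, pgsize);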

> > I don't think this is how we intended the cooperation between the
> > iommu driver and vendor driver to work.  By pinning pages a vendor
> > driver is not declaring that only their future dirty page scope is
> > limited to pinned pages, instead they're declaring themselves as a
> > participant in dirty page tracking and take responsibility for
> > pinning any necessary pages.  For example we might extend
> > VFIO_IOMMU_DIRTY_PAGES_FLAG_START to trigger a blocking
> > notification to groups to not only begin dirty tracking, but also
> > to synchronously register their current device DMA footprint.  This
> > patch would require a vendor driver to possibly perform a
> > gratuitous page pinning in order to set the scope prior to dirty
> > logging being enabled, or else the initial bitmap will be fully
> > dirty.  
> I get what you mean ;-). You said that there is time gap between we
> attach a device and this device declares itself is dirty traceable.
> 
> However, this makes it difficult to management dirty log tracking (I
> list two bugs). If the vfio devices can declare themselves dirty
> traceable when attach to container, then the logic of dirty tracking
> is much more clear ;-) .

There's only one bug above afaict, which should be easily fixed.  I
think if you actually dig into the problem of a device declaring
itself as dirty tracking capable, when the IOMMU backend works on
groups, not devices, and it's groups that are attached to containers,
you might see that the logic is not so clear.  I also don't see this as
a significant issue, you're focusing on a niche scenario where a device
is hot-added to a VM, which is immediately migrated before the device
identifies itself by pinning pages.  In that scenario the iommu dirty
scope would be overly broad and we'd indicate all pages are dirty.
This errs on the side of reporting too much as dirty, which still
provides a correct result to the user.  The more common scenario would
be migration of a "long" running, stable VM, where drivers are active
and devices have already pinned pages if they intend to.

> > Therefore, I don't see that this series is necessary or correct.
> > Kirti, does this match your thinking?
> > 
> > Thinking about these semantics, it seems there might still be an
> > issue if a group with non-pinned-page dirty scope is detached with
> > dirty logging enabled.  It seems this should in fact fully populate
> > the dirty bitmaps at the time it's removed since we don't know the
> > extent of its previous DMA, nor will the group be present to
> > trigger the full bitmap when the user retrieves the dirty bitmap.
> > Creating fully populated bitmaps at the time tracking is enabled
> > negates our ability to take advantage of later enlightenment
> > though.  Thanks, 
> Since that you want to allow the time gap between we attach device
> and the device declare itself dirty traceable, I am going to fix
> these two bugs in patch v2. Do you agree?

I would consider a patch that fully populates the dirty bitmap if a
non-pinned page scope group is removed from the container while dirty
logging is enabled.  Thanks,

Alex

> >> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for
> >> dirty pages tracking") Signed-off-by: Keqian Zhu
> >> <zhukeqian1@huawei.com> ---
> >>  drivers/vfio/vfio_iommu_type1.c | 33
> >> ++++++++++++++++++++++----------- 1 file changed, 22
> >> insertions(+), 11 deletions(-)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c
> >> b/drivers/vfio/vfio_iommu_type1.c index bceda5e8baaa..b0a26e8e0adf
> >> 100644 --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct
> >> vfio_dma *dma) dma->bitmap = NULL;
> >>  }
> >>  
> >> -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t
> >> pgsize) +static void vfio_dma_populate_bitmap_pinned(struct
> >> vfio_dma *dma, size_t pgsize) {
> >>  	struct rb_node *p;
> >>  	unsigned long pgshift = __ffs(pgsize);
> >> @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct
> >> vfio_dma *dma, size_t pgsize) }
> >>  }
> >>  
> >> +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma,
> >> size_t pgsize) +{
> >> +	unsigned long pgshift = __ffs(pgsize);
> >> +	unsigned long nbits = dma->size >> pgshift;
> >> +
> >> +	bitmap_set(dma->bitmap, 0, nbits);
> >> +}
> >> +
> >> +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
> >> +				     struct vfio_dma *dma)
> >> +{
> >> +	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
> >> +
> >> +	if (iommu->pinned_page_dirty_scope)
> >> +		vfio_dma_populate_bitmap_pinned(dma, pgsize);
> >> +	else if (dma->iommu_mapped)
> >> +		vfio_dma_populate_bitmap_full(dma, pgsize);
> >> +}
> >> +
> >>  static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
> >>  {
> >>  	struct rb_node *n;
> >> @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct
> >> vfio_iommu *iommu) }
> >>  			return ret;
> >>  		}
> >> -		vfio_dma_populate_bitmap(dma, pgsize);
> >> +		vfio_dma_populate_bitmap(iommu, dma);
> >>  	}
> >>  	return 0;
> >>  }
> >> @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user
> >> *bitmap, struct vfio_iommu *iommu, unsigned long shift =
> >> bit_offset % BITS_PER_LONG; unsigned long leftover;
> >>  
> >> -	/*
> >> -	 * mark all pages dirty if any IOMMU capable device is
> >> not able
> >> -	 * to report dirty pages and all pages are pinned and
> >> mapped.
> >> -	 */
> >> -	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
> >> -		bitmap_set(dma->bitmap, 0, nbits);
> >> -
> >>  	if (shift) {
> >>  		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
> >>  				  nbits + shift);
> >> @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user
> >> *bitmap, struct vfio_iommu *iommu, struct vfio_dma *dma;
> >>  	struct rb_node *n;
> >>  	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
> >> -	size_t pgsize = (size_t)1 << pgshift;
> >>  	int ret;
> >>  
> >>  	/*
> >> @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user
> >> *bitmap, struct vfio_iommu *iommu,
> >>  		 * pages which are marked dirty by vfio_dma_rw()
> >>  		 */
> >>  		bitmap_clear(dma->bitmap, 0, dma->size >>
> >> pgshift);
> >> -		vfio_dma_populate_bitmap(dma, pgsize);
> >> +		vfio_dma_populate_bitmap(iommu, dma);
> >>  	}
> >>  	return 0;
> >>  }  
> > 
> > .
> >   
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose
  2021-01-14 17:14       ` Alex Williamson
@ 2021-01-15  9:41         ` Keqian Zhu
  0 siblings, 0 replies; 16+ messages in thread
From: Keqian Zhu @ 2021-01-15  9:41 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Kirti Wankhede, linux-kernel, linux-arm-kernel, iommu, kvm,
	kvmarm, Cornelia Huck, Will Deacon, Marc Zyngier,
	Catalin Marinas, Mark Rutland, James Morse, Robin Murphy,
	Joerg Roedel, Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose,
	Julien Thierry, Andrew Morton, Alexios Zavras, wanghaibin.wang,
	jiangkunkun



On 2021/1/15 1:14, Alex Williamson wrote:
> On Thu, 14 Jan 2021 21:05:23 +0800
> Keqian Zhu <zhukeqian1@huawei.com> wrote:
> 
>> Hi Alex,
>>
>> On 2021/1/13 5:20, Alex Williamson wrote:
>>> On Thu, 7 Jan 2021 17:28:57 +0800
>>> Keqian Zhu <zhukeqian1@huawei.com> wrote:
>>>   
>>>> Defer checking whether vfio_dma is of fully-dirty in update_user_bitmap
>>>> is easy to lose dirty log. For example, after promoting pinned_scope of
>>>> vfio_iommu, vfio_dma is not considered as fully-dirty, then we may lose
>>>> dirty log that occurs before vfio_iommu is promoted.
>>>>
>>>> The key point is that pinned-dirty is not a real dirty tracking way, it
>>>> can't continuously track dirty pages, but just restrict dirty scope. It
>>>> is essentially the same as fully-dirty. Fully-dirty is of full-scope and
>>>> pinned-dirty is of pinned-scope.
>>>>
>>>> So we must mark pinned-dirty or fully-dirty after we start dirty tracking
>>>> or clear dirty bitmap, to ensure that dirty log is marked right away.  
>>>
>>> I was initially convinced by these first three patches, but upon
>>> further review, I think the premise is wrong.  AIUI, the concern across
>>> these patches is that our dirty bitmap is only populated with pages
>>> dirtied by pinning and we only take into account the pinned page dirty
>>> scope at the time the bitmap is retrieved by the user.  You suppose
>>> this presents a gap where if a vendor driver has not yet identified
>>> with a page pinning scope that the entire bitmap should be considered
>>> dirty regardless of whether that driver later pins pages prior to the
>>> user retrieving the dirty bitmap.  
>> Yes, this is my concern. And there are some other scenarios.
>>
>> 1. If a non-pinned iommu-backed domain is detached between starting
>> dirty log and retrieving dirty bitmap, then the dirty log is missed
>> (As you said in the last paragraph).
>>
>> 2. If all vendor drivers pinned (means iommu pinned_scope is true),
>> and an vendor driver do pin/unpin pair between starting dirty log and
>> retrieving dirty bitmap, then the dirty log of these unpinned pages
>> are missed.
> 
> Can you identity where this happen?  I believe this assertion is
> incorrect.  When dirty logging is enabled vfio_dma_populate_bitmap()
> marks all existing pinned pages as dirty.  In the scenario you
> describe, the iommu pinned page dirty scope is irrelevant.  We only
> track pinned or external DMA access pages for exactly this reason.
> Unpinning a page never clears the dirty bitmap, only the user
> retrieving the bitmap or disabling dirty logging clears the bitmap.  A
> page that has been unpinned is transiently dirty, it is not repopulated
> in the bitmap after the user has retrieved the bitmap because the
> device no longer has access to it to consider it perpetually dirty.
Right, thanks for making the logic clearer to me ;-). Then there is just one issue to fix; what I have in mind is sketched below.
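
Roughly, in vfio_iommu_type1_detach_group() (just a sketch; the exact
condition, i.e. whether the departing group itself had pinned-page dirty
scope, still needs more thought):

	/*
	 * Sketch: when a group without pinned-page dirty scope goes away
	 * while dirty logging is enabled, we no longer know its DMA
	 * footprint, so mark every vfio_dma fully dirty.
	 */
	if (iommu->dirty_page_tracking && !iommu->pinned_page_dirty_scope) {
		unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
		struct rb_node *n;

		for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
			struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);

			if (dma->bitmap)
				bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
		}
	}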

> 
>>> I don't think this is how we intended the cooperation between the
>>> iommu driver and vendor driver to work.  By pinning pages a vendor
>>> driver is not declaring that only their future dirty page scope is
>>> limited to pinned pages, instead they're declaring themselves as a
>>> participant in dirty page tracking and take responsibility for
>>> pinning any necessary pages.  For example we might extend
>>> VFIO_IOMMU_DIRTY_PAGES_FLAG_START to trigger a blocking
>>> notification to groups to not only begin dirty tracking, but also
>>> to synchronously register their current device DMA footprint.  This
>>> patch would require a vendor driver to possibly perform a
>>> gratuitous page pinning in order to set the scope prior to dirty
>>> logging being enabled, or else the initial bitmap will be fully
>>> dirty.  
>> I get what you mean ;-). You said that there is time gap between we
>> attach a device and this device declares itself is dirty traceable.
>>
>> However, this makes it difficult to management dirty log tracking (I
>> list two bugs). If the vfio devices can declare themselves dirty
>> traceable when attach to container, then the logic of dirty tracking
>> is much more clear ;-) .
> 
> There's only one bug above afaict, which should be easily fixed.  I
> think if you actually dig into the problem of a device declaring
> themselves as dirty tracking capable, when the IOMMU backend works on
> group, not devices, and it's groups that are attached to containers,
> you might see that the logic is not so clear.  I also don't see this as
> a significant issue, you're focusing on a niche scenario where a device
> is hot-added to a VM, which is immediately migrated before the device
> identifies itself by pinning pages.  In that scenario the iommu dirty
> scope would be overly broad and we'd indicate all pages are dirty.
> This errors on the side of reporting too much is dirty, which still
> provides a correct result to the user.  The more common scenario would
> be migration of a "long" running, stable VM, where drivers are active
> and devices have already pinned pages if they intend to.
>
OK ;-)

Now I am thinking about how we would handle the situation where another dirty tracking
mechanism is added, such as SMMU HTTU. If the iommu can track dirty pages precisely, then
populating the whole pinned scope is not suitable; or should the vendor driver, knowing
that the iommu can track dirty pages, simply not use the pin interface to give vfio_iommu
the pinned-dirty hint?

>>> Therefore, I don't see that this series is necessary or correct.
>>> Kirti, does this match your thinking?
>>>
>>> Thinking about these semantics, it seems there might still be an
>>> issue if a group with non-pinned-page dirty scope is detached with
>>> dirty logging enabled.  It seems this should in fact fully populate
>>> the dirty bitmaps at the time it's removed since we don't know the
>>> extent of its previous DMA, nor will the group be present to
>>> trigger the full bitmap when the user retrieves the dirty bitmap.
>>> Creating fully populated bitmaps at the time tracking is enabled
>>> negates our ability to take advantage of later enlightenment
>>> though.  Thanks, 
>> Since that you want to allow the time gap between we attach device
>> and the device declare itself dirty traceable, I am going to fix
>> these two bugs in patch v2. Do you agree?
> 
> I would consider a patch that fully populates the dirty bitmap if a
> non-pinned page scope group is removed from the container while dirty
> logging is enabled.  Thanks,
I have sent it, thanks.

Keqian

> 
> Alex
> 
>>>> Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for
>>>> dirty pages tracking") Signed-off-by: Keqian Zhu
>>>> <zhukeqian1@huawei.com> ---
>>>>  drivers/vfio/vfio_iommu_type1.c | 33
>>>> ++++++++++++++++++++++----------- 1 file changed, 22
>>>> insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c
>>>> b/drivers/vfio/vfio_iommu_type1.c index bceda5e8baaa..b0a26e8e0adf
>>>> 100644 --- a/drivers/vfio/vfio_iommu_type1.c
>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>>> @@ -224,7 +224,7 @@ static void vfio_dma_bitmap_free(struct
>>>> vfio_dma *dma) dma->bitmap = NULL;
>>>>  }
>>>>  
>>>> -static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t
>>>> pgsize) +static void vfio_dma_populate_bitmap_pinned(struct
>>>> vfio_dma *dma, size_t pgsize) {
>>>>  	struct rb_node *p;
>>>>  	unsigned long pgshift = __ffs(pgsize);
>>>> @@ -236,6 +236,25 @@ static void vfio_dma_populate_bitmap(struct
>>>> vfio_dma *dma, size_t pgsize) }
>>>>  }
>>>>  
>>>> +static void vfio_dma_populate_bitmap_full(struct vfio_dma *dma,
>>>> size_t pgsize) +{
>>>> +	unsigned long pgshift = __ffs(pgsize);
>>>> +	unsigned long nbits = dma->size >> pgshift;
>>>> +
>>>> +	bitmap_set(dma->bitmap, 0, nbits);
>>>> +}
>>>> +
>>>> +static void vfio_dma_populate_bitmap(struct vfio_iommu *iommu,
>>>> +				     struct vfio_dma *dma)
>>>> +{
>>>> +	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
>>>> +
>>>> +	if (iommu->pinned_page_dirty_scope)
>>>> +		vfio_dma_populate_bitmap_pinned(dma, pgsize);
>>>> +	else if (dma->iommu_mapped)
>>>> +		vfio_dma_populate_bitmap_full(dma, pgsize);
>>>> +}
>>>> +
>>>>  static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
>>>>  {
>>>>  	struct rb_node *n;
>>>> @@ -257,7 +276,7 @@ static int vfio_dma_bitmap_alloc_all(struct
>>>> vfio_iommu *iommu) }
>>>>  			return ret;
>>>>  		}
>>>> -		vfio_dma_populate_bitmap(dma, pgsize);
>>>> +		vfio_dma_populate_bitmap(iommu, dma);
>>>>  	}
>>>>  	return 0;
>>>>  }
>>>> @@ -987,13 +1006,6 @@ static int update_user_bitmap(u64 __user
>>>> *bitmap, struct vfio_iommu *iommu, unsigned long shift =
>>>> bit_offset % BITS_PER_LONG; unsigned long leftover;
>>>>  
>>>> -	/*
>>>> -	 * mark all pages dirty if any IOMMU capable device is
>>>> not able
>>>> -	 * to report dirty pages and all pages are pinned and
>>>> mapped.
>>>> -	 */
>>>> -	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
>>>> -		bitmap_set(dma->bitmap, 0, nbits);
>>>> -
>>>>  	if (shift) {
>>>>  		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
>>>>  				  nbits + shift);
>>>> @@ -1019,7 +1031,6 @@ static int vfio_iova_dirty_bitmap(u64 __user
>>>> *bitmap, struct vfio_iommu *iommu, struct vfio_dma *dma;
>>>>  	struct rb_node *n;
>>>>  	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
>>>> -	size_t pgsize = (size_t)1 << pgshift;
>>>>  	int ret;
>>>>  
>>>>  	/*
>>>> @@ -1055,7 +1066,7 @@ static int vfio_iova_dirty_bitmap(u64 __user
>>>> *bitmap, struct vfio_iommu *iommu,
>>>>  		 * pages which are marked dirty by vfio_dma_rw()
>>>>  		 */
>>>>  		bitmap_clear(dma->bitmap, 0, dma->size >>
>>>> pgshift);
>>>> -		vfio_dma_populate_bitmap(dma, pgsize);
>>>> +		vfio_dma_populate_bitmap(iommu, dma);
>>>>  	}
>>>>  	return 0;
>>>>  }  
>>>
>>> .
>>>   
>>
> 
> .
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-01-15  9:42 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-07  9:28 [PATCH 0/5] vfio/iommu_type1: Some fixes about dirty tracking Keqian Zhu
2021-01-07  9:28 ` [PATCH 1/5] vfio/iommu_type1: Fixes vfio_dma_populate_bitmap to avoid dirty lose Keqian Zhu
2021-01-12 21:20   ` Alex Williamson
2021-01-13 12:35     ` Kirti Wankhede
2021-01-13 15:13       ` Alex Williamson
2021-01-14 13:05     ` Keqian Zhu
2021-01-14 17:14       ` Alex Williamson
2021-01-15  9:41         ` Keqian Zhu
2021-01-07  9:28 ` [PATCH 2/5] vfio/iommu_type1: Populate dirty bitmap for new vfio_dma Keqian Zhu
2021-01-07  9:28 ` [PATCH 3/5] vfio/iommu_type1: Populate dirty bitmap when attach group Keqian Zhu
2021-01-07  9:29 ` [PATCH 4/5] vfio/iommu_type1: Carefully use unmap_unpin_all during dirty tracking Keqian Zhu
2021-01-11 21:49   ` Alex Williamson
2021-01-12 12:04     ` Keqian Zhu
2021-01-12 19:53       ` Alex Williamson
2021-01-14 12:11         ` Keqian Zhu
2021-01-07  9:29 ` [PATCH 5/5] vfio/iommu_type1: Move sanity_check_pfn_list to unmap_unpin_all Keqian Zhu
