* [PATCH 1/4] drm/radeon: stop using pages with drm_prime_sg_to_page_addr_arrays
From: Christian König @ 2020-11-04 13:00 UTC
  To: dri-devel

This is deprecated.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/radeon/radeon_ttm.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 95038ac3382e..f41fcee35f70 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -494,8 +494,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
 	if (r)
 		goto release_sg;
 
-	drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
-					 gtt->ttm.dma_address, ttm->num_pages);
+	drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
+					 ttm->num_pages);
 
 	return 0;
 
@@ -673,8 +673,9 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
 	}
 
 	if (slave && ttm->sg) {
-		drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
-						 gtt->ttm.dma_address, ttm->num_pages);
+		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
+						 gtt->ttm.dma_address,
+						 ttm->num_pages);
 		return 0;
 	}
 
-- 
2.25.1


* [PATCH 2/4] drm/amdgpu: stop using pages with drm_prime_sg_to_page_addr_arrays
From: Christian König @ 2020-11-04 13:00 UTC
  To: dri-devel

This is deprecated.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index c01c060e4ac5..d6781b479839 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1036,8 +1036,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
 		goto release_sg;
 
 	/* convert SG to linear array of pages and dma addresses */
-	drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
-					 gtt->ttm.dma_address, ttm->num_pages);
+	drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
+					 ttm->num_pages);
 
 	return 0;
 
@@ -1382,7 +1382,7 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
 			ttm->sg = sgt;
 		}
 
-		drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
+		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
 						 gtt->ttm.dma_address,
 						 ttm->num_pages);
 		return 0;
-- 
2.25.1


* [PATCH 3/4] drm/nouveau: stop using pages with drm_prime_sg_to_page_addr_arrays
From: Christian König @ 2020-11-04 13:00 UTC
  To: dri-devel

This is deprecated; also drop the comment about faults.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/nouveau/nouveau_bo.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 8133377d865d..c1847cf87952 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1317,9 +1317,9 @@ nouveau_ttm_tt_populate(struct ttm_bo_device *bdev,
 		return 0;
 
 	if (slave && ttm->sg) {
-		/* make userspace faulting work */
-		drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
-						 ttm_dma->dma_address, ttm->num_pages);
+		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
+						 ttm_dma->dma_address,
+						 ttm->num_pages);
 		return 0;
 	}
 
-- 
2.25.1


* [PATCH 4/4] drm/prime: document that using the page array is deprecated v3
From: Christian König @ 2020-11-04 13:00 UTC
  To: dri-devel

We have recurring requests on this, so better document that
this approach doesn't work and dma_buf_mmap() needs to be used instead.

v2: split it into two functions
v3: rebased on latest changes

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     |  9 ++-
 drivers/gpu/drm/drm_prime.c                 | 64 +++++++++++++--------
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  3 +-
 drivers/gpu/drm/mediatek/mtk_drm_gem.c      |  2 +-
 drivers/gpu/drm/msm/msm_gem.c               |  2 +-
 drivers/gpu/drm/nouveau/nouveau_bo.c        |  5 +-
 drivers/gpu/drm/omapdrm/omap_gem.c          |  3 +-
 drivers/gpu/drm/radeon/radeon_ttm.c         |  9 ++-
 drivers/gpu/drm/vgem/vgem_drv.c             |  3 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  4 +-
 include/drm/drm_prime.h                     |  7 ++-
 11 files changed, 60 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index d6781b479839..b151c28de978 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1036,8 +1036,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
 		goto release_sg;
 
 	/* convert SG to linear array of pages and dma addresses */
-	drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
-					 ttm->num_pages);
+	drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+				       ttm->num_pages);
 
 	return 0;
 
@@ -1382,9 +1382,8 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
 			ttm->sg = sgt;
 		}
 
-		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
-						 gtt->ttm.dma_address,
-						 ttm->num_pages);
+		drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+					       ttm->num_pages);
 		return 0;
 	}
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index a7b61c2d9190..d181a5de101b 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -984,44 +984,58 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
 EXPORT_SYMBOL(drm_gem_prime_import);
 
 /**
- * drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
+ * drm_prime_sg_to_page_array - convert an sg table into a page array
  * @sgt: scatter-gather table to convert
- * @pages: optional array of page pointers to store the page array in
- * @addrs: optional array to store the dma bus address of each page
+ * @pages: array of page pointers to store the pages in
+ * @max_entries: size of the passed-in array
+ *
+ * Exports an sg table into an array of pages.
+ *
+ * This function is deprecated; its use is strongly discouraged.
+ * The page array is only useful for page faults and those can corrupt fields
+ * in the struct page if they are not handled by the exporting driver.
+ */
+int __deprecated drm_prime_sg_to_page_array(struct sg_table *sgt,
+					    struct page **pages,
+					    int max_entries)
+{
+	struct sg_page_iter page_iter;
+	struct page **p = pages;
+
+	for_each_sgtable_page(sgt, &page_iter, 0) {
+		if (WARN_ON(p - pages >= max_entries))
+			return -1;
+		*p++ = sg_page_iter_page(&page_iter);
+	}
+	return 0;
+}
+EXPORT_SYMBOL(drm_prime_sg_to_page_array);
+
+/**
+ * drm_prime_sg_to_dma_addr_array - convert an sg table into a dma addr array
+ * @sgt: scatter-gather table to convert
+ * @addrs: array to store the dma bus address of each page
  * @max_entries: size of both the passed-in arrays
  *
- * Exports an sg table into an array of pages and addresses. This is currently
- * required by the TTM driver in order to do correct fault handling.
+ * Exports an sg table into an array of addresses.
  *
- * Drivers can use this in their &drm_driver.gem_prime_import_sg_table
+ * Drivers should use this in their &drm_driver.gem_prime_import_sg_table
  * implementation.
  */
-int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
-				     dma_addr_t *addrs, int max_entries)
+int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
+				   int max_entries)
 {
 	struct sg_dma_page_iter dma_iter;
-	struct sg_page_iter page_iter;
-	struct page **p = pages;
 	dma_addr_t *a = addrs;
 
-	if (pages) {
-		for_each_sgtable_page(sgt, &page_iter, 0) {
-			if (WARN_ON(p - pages >= max_entries))
-				return -1;
-			*p++ = sg_page_iter_page(&page_iter);
-		}
-	}
-	if (addrs) {
-		for_each_sgtable_dma_page(sgt, &dma_iter, 0) {
-			if (WARN_ON(a - addrs >= max_entries))
-				return -1;
-			*a++ = sg_page_iter_dma_address(&dma_iter);
-		}
+	for_each_sgtable_dma_page(sgt, &dma_iter, 0) {
+		if (WARN_ON(a - addrs >= max_entries))
+			return -1;
+		*a++ = sg_page_iter_dma_address(&dma_iter);
 	}
-
 	return 0;
 }
-EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
+EXPORT_SYMBOL(drm_prime_sg_to_dma_addr_array);
 
 /**
  * drm_prime_gem_destroy - helper to clean up a PRIME-imported GEM object
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 135fbff6fecf..8c04b8e8054c 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -133,8 +133,7 @@ struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
 		goto fail;
 	}
 
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages,
-					       NULL, npages);
+	ret = drm_prime_sg_to_page_array(sgt, etnaviv_obj->pages, npages);
 	if (ret)
 		goto fail;
 
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
index cdd1a6e61564..339854338f7e 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -258,7 +258,7 @@ void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
 	if (!mtk_gem->pages)
 		goto out;
 
-	drm_prime_sg_to_page_addr_arrays(sgt, mtk_gem->pages, NULL, npages);
+	drm_prime_sg_to_page_array(sgt, mtk_gem->pages, npages);
 
 	mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP,
 			       pgprot_writecombine(PAGE_KERNEL));
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 311721ceee50..f995cb02b63d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -1180,7 +1180,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 		goto fail;
 	}
 
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
+	ret = drm_prime_sg_to_page_array(sgt, msm_obj->pages, npages);
 	if (ret) {
 		mutex_unlock(&msm_obj->lock);
 		goto fail;
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index c1847cf87952..615f6abbd3a9 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1317,9 +1317,8 @@ nouveau_ttm_tt_populate(struct ttm_bo_device *bdev,
 		return 0;
 
 	if (slave && ttm->sg) {
-		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
-						 ttm_dma->dma_address,
-						 ttm->num_pages);
+		drm_prime_sg_to_dma_addr_array(ttm->sg, ttm_dma->dma_address,
+					       ttm->num_pages);
 		return 0;
 	}
 
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index f063f5a04fb0..3adf23042d9d 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -1323,8 +1323,7 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
 		}
 
 		omap_obj->pages = pages;
-		ret = drm_prime_sg_to_page_addr_arrays(sgt, pages, NULL,
-						       npages);
+		ret = drm_prime_sg_to_page_array(sgt, pages, npages);
 		if (ret) {
 			omap_gem_free_object(obj);
 			obj = ERR_PTR(-ENOMEM);
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index f41fcee35f70..750d1bae9f1f 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -494,8 +494,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
 	if (r)
 		goto release_sg;
 
-	drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
-					 ttm->num_pages);
+	drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+				       ttm->num_pages);
 
 	return 0;
 
@@ -673,9 +673,8 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
 	}
 
 	if (slave && ttm->sg) {
-		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
-						 gtt->ttm.dma_address,
-						 ttm->num_pages);
+		drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+					       ttm->num_pages);
 		return 0;
 	}
 
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index ea0eecae5153..e505e5a291b3 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -356,8 +356,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	}
 
 	obj->pages_pin_count++; /* perma-pinned */
-	drm_prime_sg_to_page_addr_arrays(obj->table, obj->pages, NULL,
-					npages);
+	drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
 	return &obj->base;
 }
 
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..c937ef4e437e 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -220,8 +220,8 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
 
 	xen_obj->sgt_imported = sgt;
 
-	ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
-					       NULL, xen_obj->num_pages);
+	ret = drm_prime_sg_to_page_array(sgt, xen_obj->pages,
+					 xen_obj->num_pages);
 	if (ret < 0)
 		return ERR_PTR(ret);
 
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index 0991a47a1567..54f2c58305d2 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -105,8 +105,9 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
 
 void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg);
 
-int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
-				     dma_addr_t *addrs, int max_pages);
-
+int drm_prime_sg_to_page_array(struct sg_table *sgt, struct page **pages,
+			       int max_pages);
+int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
+				   int max_pages);
 
 #endif /* __DRM_PRIME_H__ */
-- 
2.25.1

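To make the new split concrete, a minimal sketch of an importer using the
two helpers this patch introduces (example_obj and its fields are
hypothetical; the vgem, xen and TTM hunks above show the real callers):

#include <drm/drm_prime.h>

struct example_obj {
	struct page **pages;		/* npages entries, may stay unused */
	dma_addr_t *dma_address;	/* npages entries */
};

static int example_import_sg(struct example_obj *obj, struct sg_table *sgt,
			     unsigned int npages)
{
	int ret;

	/* preferred: fill only the device-side DMA addresses */
	ret = drm_prime_sg_to_dma_addr_array(sgt, obj->dma_address, npages);
	if (ret)
		return ret;

	/* deprecated: CPU-side page pointers, only where a driver still
	 * genuinely needs them
	 */
	return drm_prime_sg_to_page_array(sgt, obj->pages, npages);
}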

* Re: [PATCH 4/4] drm/prime: document that using the page array is deprecated v3
From: Daniel Vetter @ 2020-11-04 16:40 UTC
  To: Christian König; +Cc: dri-devel

On Wed, Nov 04, 2020 at 02:00:24PM +0100, Christian König wrote:
> We have recurring requests on this, so better document that
> this approach doesn't work and dma_buf_mmap() needs to be used instead.
> 
> v2: split it into two functions
> v3: rebased on latest changes
> 
> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     |  9 ++-
>  drivers/gpu/drm/drm_prime.c                 | 64 +++++++++++++--------
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  3 +-
>  drivers/gpu/drm/mediatek/mtk_drm_gem.c      |  2 +-
>  drivers/gpu/drm/msm/msm_gem.c               |  2 +-
>  drivers/gpu/drm/nouveau/nouveau_bo.c        |  5 +-
>  drivers/gpu/drm/omapdrm/omap_gem.c          |  3 +-
>  drivers/gpu/drm/radeon/radeon_ttm.c         |  9 ++-
>  drivers/gpu/drm/vgem/vgem_drv.c             |  3 +-
>  drivers/gpu/drm/xen/xen_drm_front_gem.c     |  4 +-
>  include/drm/drm_prime.h                     |  7 ++-
>  11 files changed, 60 insertions(+), 51 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index d6781b479839..b151c28de978 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -1036,8 +1036,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
>  		goto release_sg;
>  
>  	/* convert SG to linear array of pages and dma addresses */
> -	drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> -					 ttm->num_pages);
> +	drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> +				       ttm->num_pages);
>  
>  	return 0;
>  
> @@ -1382,9 +1382,8 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
>  			ttm->sg = sgt;
>  		}
>  
> -		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> -						 gtt->ttm.dma_address,
> -						 ttm->num_pages);
> +		drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> +					       ttm->num_pages);
>  		return 0;
>  	}
>  
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index a7b61c2d9190..d181a5de101b 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -984,44 +984,58 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
>  EXPORT_SYMBOL(drm_gem_prime_import);
>  
>  /**
> - * drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
> + * drm_prime_sg_to_page_array - convert an sg table into a page array
>   * @sgt: scatter-gather table to convert
> - * @pages: optional array of page pointers to store the page array in
> - * @addrs: optional array to store the dma bus address of each page
> + * @pages: array of page pointers to store the pages in
> + * @max_entries: size of the passed-in array
> + *
> + * Exports an sg table into an array of pages.
> + *
> + * This function is deprecated; its use is strongly discouraged.
> + * The page array is only useful for page faults and those can corrupt fields
> + * in the struct page if they are not handled by the exporting driver.

I think this should have a

"Use dma_buf_mmap() to redirect userspace memory mappings to the exporter,
and dma_buf_vmap() and dma_buf_vunmap() for cpu access."
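
Spelled out, such a redirect in an importer's mmap path looks roughly like
this (importer_gem_mmap() is a hypothetical callback and the error code is
illustrative; dma_buf_mmap() and the import_attach member are the real API):

#include <linux/dma-buf.h>
#include <drm/drm_gem.h>

static int importer_gem_mmap(struct drm_gem_object *obj,
			     struct vm_area_struct *vma)
{
	/* imported buffers must be mapped by the exporter */
	if (obj->import_attach)
		return dma_buf_mmap(obj->import_attach->dmabuf, vma, 0);

	/* ... the driver's native mmap path ... */
	return -EOPNOTSUPP;
}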

> + */
> +int __deprecated drm_prime_sg_to_page_array(struct sg_table *sgt,

The __deprecated here does nothing I think, and in headers it's frowned
upon because it just causes warnings no one fixes anyway.

Otherwise lgtm, and it pretty much means that if we can cut these drivers
over to the gem shmem helpers, then the problems should all be solved, since
I fixed the helpers already.

With the nits addressed:

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

> +					    struct page **pages,
> +					    int max_entries)
> +{
> +	struct sg_page_iter page_iter;
> +	struct page **p = pages;
> +
> +	for_each_sgtable_page(sgt, &page_iter, 0) {
> +		if (WARN_ON(p - pages >= max_entries))
> +			return -1;
> +		*p++ = sg_page_iter_page(&page_iter);
> +	}
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_prime_sg_to_page_array);
> +
> +/**
> + * drm_prime_sg_to_dma_addr_array - convert an sg table into a dma addr array
> + * @sgt: scatter-gather table to convert
> + * @addrs: array to store the dma bus address of each page
>   * @max_entries: size of both the passed-in arrays
>   *
> - * Exports an sg table into an array of pages and addresses. This is currently
> - * required by the TTM driver in order to do correct fault handling.
> + * Exports an sg table into an array of addresses.
>   *
> - * Drivers can use this in their &drm_driver.gem_prime_import_sg_table
> + * Drivers should use this in their &drm_driver.gem_prime_import_sg_table
>   * implementation.
>   */
> -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> -				     dma_addr_t *addrs, int max_entries)
> +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> +				   int max_entries)
>  {
>  	struct sg_dma_page_iter dma_iter;
> -	struct sg_page_iter page_iter;
> -	struct page **p = pages;
>  	dma_addr_t *a = addrs;
>  
> -	if (pages) {
> -		for_each_sgtable_page(sgt, &page_iter, 0) {
> -			if (WARN_ON(p - pages >= max_entries))
> -				return -1;
> -			*p++ = sg_page_iter_page(&page_iter);
> -		}
> -	}
> -	if (addrs) {
> -		for_each_sgtable_dma_page(sgt, &dma_iter, 0) {
> -			if (WARN_ON(a - addrs >= max_entries))
> -				return -1;
> -			*a++ = sg_page_iter_dma_address(&dma_iter);
> -		}
> +	for_each_sgtable_dma_page(sgt, &dma_iter, 0) {
> +		if (WARN_ON(a - addrs >= max_entries))
> +			return -1;
> +		*a++ = sg_page_iter_dma_address(&dma_iter);
>  	}
> -
>  	return 0;
>  }
> -EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
> +EXPORT_SYMBOL(drm_prime_sg_to_dma_addr_array);
>  
>  /**
>   * drm_prime_gem_destroy - helper to clean up a PRIME-imported GEM object
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 135fbff6fecf..8c04b8e8054c 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -133,8 +133,7 @@ struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
>  		goto fail;
>  	}
>  
> -	ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages,
> -					       NULL, npages);
> +	ret = drm_prime_sg_to_page_array(sgt, etnaviv_obj->pages, npages);
>  	if (ret)
>  		goto fail;
>  
> diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> index cdd1a6e61564..339854338f7e 100644
> --- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> +++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> @@ -258,7 +258,7 @@ void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
>  	if (!mtk_gem->pages)
>  		goto out;
>  
> -	drm_prime_sg_to_page_addr_arrays(sgt, mtk_gem->pages, NULL, npages);
> +	drm_prime_sg_to_page_array(sgt, mtk_gem->pages, npages);
>  
>  	mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP,
>  			       pgprot_writecombine(PAGE_KERNEL));
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index 311721ceee50..f995cb02b63d 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -1180,7 +1180,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
>  		goto fail;
>  	}
>  
> -	ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
> +	ret = drm_prime_sg_to_page_array(sgt, msm_obj->pages, npages);
>  	if (ret) {
>  		mutex_unlock(&msm_obj->lock);
>  		goto fail;
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
> index c1847cf87952..615f6abbd3a9 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
> @@ -1317,9 +1317,8 @@ nouveau_ttm_tt_populate(struct ttm_bo_device *bdev,
>  		return 0;
>  
>  	if (slave && ttm->sg) {
> -		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> -						 ttm_dma->dma_address,
> -						 ttm->num_pages);
> +		drm_prime_sg_to_dma_addr_array(ttm->sg, ttm_dma->dma_address,
> +					       ttm->num_pages);
>  		return 0;
>  	}
>  
> diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
> index f063f5a04fb0..3adf23042d9d 100644
> --- a/drivers/gpu/drm/omapdrm/omap_gem.c
> +++ b/drivers/gpu/drm/omapdrm/omap_gem.c
> @@ -1323,8 +1323,7 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
>  		}
>  
>  		omap_obj->pages = pages;
> -		ret = drm_prime_sg_to_page_addr_arrays(sgt, pages, NULL,
> -						       npages);
> +		ret = drm_prime_sg_to_page_array(sgt, pages, npages);
>  		if (ret) {
>  			omap_gem_free_object(obj);
>  			obj = ERR_PTR(-ENOMEM);
> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> index f41fcee35f70..750d1bae9f1f 100644
> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> @@ -494,8 +494,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
>  	if (r)
>  		goto release_sg;
>  
> -	drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> -					 ttm->num_pages);
> +	drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> +				       ttm->num_pages);
>  
>  	return 0;
>  
> @@ -673,9 +673,8 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
>  	}
>  
>  	if (slave && ttm->sg) {
> -		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> -						 gtt->ttm.dma_address,
> -						 ttm->num_pages);
> +		drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> +					       ttm->num_pages);
>  		return 0;
>  	}
>  
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index ea0eecae5153..e505e5a291b3 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -356,8 +356,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
>  	}
>  
>  	obj->pages_pin_count++; /* perma-pinned */
> -	drm_prime_sg_to_page_addr_arrays(obj->table, obj->pages, NULL,
> -					npages);
> +	drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
>  	return &obj->base;
>  }
>  
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..c937ef4e437e 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -220,8 +220,8 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
>  
>  	xen_obj->sgt_imported = sgt;
>  
> -	ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
> -					       NULL, xen_obj->num_pages);
> +	ret = drm_prime_sg_to_page_array(sgt, xen_obj->pages,
> +					 xen_obj->num_pages);
>  	if (ret < 0)
>  		return ERR_PTR(ret);
>  
> diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
> index 0991a47a1567..54f2c58305d2 100644
> --- a/include/drm/drm_prime.h
> +++ b/include/drm/drm_prime.h
> @@ -105,8 +105,9 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
>  
>  void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg);
>  
> -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> -				     dma_addr_t *addrs, int max_pages);
> -
> +int drm_prime_sg_to_page_array(struct sg_table *sgt, struct page **pages,
> +			       int max_pages);
> +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> +				   int max_pages);
>  
>  #endif /* __DRM_PRIME_H__ */
> -- 
> 2.25.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: [PATCH 1/4] drm/radeon: stop using pages with drm_prime_sg_to_page_addr_arrays
From: Daniel Vetter @ 2020-11-04 17:38 UTC
  To: Christian König; +Cc: dri-devel

On Wed, Nov 04, 2020 at 02:00:21PM +0100, Christian König wrote:
> This is deprecated.
> 
> Signed-off-by: Christian König <christian.koenig@amd.com>

So I tried to prove to myself that ttm doesn't access ->pages for these
cases, and kinda couldn't. We still seem to allocate the pages array and
all that, and there's lots of code using ->pages all over. And between
ttm_bo_type_sg and TTM_PAGE_FLAG_SG I didn't manage to chase a whole lot
of paths to their full conclusion.

So I reduced my ambitions and wanted to prove that at least for dma-buf
imports aka ttm_bo_type_sg, we're guaranteed that we don't try to mmap
these to userspace. And also failed to find that check.

btw this is across all drivers, mostly ttm code, not radeon specific.

So conclusion, still a mess here that at least I can't see through clearly
:-/ here = ttm_tt and the entire backing storage handling and everything
that ties into it. Probably the area that still has the most midlayer feel
to ttm, with all the refactoring still in flight.

tldr; tried to review patches 1-3, gave up.

Cheers, Daniel

> ---
>  drivers/gpu/drm/radeon/radeon_ttm.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> index 95038ac3382e..f41fcee35f70 100644
> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> @@ -494,8 +494,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
>  	if (r)
>  		goto release_sg;
>  
> -	drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
> -					 gtt->ttm.dma_address, ttm->num_pages);
> +	drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> +					 ttm->num_pages);
>  
>  	return 0;
>  
> @@ -673,8 +673,9 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
>  	}
>  
>  	if (slave && ttm->sg) {
> -		drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
> -						 gtt->ttm.dma_address, ttm->num_pages);
> +		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> +						 gtt->ttm.dma_address,
> +						 ttm->num_pages);
>  		return 0;
>  	}
>  
> -- 
> 2.25.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: [PATCH 1/4] drm/radeon: stop using pages with drm_prime_sg_to_page_addr_arrays
From: Christian König @ 2020-11-05 12:12 UTC
  To: Daniel Vetter; +Cc: dri-devel

On 04.11.20 at 18:38, Daniel Vetter wrote:
> On Wed, Nov 04, 2020 at 02:00:21PM +0100, Christian König wrote:
>> This is deprecated.
>>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
> So I tried to prove to myself that ttm doesn't access ->pages for these
> cases, and kinda couldn't. We still seem to allocate the pages array and
> all that, and there's lots of code using ->pages all over. And between
> ttm_bo_type_sg and TTM_PAGE_FLAG_SG I didn't manage to chase a whole lot
> of paths to their full conclusion.

Nope, see the amdgpu code:

> if (ttm_sg_tt_init(&gtt->ttm, bo, page_flags, caching)) {

And then what ttm_sg_tt_init() does:

>         if (page_flags & TTM_PAGE_FLAG_SG)
>                 ret = ttm_sg_tt_alloc_page_directory(ttm);
>         else
>                 ret = ttm_dma_tt_alloc_page_directory(ttm);

And then finally what ttm_sg_tt_alloc_page_directory() does:

ttm->dma_address = kvmalloc_array(ttm->num_pages,....

ttm->pages should be NULL in this case if I'm not completely mistaken.

For imported DMA-bufs we haven't had a ttm->pages in amdgpu for
quite a while and that works perfectly fine.
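
Putting those pieces together, ttm_sg_tt_alloc_page_directory() boils down
to roughly the following (a sketch paraphrased from the TTM code quoted
above; the exact field layout may differ between kernel versions):

static int ttm_sg_tt_alloc_page_directory(struct ttm_dma_tt *ttm)
{
	/* only the DMA address array is allocated; ttm->pages
	 * stays NULL for dma-buf imports
	 */
	ttm->dma_address = kvmalloc_array(ttm->ttm.num_pages,
					  sizeof(*ttm->dma_address),
					  GFP_KERNEL | __GFP_ZERO);
	if (!ttm->dma_address)
		return -ENOMEM;
	return 0;
}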




> So I reduced my ambitions and wanted to prove that at least for dma-buf
> imports aka ttm_bo_type_sg, we're guaranteed that we don't try to mmap
> these to userspace. And also failed to find that check.

See ttm_bo_vm_reserve():
>         /*
>          * Refuse to fault imported pages. This should be handled
>          * (if at all) by redirecting mmap to the exporter.
>          */
>         if (bo->ttm && (bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) {
>                 dma_resv_unlock(bo->base.resv);
>                 return VM_FAULT_SIGBUS;
>         }


> btw this is across all drivers, mostly ttm code, not radeon specific.

Well, instead of those patches we could just as well switch radeon and
nouveau over to using ttm_sg_tt_init() (or merge ttm_sg_tt_init()
into ttm_dma_tt_init()).

That's probably the much cleaner approach, but when I wrote 
ttm_sg_tt_init() I wasn't sure if radeon/nouveau were using ttm->pages
for something, e.g. basically the same concern you have.
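
As a rough sketch of that alternative, radeon_ttm_tt_create() would switch
its init call over like this (hypothetical, mirroring the amdgpu call
quoted above; ttm_cached is just an illustrative caching choice):

	/* never allocate ->pages for TTM_PAGE_FLAG_SG ttm_tts */
	if (ttm_sg_tt_init(&gtt->ttm, bo, page_flags, ttm_cached)) {
		kfree(gtt);
		return NULL;
	}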

Regards,
Christian.

> So conclusion, still a mess here that at least I can't see through clearly
> :-/ here = ttm_tt and the entire backing storage handling and everything
> that ties into it. Probably the area that still has the most midlayer feel
> to ttm, with all the refactoring still in flight.
>
> tldr; tried to review patches 1-3, gave up.
>
> Cheers, Daniel
>
>> ---
>>   drivers/gpu/drm/radeon/radeon_ttm.c | 9 +++++----
>>   1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
>> index 95038ac3382e..f41fcee35f70 100644
>> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
>> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
>> @@ -494,8 +494,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
>>   	if (r)
>>   		goto release_sg;
>>   
>> -	drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
>> -					 gtt->ttm.dma_address, ttm->num_pages);
>> +	drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
>> +					 ttm->num_pages);
>>   
>>   	return 0;
>>   
>> @@ -673,8 +673,9 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
>>   	}
>>   
>>   	if (slave && ttm->sg) {
>> -		drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
>> -						 gtt->ttm.dma_address, ttm->num_pages);
>> +		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
>> +						 gtt->ttm.dma_address,
>> +						 ttm->num_pages);
>>   		return 0;
>>   	}
>>   
>> -- 
>> 2.25.1
>>

* Re: [PATCH 2/4] drm/amdgpu: stop using pages with drm_prime_sg_to_page_addr_arrays
From: Daniel Vetter @ 2020-11-05 12:55 UTC
  To: Christian König; +Cc: dri-devel

On Wed, Nov 04, 2020 at 02:00:22PM +0100, Christian König wrote:
> This is deprecated.
> 
> Signed-off-by: Christian König <christian.koenig@amd.com>

Ok, you convinced me that amdgpu is safe. Maybe for completeness paste the
explanation you put into the other mail into the commit message here (just
pasting the entire thing seems fine).

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index c01c060e4ac5..d6781b479839 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -1036,8 +1036,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
>  		goto release_sg;
>  
>  	/* convert SG to linear array of pages and dma addresses */
> -	drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
> -					 gtt->ttm.dma_address, ttm->num_pages);
> +	drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> +					 ttm->num_pages);
>  
>  	return 0;
>  
> @@ -1382,7 +1382,7 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
>  			ttm->sg = sgt;
>  		}
>  
> -		drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
> +		drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
>  						 gtt->ttm.dma_address,
>  						 ttm->num_pages);
>  		return 0;
> -- 
> 2.25.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: [PATCH 1/4] drm/radeon: stop using pages with drm_prime_sg_to_page_addr_arrays
From: Daniel Vetter @ 2020-11-05 13:01 UTC
  To: Christian König; +Cc: dri-devel

On Thu, Nov 5, 2020 at 1:12 PM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:
>
> On 04.11.20 at 18:38, Daniel Vetter wrote:
> > On Wed, Nov 04, 2020 at 02:00:21PM +0100, Christian König wrote:
> >> This is deprecated.
> >>
> >> Signed-off-by: Christian König <christian.koenig@amd.com>
> > So I tried to prove to myself that ttm doesn't access ->pages for these
> > cases, and kinda couldn't. We still seem to allocate the pages array and
> > all that, and there's lots of code using ->pages all over. And between
> > ttm_bo_type_sg and TTM_PAGE_FLAG_SG I didn't manage to chase a whole lot
> > of paths to their full conclusion.
>
> Nope, see the amdgpu code:
>
> > if (ttm_sg_tt_init(&gtt->ttm, bo, page_flags, caching)) {
>
> And then what ttm_sg_tt_init() does:
>
> >         if (page_flags & TTM_PAGE_FLAG_SG)
> >                 ret = ttm_sg_tt_alloc_page_directory(ttm);
> >         else
> >                 ret = ttm_dma_tt_alloc_page_directory(ttm);
>
> And then finally what ttm_sg_tt_alloc_page_directory() does:
>
> ttm->dma_address = kvmalloc_array(ttm->num_pages,....
>
> ttm->pages should be NULL in this case if I'm not completely mistaken.
>
> For imported DMA-bufs we haven't had a ttm->pages in amdgpu for
> quite a while and that works perfectly fine.
>
>
>
>
> > So I reduced my ambitions and wanted to prove that at least for dma-buf
> > imports aka ttm_bo_type_sg, we're guaranteed that we don't try to mmap
> > these to userspace. And also failed to find that check.
>
> See ttm_bo_vm_reserve():
> >         /*
> >          * Refuse to fault imported pages. This should be handled
> >          * (if at all) by redirecting mmap to the exporter.
> >          */
> >         if (bo->ttm && (bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) {
> >                 dma_resv_unlock(bo->base.resv);
> >                 return VM_FAULT_SIGBUS;
> >         }
>
>
> > btw this is across all drivers, mostly ttm code, not radeon specific.

Ok, this makes sense, and you can have my r-b for the amdgpu patch.

> Well instead of those patches we could as well switch over radeon and
> nouveau to using ttm_sg_tt_init() as well (or merge ttm_sg_tt_init()
> into ttm_dma_tt_init).
>
> That's probably the much cleaner approach, but when I wrote
> ttm_sg_tt_init() I wasn't sure if radeon/nouveau where using ttm->pages
> for something, e.g. basically the same concern you have.

I guess that might be useful as an intermediate step. Still a pile of
work to review drivers, but at least the ttm paths work all the same.

One thing that's somewhat annoying here is that ttm_bo_type_sg
conflates how the backing memory is stored (it's in an sg list) with
where it's from (it also means "this is a dma-buf, don't look at the
cpu side, don't mmap it"). That's kinda awkward, just in case you have
a driver that might want to use sg lists for everything to not
duplicate code - ttm_tt otoh seems to be the "everything goes"
combinatorial chaos. But that's an entirely unrelated rant :-)

Then there's also the issue that dma-buf import goes through a
bazillion layers with lots of driver duplication, which isn't helping
in reviewing to make sure they really all end up with ttm_bo_type_sg.
But I think that's the case from what I can see after lots of
grepping. Maybe ttm_bo_init should have a WARN_ON if there's an sg but
type isn't ttm_bo_type_sg.
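
Such a check would be a one-liner at the top of ttm_bo_init(), using its
existing sg and type parameters (a hypothetical sketch, not actual
upstream code):

	if (WARN_ON(sg && type != ttm_bo_type_sg))
		return -EINVAL;
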
-Daniel

>
> Regards,
> Christian.
>
> > So conclusion, still a mess here that at least I can't see through clearly
> > :-/ here = ttm_tt and the entire backing storage handling and everything
> > that ties into it. Probably the area that still has the most midlayer feel
> > to ttm, with all the refactoring still in flight.
> >
> > tldr; tried to review patches 1-3, gave up.
> >
> > Cheers, Daniel
> >
> >> ---
> >>   drivers/gpu/drm/radeon/radeon_ttm.c | 9 +++++----
> >>   1 file changed, 5 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> >> index 95038ac3382e..f41fcee35f70 100644
> >> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> >> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> >> @@ -494,8 +494,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
> >>      if (r)
> >>              goto release_sg;
> >>
> >> -    drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
> >> -                                     gtt->ttm.dma_address, ttm->num_pages);
> >> +    drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> >> +                                     ttm->num_pages);
> >>
> >>      return 0;
> >>
> >> @@ -673,8 +673,9 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
> >>      }
> >>
> >>      if (slave && ttm->sg) {
> >> -            drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
> >> -                                             gtt->ttm.dma_address, ttm->num_pages);
> >> +            drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> >> +                                             gtt->ttm.dma_address,
> >> +                                             ttm->num_pages);
> >>              return 0;
> >>      }
> >>
> >> --
> >> 2.25.1
> >>
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: [PATCH 1/4] drm/radeon: stop using pages with drm_prime_sg_to_page_addr_arrays
From: Daniel Vetter @ 2020-11-05 13:05 UTC
  To: Christian König; +Cc: dri-devel

On Thu, Nov 5, 2020 at 2:01 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Thu, Nov 5, 2020 at 1:12 PM Christian König
> <ckoenig.leichtzumerken@gmail.com> wrote:
> >
> > On 04.11.20 at 18:38, Daniel Vetter wrote:
> > > On Wed, Nov 04, 2020 at 02:00:21PM +0100, Christian König wrote:
> > >> This is deprecated.
> > >>
> > >> Signed-off-by: Christian König <christian.koenig@amd.com>
> > > So I tried to prove to myself that ttm doesn't access ->pages for these
> > > cases, and kinda couldn't. We still seem to allocate the pages array and
> > > all that, and there's lots of code using ->pages all over. And between
> > > ttm_bo_type_sg and TTM_PAGE_FLAG_SG I didn't manage to chase a whole lot
> > > of paths to their full conclusion.
> >
> > Nope, see the amdgpu code:
> >
> > > if (ttm_sg_tt_init(&gtt->ttm, bo, page_flags, caching)) {
> >
> > And then what ttm_sg_tt_init() does:
> >
> > >         if (page_flags & TTM_PAGE_FLAG_SG)
> > >                 ret = ttm_sg_tt_alloc_page_directory(ttm);
> > >         else
> > >                 ret = ttm_dma_tt_alloc_page_directory(ttm);
> >
> > And then finally what ttm_sg_tt_alloc_page_directory() does:
> >
> > ttm->dma_address = kvmalloc_array(ttm->num_pages,....
> >
> > ttm->pages should be NULL in this case if I'm not completely mistaken.
> >
> > For imported DMA-bufs we haven't had a ttm->pages in amdgpu for
> > quite a while and that works perfectly fine.
> >
> >
> >
> >
> > > So I reduced my ambitions and wanted to prove that at least for dma-buf
> > > imports aka ttm_bo_type_sg, we're guaranteed that we don't try to mmap
> > > these to userspace. And also failed to find that check.
> >
> > See ttm_bo_vm_reserve():
> > >         /*
> > >          * Refuse to fault imported pages. This should be handled
> > >          * (if at all) by redirecting mmap to the exporter.
> > >          */
> > >         if (bo->ttm && (bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) {
> > >                 dma_resv_unlock(bo->base.resv);
> > >                 return VM_FAULT_SIGBUS;
> > >         }

E.g. I think if this here checked for gem_bo->import_attach
instead, that would be a ton clearer. And we can do that nowadays I
think, since vmwgfx doesn't import foreign dma_bufs, ever.
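
Sketched, that variant of the ttm_bo_vm_reserve() check would look like
this, assuming the TTM BO embeds a GEM object as bo->base (true for the
drivers discussed here):

	/* refuse to fault imported buffers, keyed off the dma-buf
	 * attachment instead of TTM_PAGE_FLAG_SG
	 */
	if (bo->base.import_attach) {
		dma_resv_unlock(bo->base.resv);
		return VM_FAULT_SIGBUS;
	}
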
-Daniel

> > > btw this is across all drivers, mostly ttm code, not radeon specific.
>
> Ok, this makes sense, and you can have my r-b for the amdgpu patch.
>
> > Well, instead of those patches we could just as well switch radeon and
> > nouveau over to using ttm_sg_tt_init() (or merge ttm_sg_tt_init()
> > into ttm_dma_tt_init()).
> >
> > That's probably the much cleaner approach, but when I wrote
> > ttm_sg_tt_init() I wasn't sure if radeon/nouveau were using ttm->pages
> > for something, e.g. basically the same concern you have.
>
> I guess that might be useful as an intermediate step. Still a pile of
> work to review drivers, but at least the ttm paths work all the same.
>
> One thing that's somewhat annoying here is that ttm_bo_type_sg
> conflates how the backing memory is stored (it's in an sg list) with
> where it's from (it also means "this is a dma-buf, don't look at the
> cpu side, don't mmap it"). That's kinda awkward, just in case you have
> a driver that might want to use sg lists for everything to not
> duplicate code - ttm_tt otoh seems to be the "everything goes"
> combinatorial chaos. But that's an entirely unrelated rant :-)
>
> Then there's also the issue that dma-buf import goes through a
> bazillion layers with lots of driver duplication, which isn't helping
> in reviewing to make sure they really all end up with ttm_bo_type_sg.
> But I think that's the case from what I can see after lots of
> grepping. Maybe ttm_bo_init should have a WARN_ON if there's an sg but
> type isn't ttm_bo_type_sg.
> -Daniel
>
> >
> > Regards,
> > Christian.
> >
> > > So conclusion, still a mess here that at least I can't see through clearly
> > > :-/ here = ttm_tt and the entire backing storage handling and everything
> > > that ties into it. Probably the area that still has the most midlayer feel
> > > to ttm, with all the refactoring still in flight.
> > >
> > > tldr; tried to review patches 1-3, gave up.
> > >
> > > Cheers, Daniel
> > >
> > >> ---
> > >>   drivers/gpu/drm/radeon/radeon_ttm.c | 9 +++++----
> > >>   1 file changed, 5 insertions(+), 4 deletions(-)
> > >>
> > >> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> > >> index 95038ac3382e..f41fcee35f70 100644
> > >> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> > >> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> > >> @@ -494,8 +494,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
> > >>      if (r)
> > >>              goto release_sg;
> > >>
> > >> -    drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
> > >> -                                     gtt->ttm.dma_address, ttm->num_pages);
> > >> +    drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> > >> +                                     ttm->num_pages);
> > >>
> > >>      return 0;
> > >>
> > >> @@ -673,8 +673,9 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
> > >>      }
> > >>
> > >>      if (slave && ttm->sg) {
> > >> -            drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
> > >> -                                             gtt->ttm.dma_address, ttm->num_pages);
> > >> +            drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > >> +                                             gtt->ttm.dma_address,
> > >> +                                             ttm->num_pages);
> > >>              return 0;
> > >>      }
> > >>
> > >> --
> > >> 2.25.1
> > >>
> >
>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch