* [PATCH 1/6] mm: mmap: fix fput in error path
@ 2020-10-09 15:03 ` Christian König
0 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2020-10-09 15:03 UTC (permalink / raw)
To: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard, jgg
Patch "495c10cc1c0c CHROMIUM: dma-buf: restore args..."
adds a workaround for a bug in mmap_region.
As the comment states, the ->mmap() callback can change
vma->vm_file, so we might call fput() on the wrong file.
Revert the workaround and properly fix this in mmap_region.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
drivers/dma-buf/dma-buf.c | 22 +++++-----------------
mm/mmap.c | 2 +-
2 files changed, 6 insertions(+), 18 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index a6ba4d598f0e..edd57402a48a 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1143,9 +1143,6 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
- struct file *oldfile;
- int ret;
-
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@@ -1163,22 +1160,13 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
return -EINVAL;
/* readjust the vma */
- get_file(dmabuf->file);
- oldfile = vma->vm_file;
- vma->vm_file = dmabuf->file;
- vma->vm_pgoff = pgoff;
+ if (vma->vm_file)
+ fput(vma->vm_file);
- ret = dmabuf->ops->mmap(dmabuf, vma);
- if (ret) {
- /* restore old parameters on failure */
- vma->vm_file = oldfile;
- fput(dmabuf->file);
- } else {
- if (oldfile)
- fput(oldfile);
- }
- return ret;
+ vma->vm_file = get_file(dmabuf->file);
+ vma->vm_pgoff = pgoff;
+ return dmabuf->ops->mmap(dmabuf, vma);
}
EXPORT_SYMBOL_GPL(dma_buf_mmap);
diff --git a/mm/mmap.c b/mm/mmap.c
index 40248d84ad5f..3a2670d73355 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1852,8 +1852,8 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
return addr;
unmap_and_free_vma:
+ fput(vma->vm_file);
vma->vm_file = NULL;
- fput(file);
/* Undo any partial mapping done by a device driver. */
unmap_region(mm, vma, prev, vma->vm_start, vma->vm_end);
--
2.17.1
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 2/6] mm: introduce vma_set_file function v3
@ 2020-10-09 15:03 ` Christian König
-1 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2020-10-09 15:03 UTC (permalink / raw)
To: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard, jgg
Add the new vma_set_file() function to allow changing
vma->vm_file with the necessary refcount dance.
v2: add more users of this.
v3: add missing EXPORT_SYMBOL, rebase on mmap cleanup,
add comments explaining why we drop the reference in two places.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> (v2)
---
drivers/dma-buf/dma-buf.c | 5 +----
drivers/gpu/drm/etnaviv/etnaviv_gem.c | 4 +---
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 3 +--
drivers/gpu/drm/i915/gem/i915_gem_mman.c | 5 +++--
drivers/gpu/drm/msm/msm_gem.c | 4 +---
drivers/gpu/drm/omapdrm/omap_gem.c | 3 +--
drivers/gpu/drm/vgem/vgem_drv.c | 3 +--
drivers/staging/android/ashmem.c | 6 +++---
include/linux/mm.h | 2 ++
mm/mmap.c | 15 +++++++++++++++
10 files changed, 29 insertions(+), 21 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index edd57402a48a..8e6a114c6034 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1160,10 +1160,7 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
return -EINVAL;
/* readjust the vma */
- if (vma->vm_file)
- fput(vma->vm_file);
-
- vma->vm_file = get_file(dmabuf->file);
+ vma_set_file(vma, dmabuf->file);
vma->vm_pgoff = pgoff;
return dmabuf->ops->mmap(dmabuf, vma);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index 312e9d58d5a7..10ce267c0947 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -145,10 +145,8 @@ static int etnaviv_gem_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
* address_space (so unmap_mapping_range does what we want,
* in particular in the case of mmap'd dmabufs)
*/
- fput(vma->vm_file);
- get_file(etnaviv_obj->base.filp);
vma->vm_pgoff = 0;
- vma->vm_file = etnaviv_obj->base.filp;
+ vma_set_file(vma, etnaviv_obj->base.filp);
vma->vm_page_prot = vm_page_prot;
}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index fec0e1e3dc3e..8ce4c9e28b87 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -119,8 +119,7 @@ static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *
if (ret)
return ret;
- fput(vma->vm_file);
- vma->vm_file = get_file(obj->base.filp);
+ vma_set_file(vma, obj->base.filp);
return 0;
}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index 3d69e51f3e4d..ec28a6cde49b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -893,8 +893,9 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
* requires avoiding extraneous references to their filp, hence why
* we prefer to use an anonymous file for their mmaps.
*/
- fput(vma->vm_file);
- vma->vm_file = anon;
+ vma_set_file(vma, anon);
+ /* Drop the initial creation reference, the vma is now holding one. */
+ fput(anon);
switch (mmo->mmap_type) {
case I915_MMAP_TYPE_WC:
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index de915ff6f4b4..a71f42870d5e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -223,10 +223,8 @@ int msm_gem_mmap_obj(struct drm_gem_object *obj,
* address_space (so unmap_mapping_range does what we want,
* in particular in the case of mmap'd dmabufs)
*/
- fput(vma->vm_file);
- get_file(obj->filp);
vma->vm_pgoff = 0;
- vma->vm_file = obj->filp;
+ vma_set_file(vma, obj->filp);
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
}
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index 979d53a93c2b..0d4542ff1d7d 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -564,9 +564,8 @@ int omap_gem_mmap_obj(struct drm_gem_object *obj,
* address_space (so unmap_mapping_range does what we want,
* in particular in the case of mmap'd dmabufs)
*/
- fput(vma->vm_file);
vma->vm_pgoff = 0;
- vma->vm_file = get_file(obj->filp);
+ vma_set_file(vma, obj->filp);
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
}
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..ea0eecae5153 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -397,8 +397,7 @@ static int vgem_prime_mmap(struct drm_gem_object *obj,
if (ret)
return ret;
- fput(vma->vm_file);
- vma->vm_file = get_file(obj->filp);
+ vma_set_file(vma, obj->filp);
vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
index 10b4be1f3e78..4789d36ddfd3 100644
--- a/drivers/staging/android/ashmem.c
+++ b/drivers/staging/android/ashmem.c
@@ -450,9 +450,9 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
vma_set_anonymous(vma);
}
- if (vma->vm_file)
- fput(vma->vm_file);
- vma->vm_file = asma->file;
+ vma_set_file(vma, asma->file);
+ /* XXX: merge this with the get_file() above if possible */
+ fput(asma->file);
out:
mutex_unlock(&ashmem_mutex);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ca6e6a81576b..f7a005153d02 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2693,6 +2693,8 @@ static inline void vma_set_page_prot(struct vm_area_struct *vma)
}
#endif
+void vma_set_file(struct vm_area_struct *vma, struct file *file);
+
#ifdef CONFIG_NUMA_BALANCING
unsigned long change_prot_numa(struct vm_area_struct *vma,
unsigned long start, unsigned long end);
diff --git a/mm/mmap.c b/mm/mmap.c
index 3a2670d73355..19cd37c3ebac 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -136,6 +136,21 @@ void vma_set_page_prot(struct vm_area_struct *vma)
WRITE_ONCE(vma->vm_page_prot, vm_page_prot);
}
+/*
+ * Change backing file, only valid to use during initial VMA setup.
+ */
+void vma_set_file(struct vm_area_struct *vma, struct file *file)
+{
+ if (file)
+ get_file(file);
+
+ swap(vma->vm_file, file);
+
+ if (file)
+ fput(file);
+}
+EXPORT_SYMBOL(vma_set_file);
+
/*
* Requires inode->i_mapping->i_mmap_rwsem
*/
--
2.17.1
* [PATCH 3/6] drm/radeon: stop using pages with drm_prime_sg_to_page_addr_arrays
@ 2020-10-09 15:03 ` Christian König
-1 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2020-10-09 15:03 UTC (permalink / raw)
To: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard, jgg
This is deprecated.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/radeon/radeon_ttm.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 63e38b05a5bc..4b92cdbcd29b 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -474,8 +474,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
if (r)
goto release_sg;
- drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
- gtt->ttm.dma_address, ttm->num_pages);
+ drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
+ ttm->num_pages);
return 0;
@@ -642,8 +642,9 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
}
if (slave && ttm->sg) {
- drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
- gtt->ttm.dma_address, ttm->num_pages);
+ drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
+ gtt->ttm.dma_address,
+ ttm->num_pages);
ttm_tt_set_populated(ttm);
return 0;
}
--
2.17.1
* [PATCH 4/6] drm/amdgpu: stop using pages with drm_prime_sg_to_page_addr_arrays
@ 2020-10-09 15:03 ` Christian König
-1 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2020-10-09 15:03 UTC (permalink / raw)
To: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard, jgg
This is deprecated.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 399961035ae6..ac463e706b19 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1011,8 +1011,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
goto release_sg;
/* convert SG to linear array of pages and dma addresses */
- drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
- gtt->ttm.dma_address, ttm->num_pages);
+ drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
+ ttm->num_pages);
return 0;
@@ -1345,7 +1345,7 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
ttm->sg = sgt;
}
- drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
+ drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
gtt->ttm.dma_address,
ttm->num_pages);
ttm_tt_set_populated(ttm);
--
2.17.1
* [PATCH 5/6] drm/nouveau: stop using pages with drm_prime_sg_to_page_addr_arrays
@ 2020-10-09 15:03 ` Christian König
-1 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2020-10-09 15:03 UTC (permalink / raw)
To: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard, jgg
This is deprecated; also drop the comment about faults.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 0c0ca44a6802..e378bb491688 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1299,9 +1299,9 @@ nouveau_ttm_tt_populate(struct ttm_bo_device *bdev,
return 0;
if (slave && ttm->sg) {
- /* make userspace faulting work */
- drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
- ttm_dma->dma_address, ttm->num_pages);
+ drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
+ ttm_dma->dma_address,
+ ttm->num_pages);
ttm_tt_set_populated(ttm);
return 0;
}
--
2.17.1
* [PATCH 6/6] drm/prime: document that using the page array is deprecated v2
@ 2020-10-09 15:03 ` Christian König
-1 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2020-10-09 15:03 UTC (permalink / raw)
To: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard, jgg
We have recurring requests on this, so better document that
this approach doesn't work and that dma_buf_mmap() needs to be used instead.
v2: split it into two functions
Signed-off-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 ++-
drivers/gpu/drm/drm_prime.c | 67 +++++++++++++++------
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 3 +-
drivers/gpu/drm/msm/msm_gem.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_bo.c | 5 +-
drivers/gpu/drm/radeon/radeon_ttm.c | 9 ++-
drivers/gpu/drm/vgem/vgem_drv.c | 3 +-
drivers/gpu/drm/vkms/vkms_gem.c | 2 +-
drivers/gpu/drm/xen/xen_drm_front_gem.c | 4 +-
include/drm/drm_prime.h | 7 ++-
10 files changed, 69 insertions(+), 42 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index ac463e706b19..6a65490de391 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1011,8 +1011,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
goto release_sg;
/* convert SG to linear array of pages and dma addresses */
- drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
- ttm->num_pages);
+ drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+ ttm->num_pages);
return 0;
@@ -1345,9 +1345,8 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
ttm->sg = sgt;
}
- drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
- gtt->ttm.dma_address,
- ttm->num_pages);
+ drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+ ttm->num_pages);
ttm_tt_set_populated(ttm);
return 0;
}
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 4910c446db83..8b750c074494 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -954,27 +954,25 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
EXPORT_SYMBOL(drm_gem_prime_import);
/**
- * drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
+ * drm_prime_sg_to_page_array - convert an sg table into a page array
* @sgt: scatter-gather table to convert
- * @pages: optional array of page pointers to store the page array in
- * @addrs: optional array to store the dma bus address of each page
- * @max_entries: size of both the passed-in arrays
+ * @pages: array of page pointers to store the pages in
+ * @max_entries: size of the passed-in array
*
- * Exports an sg table into an array of pages and addresses. This is currently
* [PATCH 6/6] drm/prime: document that using the page array is deprecated v2
From: Christian König @ 2020-10-09 15:03 UTC (permalink / raw)
To: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard, jgg
We have recurring requests on this, so better document that
this approach doesn't work and that dma_buf_mmap() needs to be used instead.
v2: split it into two functions
Signed-off-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 ++-
drivers/gpu/drm/drm_prime.c | 67 +++++++++++++++------
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 3 +-
drivers/gpu/drm/msm/msm_gem.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_bo.c | 5 +-
drivers/gpu/drm/radeon/radeon_ttm.c | 9 ++-
drivers/gpu/drm/vgem/vgem_drv.c | 3 +-
drivers/gpu/drm/vkms/vkms_gem.c | 2 +-
drivers/gpu/drm/xen/xen_drm_front_gem.c | 4 +-
include/drm/drm_prime.h | 7 ++-
10 files changed, 69 insertions(+), 42 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index ac463e706b19..6a65490de391 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1011,8 +1011,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
goto release_sg;
/* convert SG to linear array of pages and dma addresses */
- drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
- ttm->num_pages);
+ drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+ ttm->num_pages);
return 0;
@@ -1345,9 +1345,8 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
ttm->sg = sgt;
}
- drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
- gtt->ttm.dma_address,
- ttm->num_pages);
+ drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+ ttm->num_pages);
ttm_tt_set_populated(ttm);
return 0;
}
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 4910c446db83..8b750c074494 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -954,27 +954,25 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
EXPORT_SYMBOL(drm_gem_prime_import);
/**
- * drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
+ * drm_prime_sg_to_page_array - convert an sg table into a page array
* @sgt: scatter-gather table to convert
- * @pages: optional array of page pointers to store the page array in
- * @addrs: optional array to store the dma bus address of each page
- * @max_entries: size of both the passed-in arrays
+ * @pages: array of page pointers to store the pages in
+ * @max_entries: size of the passed-in array
*
- * Exports an sg table into an array of pages and addresses. This is currently
- * required by the TTM driver in order to do correct fault handling.
+ * Exports an sg table into an array of pages.
*
- * Drivers can use this in their &drm_driver.gem_prime_import_sg_table
- * implementation.
+ * This function is deprecated; its use is strongly discouraged.
+ * The page array is only useful for page faults and those can corrupt fields
+ * in the struct page if they are not handled by the exporting driver.
*/
-int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
- dma_addr_t *addrs, int max_entries)
+int __deprecated drm_prime_sg_to_page_array(struct sg_table *sgt,
+ struct page **pages,
+ int max_entries)
{
unsigned count;
struct scatterlist *sg;
struct page *page;
u32 page_len, page_index;
- dma_addr_t addr;
- u32 dma_len, dma_index;
/*
* Scatterlist elements contains both pages and DMA addresses, but
@@ -984,14 +982,11 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
* described by the sg_dma_address(sg).
*/
page_index = 0;
- dma_index = 0;
for_each_sg(sgt->sgl, sg, sgt->nents, count) {
page_len = sg->length;
page = sg_page(sg);
- dma_len = sg_dma_len(sg);
- addr = sg_dma_address(sg);
- while (pages && page_len > 0) {
+ while (page_len > 0) {
if (WARN_ON(page_index >= max_entries))
return -1;
pages[page_index] = page;
@@ -999,7 +994,43 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
page_len -= PAGE_SIZE;
page_index++;
}
- while (addrs && dma_len > 0) {
+ }
+ return 0;
+}
+EXPORT_SYMBOL(drm_prime_sg_to_page_array);
+
+/**
+ * drm_prime_sg_to_dma_addr_array - convert an sg table into a dma addr array
+ * @sgt: scatter-gather table to convert
+ * @addrs: array to store the dma bus address of each page
+ * @max_entries: size of both the passed-in arrays
+ *
+ * Exports an sg table into an array of addresses.
+ *
+ * Drivers should use this in their &drm_driver.gem_prime_import_sg_table
+ * implementation.
+ */
+int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
+ int max_entries)
+{
+ struct scatterlist *sg;
+ u32 dma_len, dma_index;
+ dma_addr_t addr;
+ unsigned count;
+
+ /*
+	 * Scatterlist elements contain both pages and DMA addresses, but
+	 * one should not assume a 1:1 relation between them. The sg->length is
+ * the size of the physical memory chunk described by the sg->page,
+ * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
+ * described by the sg_dma_address(sg).
+ */
+ dma_index = 0;
+ for_each_sg(sgt->sgl, sg, sgt->nents, count) {
+ dma_len = sg_dma_len(sg);
+ addr = sg_dma_address(sg);
+
+ while (dma_len > 0) {
if (WARN_ON(dma_index >= max_entries))
return -1;
addrs[dma_index] = addr;
@@ -1010,7 +1041,7 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
}
return 0;
}
-EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
+EXPORT_SYMBOL(drm_prime_sg_to_dma_addr_array);
/**
* drm_prime_gem_destroy - helper to clean up a PRIME-imported GEM object
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 135fbff6fecf..8c04b8e8054c 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -133,8 +133,7 @@ struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
goto fail;
}
- ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages,
- NULL, npages);
+ ret = drm_prime_sg_to_page_array(sgt, etnaviv_obj->pages, npages);
if (ret)
goto fail;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index a71f42870d5e..616b87641740 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -1174,7 +1174,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
goto fail;
}
- ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
+ ret = drm_prime_sg_to_page_array(sgt, msm_obj->pages, npages);
if (ret) {
mutex_unlock(&msm_obj->lock);
goto fail;
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index e378bb491688..835edd74ef59 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1299,9 +1299,8 @@ nouveau_ttm_tt_populate(struct ttm_bo_device *bdev,
return 0;
if (slave && ttm->sg) {
- drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
- ttm_dma->dma_address,
- ttm->num_pages);
+ drm_prime_sg_to_dma_addr_array(ttm->sg, ttm_dma->dma_address,
+ ttm->num_pages);
ttm_tt_set_populated(ttm);
return 0;
}
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 4b92cdbcd29b..7997e4564576 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -474,8 +474,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
if (r)
goto release_sg;
- drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
- ttm->num_pages);
+ drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+ ttm->num_pages);
return 0;
@@ -642,9 +642,8 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
}
if (slave && ttm->sg) {
- drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
- gtt->ttm.dma_address,
- ttm->num_pages);
+ drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+ ttm->num_pages);
ttm_tt_set_populated(ttm);
return 0;
}
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index ea0eecae5153..e505e5a291b3 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -356,8 +356,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
}
obj->pages_pin_count++; /* perma-pinned */
- drm_prime_sg_to_page_addr_arrays(obj->table, obj->pages, NULL,
- npages);
+ drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
return &obj->base;
}
diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
index 19a0e260a4df..a2ff21f47101 100644
--- a/drivers/gpu/drm/vkms/vkms_gem.c
+++ b/drivers/gpu/drm/vkms/vkms_gem.c
@@ -256,6 +256,6 @@ vkms_prime_import_sg_table(struct drm_device *dev,
return ERR_PTR(-ENOMEM);
}
- drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, npages);
+ drm_prime_sg_to_page_array(sg, obj->pages, npages);
return &obj->gem;
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f3830a0d1808..f4150ddfc5e2 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -220,8 +220,8 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
xen_obj->sgt_imported = sgt;
- ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
- NULL, xen_obj->num_pages);
+ ret = drm_prime_sg_to_page_array(sgt, xen_obj->pages,
+ xen_obj->num_pages);
if (ret < 0)
return ERR_PTR(ret);
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index 093f760cc131..4bda9ab3a3bb 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -103,8 +103,9 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg);
-int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
- dma_addr_t *addrs, int max_pages);
-
+int drm_prime_sg_to_page_array(struct sg_table *sgt, struct page **pages,
+ int max_pages);
+int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
+ int max_pages);
#endif /* __DRM_PRIME_H__ */
--
2.17.1
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
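The comment in the new drm_prime_sg_to_dma_addr_array() stresses that sg->length and sg_dma_len(sg) need not map 1:1. The chunk-splitting walk can be modeled in plain userspace C; struct sg_entry below is a hypothetical stand-in for the kernel scatterlist (the real API uses for_each_sg()/sg_dma_address()/sg_dma_len()), and the model returns the number of entries filled rather than 0, purely so it is easy to check:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Hypothetical stand-in for one scatterlist entry: a DMA chunk that
 * may cover several pages. Not the kernel's struct scatterlist. */
struct sg_entry {
	uint64_t dma_address;   /* models sg_dma_address(sg) */
	uint32_t dma_len;       /* models sg_dma_len(sg), page-aligned */
};

/* Model of the helper's inner loop: every DMA chunk is fanned out into
 * one page-sized address per page, mirroring the kernel walk's
 * addr += PAGE_SIZE / dma_len -= PAGE_SIZE. Returns the number of
 * addresses written, or -1 on overflow (the kernel version returns 0
 * on success). */
static int sg_to_dma_addr_array(const struct sg_entry *sg, int nents,
				uint64_t *addrs, int max_entries)
{
	int dma_index = 0;

	for (int i = 0; i < nents; i++) {
		uint64_t addr = sg[i].dma_address;
		uint32_t dma_len = sg[i].dma_len;

		while (dma_len > 0) {
			if (dma_index >= max_entries)
				return -1;
			addrs[dma_index++] = addr;
			addr += PAGE_SIZE;
			dma_len -= PAGE_SIZE;
		}
	}
	return dma_index;
}
```

A two-entry table with an 8 KiB chunk and a 4 KiB chunk yields three page addresses, which is why max_entries has to be sized for pages, not for scatterlist entries.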
* Re: [PATCH 2/6] mm: introduce vma_set_file function v3
From: Jason Gunthorpe @ 2020-10-09 15:14 UTC (permalink / raw)
To: Christian König
Cc: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard
On Fri, Oct 09, 2020 at 05:03:38PM +0200, Christian König wrote:
> +/*
> + * Change backing file, only valid to use during initial VMA setup.
> + */
> +void vma_set_file(struct vm_area_struct *vma, struct file *file)
> +{
> + if (file)
> + get_file(file);
> +
> + swap(vma->vm_file, file);
> +
> + if (file)
> + fput(file);
> +}
fput() crashes when file is NULL, so the error handling after
unmap_and_free_vma: can't handle this case; similarly, vm_file can't be
NULL either.
So just simply:
swap(vma->vm_file, file);
get_file(vma->vm_file);
fput(file);
Will do?
Just let it crash if any of them are wrongly NULL.
Jason
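Jason's suggested ordering (swap first, then adjust the refcounts) can be sanity-checked with a small userspace model. The struct and helper names below mirror the kernel ones but are plain toy stand-ins (a non-atomic int instead of the real f_count and get_file()/fput()), purely to illustrate that the new file gains exactly one reference and the old one loses one:

```c
/* Toy userspace stand-ins for the kernel objects; the refcount is a
 * plain int here, unlike the kernel's atomic f_count. */
struct file { int refcount; };
struct vm_area_struct { struct file *vm_file; };

static void get_file(struct file *f) { f->refcount++; }
static void fput(struct file *f)     { f->refcount--; }

#define swap(a, b) \
	do { struct file *tmp__ = (a); (a) = (b); (b) = tmp__; } while (0)

/* Jason's proposed body: swap the pointers first so both stay valid,
 * then take a reference on the new vm_file and drop the one held on
 * the old. Deliberately no NULL checks -- a NULL file here is a
 * caller bug and should crash loudly. */
static void vma_set_file(struct vm_area_struct *vma, struct file *file)
{
	swap(vma->vm_file, file);
	get_file(vma->vm_file);
	fput(file);
}
```

After vma_set_file() on a vma that held a reference on the old file, the new file's count has gone up by one and the old file's count down by one, i.e. the vma's single reference is transferred and nothing is leaked.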
* Re: [PATCH 1/6] mm: mmap: fix fput in error path
From: Jason Gunthorpe @ 2020-10-09 15:15 UTC (permalink / raw)
To: Christian König
Cc: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard
On Fri, Oct 09, 2020 at 05:03:37PM +0200, Christian König wrote:
> Patch "495c10cc1c0c CHROMIUM: dma-buf: restore args..."
> adds a workaround for a bug in mmap_region.
>
> As the comment states ->mmap() callback can change
> vma->vm_file and so we might call fput() on the wrong file.
>
> Revert the workaround and properly fix this in mmap_region.
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
> drivers/dma-buf/dma-buf.c | 22 +++++-----------------
> mm/mmap.c | 2 +-
> 2 files changed, 6 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index a6ba4d598f0e..edd57402a48a 100644
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1143,9 +1143,6 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
> int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
> unsigned long pgoff)
> {
> - struct file *oldfile;
> - int ret;
> -
> if (WARN_ON(!dmabuf || !vma))
> return -EINVAL;
>
> @@ -1163,22 +1160,13 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
> return -EINVAL;
>
> /* readjust the vma */
> - get_file(dmabuf->file);
> - oldfile = vma->vm_file;
> - vma->vm_file = dmabuf->file;
> - vma->vm_pgoff = pgoff;
> + if (vma->vm_file)
> + fput(vma->vm_file);
This if is redundant too
But otherwise
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Thanks,
Jason
* Re: [PATCH 6/6] drm/prime: document that using the page array is deprecated v2
From: Daniel Vetter @ 2020-10-09 16:24 UTC (permalink / raw)
To: Christian König
Cc: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard, jgg
On Fri, Oct 09, 2020 at 05:03:42PM +0200, Christian König wrote:
> We have reoccurring requests on this so better document that
> this approach doesn't work and dma_buf_mmap() needs to be used instead.
>
> v2: split it into two functions
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
Patches 3-5:
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This one looks good, but you have it on a strange baseline. This doesn't
contain the sg walking fixes from Marek, so it reintroduces the bugs.
Probably need to request a backmerge chain, first of -rc8 into drm-next,
and then that into drm-misc-next.
Everything else in here lgtm.
-Daniel
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 ++-
> drivers/gpu/drm/drm_prime.c | 67 +++++++++++++++------
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 3 +-
> drivers/gpu/drm/msm/msm_gem.c | 2 +-
> drivers/gpu/drm/nouveau/nouveau_bo.c | 5 +-
> drivers/gpu/drm/radeon/radeon_ttm.c | 9 ++-
> drivers/gpu/drm/vgem/vgem_drv.c | 3 +-
> drivers/gpu/drm/vkms/vkms_gem.c | 2 +-
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 4 +-
> include/drm/drm_prime.h | 7 ++-
> 10 files changed, 69 insertions(+), 42 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index ac463e706b19..6a65490de391 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -1011,8 +1011,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
> goto release_sg;
>
> /* convert SG to linear array of pages and dma addresses */
> - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> - ttm->num_pages);
> + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> + ttm->num_pages);
>
> return 0;
>
> @@ -1345,9 +1345,8 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
> ttm->sg = sgt;
> }
>
> - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> - gtt->ttm.dma_address,
> - ttm->num_pages);
> + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> + ttm->num_pages);
> ttm_tt_set_populated(ttm);
> return 0;
> }
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 4910c446db83..8b750c074494 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -954,27 +954,25 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> EXPORT_SYMBOL(drm_gem_prime_import);
>
> /**
> - * drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
> + * drm_prime_sg_to_page_array - convert an sg table into a page array
> * @sgt: scatter-gather table to convert
> - * @pages: optional array of page pointers to store the page array in
> - * @addrs: optional array to store the dma bus address of each page
> - * @max_entries: size of both the passed-in arrays
> + * @pages: array of page pointers to store the pages in
> + * @max_entries: size of the passed-in array
> *
> - * Exports an sg table into an array of pages and addresses. This is currently
> - * required by the TTM driver in order to do correct fault handling.
> + * Exports an sg table into an array of pages.
> *
> - * Drivers can use this in their &drm_driver.gem_prime_import_sg_table
> - * implementation.
> + * This function is deprecated and strongly discouraged to be used.
> + * The page array is only useful for page faults and those can corrupt fields
> + * in the struct page if they are not handled by the exporting driver.
> */
> -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> - dma_addr_t *addrs, int max_entries)
> +int __deprecated drm_prime_sg_to_page_array(struct sg_table *sgt,
> + struct page **pages,
> + int max_entries)
> {
> unsigned count;
> struct scatterlist *sg;
> struct page *page;
> u32 page_len, page_index;
> - dma_addr_t addr;
> - u32 dma_len, dma_index;
>
> /*
> * Scatterlist elements contains both pages and DMA addresses, but
> @@ -984,14 +982,11 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> * described by the sg_dma_address(sg).
> */
> page_index = 0;
> - dma_index = 0;
> for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> page_len = sg->length;
> page = sg_page(sg);
> - dma_len = sg_dma_len(sg);
> - addr = sg_dma_address(sg);
>
> - while (pages && page_len > 0) {
> + while (page_len > 0) {
> if (WARN_ON(page_index >= max_entries))
> return -1;
> pages[page_index] = page;
> @@ -999,7 +994,43 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> page_len -= PAGE_SIZE;
> page_index++;
> }
> - while (addrs && dma_len > 0) {
> + }
> + return 0;
> +}
> +EXPORT_SYMBOL(drm_prime_sg_to_page_array);
> +
> +/**
> + * drm_prime_sg_to_dma_addr_array - convert an sg table into a dma addr array
> + * @sgt: scatter-gather table to convert
> + * @addrs: array to store the dma bus address of each page
> + * @max_entries: size of both the passed-in arrays
> + *
> + * Exports an sg table into an array of addresses.
> + *
> + * Drivers should use this in their &drm_driver.gem_prime_import_sg_table
s/should/can/
There's no requirement; if your driver just handles everything as an sgt,
there's no conversion needed.
> + * implementation.
> + */
> +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> + int max_entries)
> +{
> + struct scatterlist *sg;
> + u32 dma_len, dma_index;
> + dma_addr_t addr;
> + unsigned count;
> +
> + /*
> + * Scatterlist elements contains both pages and DMA addresses, but
> + * one shoud not assume 1:1 relation between them. The sg->length is
> + * the size of the physical memory chunk described by the sg->page,
> + * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
> + * described by the sg_dma_address(sg).
> + */
> + dma_index = 0;
> + for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> + dma_len = sg_dma_len(sg);
> + addr = sg_dma_address(sg);
> +
> + while (dma_len > 0) {
> if (WARN_ON(dma_index >= max_entries))
> return -1;
> addrs[dma_index] = addr;
> @@ -1010,7 +1041,7 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> }
> return 0;
> }
> -EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
> +EXPORT_SYMBOL(drm_prime_sg_to_dma_addr_array);
>
> /**
> * drm_prime_gem_destroy - helper to clean up a PRIME-imported GEM object
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 135fbff6fecf..8c04b8e8054c 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -133,8 +133,7 @@ struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> goto fail;
> }
>
> - ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages,
> - NULL, npages);
> + ret = drm_prime_sg_to_page_array(sgt, etnaviv_obj->pages, npages);
> if (ret)
> goto fail;
>
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index a71f42870d5e..616b87641740 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -1174,7 +1174,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
> goto fail;
> }
>
> - ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
> + ret = drm_prime_sg_to_page_array(sgt, msm_obj->pages, npages);
> if (ret) {
> mutex_unlock(&msm_obj->lock);
> goto fail;
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
> index e378bb491688..835edd74ef59 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
* Re: [PATCH 6/6] drm/prime: document that use of the page array is deprecated v2
@ 2020-10-09 16:24 ` Daniel Vetter
0 siblings, 0 replies; 29+ messages in thread
From: Daniel Vetter @ 2020-10-09 16:24 UTC (permalink / raw)
To: Christian König
Cc: jgg, jhubbard, linux-kernel, dri-devel, chris, linaro-mm-sig,
linux-mm, willy, airlied, akpm, linux-media
On Fri, Oct 09, 2020 at 05:03:42PM +0200, Christian König wrote:
> We have recurring requests on this, so better document that
> this approach doesn't work and dma_buf_mmap() needs to be used instead.
>
> v2: split it into two functions
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
Patches 3-5:
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This one looks good, but you have it on a strange baseline. This doesn't
contain the sg walking fixes from Marek, so reintroduces the bugs.
Probably need to request a backmerge chain, first of -rc8 into drm-next,
and then that into drm-misc-next.
Everything else in here lgtm.
-Daniel
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 ++-
> drivers/gpu/drm/drm_prime.c | 67 +++++++++++++++------
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 3 +-
> drivers/gpu/drm/msm/msm_gem.c | 2 +-
> drivers/gpu/drm/nouveau/nouveau_bo.c | 5 +-
> drivers/gpu/drm/radeon/radeon_ttm.c | 9 ++-
> drivers/gpu/drm/vgem/vgem_drv.c | 3 +-
> drivers/gpu/drm/vkms/vkms_gem.c | 2 +-
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 4 +-
> include/drm/drm_prime.h | 7 ++-
> 10 files changed, 69 insertions(+), 42 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index ac463e706b19..6a65490de391 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -1011,8 +1011,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
> goto release_sg;
>
> /* convert SG to linear array of pages and dma addresses */
> - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> - ttm->num_pages);
> + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> + ttm->num_pages);
>
> return 0;
>
> @@ -1345,9 +1345,8 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
> ttm->sg = sgt;
> }
>
> - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> - gtt->ttm.dma_address,
> - ttm->num_pages);
> + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> + ttm->num_pages);
> ttm_tt_set_populated(ttm);
> return 0;
> }
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 4910c446db83..8b750c074494 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -954,27 +954,25 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> EXPORT_SYMBOL(drm_gem_prime_import);
>
> /**
> - * drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
> + * drm_prime_sg_to_page_array - convert an sg table into a page array
> * @sgt: scatter-gather table to convert
> - * @pages: optional array of page pointers to store the page array in
> - * @addrs: optional array to store the dma bus address of each page
> - * @max_entries: size of both the passed-in arrays
> + * @pages: array of page pointers to store the pages in
> + * @max_entries: size of the passed-in array
> *
> - * Exports an sg table into an array of pages and addresses. This is currently
> - * required by the TTM driver in order to do correct fault handling.
> + * Exports an sg table into an array of pages.
> *
> - * Drivers can use this in their &drm_driver.gem_prime_import_sg_table
> - * implementation.
> + * This function is deprecated and its use is strongly discouraged.
> + * The page array is only useful for page faults and those can corrupt fields
> + * in the struct page if they are not handled by the exporting driver.
> */
> -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> - dma_addr_t *addrs, int max_entries)
> +int __deprecated drm_prime_sg_to_page_array(struct sg_table *sgt,
> + struct page **pages,
> + int max_entries)
> {
> unsigned count;
> struct scatterlist *sg;
> struct page *page;
> u32 page_len, page_index;
> - dma_addr_t addr;
> - u32 dma_len, dma_index;
>
> /*
> * Scatterlist elements contains both pages and DMA addresses, but
> @@ -984,14 +982,11 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> * described by the sg_dma_address(sg).
> */
> page_index = 0;
> - dma_index = 0;
> for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> page_len = sg->length;
> page = sg_page(sg);
> - dma_len = sg_dma_len(sg);
> - addr = sg_dma_address(sg);
>
> - while (pages && page_len > 0) {
> + while (page_len > 0) {
> if (WARN_ON(page_index >= max_entries))
> return -1;
> pages[page_index] = page;
> @@ -999,7 +994,43 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> page_len -= PAGE_SIZE;
> page_index++;
> }
> - while (addrs && dma_len > 0) {
> + }
> + return 0;
> +}
> +EXPORT_SYMBOL(drm_prime_sg_to_page_array);
> +
> +/**
> + * drm_prime_sg_to_dma_addr_array - convert an sg table into a dma addr array
> + * @sgt: scatter-gather table to convert
> + * @addrs: array to store the dma bus address of each page
> + * @max_entries: size of the passed-in array
> + *
> + * Exports an sg table into an array of addresses.
> + *
> + * Drivers should use this in their &drm_driver.gem_prime_import_sg_table
s/should/can/
There's no requirement, if your driver just handles everything as an sgt
there's no conversion needed.
> + * implementation.
> + */
> +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> + int max_entries)
> +{
> + struct scatterlist *sg;
> + u32 dma_len, dma_index;
> + dma_addr_t addr;
> + unsigned count;
> +
> + /*
> + * Scatterlist elements contain both pages and DMA addresses, but
> + * one should not assume a 1:1 relation between them. The sg->length is
> + * the size of the physical memory chunk described by the sg->page,
> + * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
> + * described by the sg_dma_address(sg).
> + */
> + dma_index = 0;
> + for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> + dma_len = sg_dma_len(sg);
> + addr = sg_dma_address(sg);
> +
> + while (dma_len > 0) {
> if (WARN_ON(dma_index >= max_entries))
> return -1;
> addrs[dma_index] = addr;
> @@ -1010,7 +1041,7 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> }
> return 0;
> }
> -EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
> +EXPORT_SYMBOL(drm_prime_sg_to_dma_addr_array);
>
> /**
> * drm_prime_gem_destroy - helper to clean up a PRIME-imported GEM object
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 135fbff6fecf..8c04b8e8054c 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -133,8 +133,7 @@ struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> goto fail;
> }
>
> - ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages,
> - NULL, npages);
> + ret = drm_prime_sg_to_page_array(sgt, etnaviv_obj->pages, npages);
> if (ret)
> goto fail;
>
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index a71f42870d5e..616b87641740 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -1174,7 +1174,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
> goto fail;
> }
>
> - ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
> + ret = drm_prime_sg_to_page_array(sgt, msm_obj->pages, npages);
> if (ret) {
> mutex_unlock(&msm_obj->lock);
> goto fail;
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
> index e378bb491688..835edd74ef59 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
> @@ -1299,9 +1299,8 @@ nouveau_ttm_tt_populate(struct ttm_bo_device *bdev,
> return 0;
>
> if (slave && ttm->sg) {
> - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> - ttm_dma->dma_address,
> - ttm->num_pages);
> + drm_prime_sg_to_dma_addr_array(ttm->sg, ttm_dma->dma_address,
> + ttm->num_pages);
> ttm_tt_set_populated(ttm);
> return 0;
> }
> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> index 4b92cdbcd29b..7997e4564576 100644
> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> @@ -474,8 +474,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
> if (r)
> goto release_sg;
>
> - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> - ttm->num_pages);
> + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> + ttm->num_pages);
>
> return 0;
>
> @@ -642,9 +642,8 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
> }
>
> if (slave && ttm->sg) {
> - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> - gtt->ttm.dma_address,
> - ttm->num_pages);
> + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> + ttm->num_pages);
> ttm_tt_set_populated(ttm);
> return 0;
> }
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index ea0eecae5153..e505e5a291b3 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -356,8 +356,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> }
>
> obj->pages_pin_count++; /* perma-pinned */
> - drm_prime_sg_to_page_addr_arrays(obj->table, obj->pages, NULL,
> - npages);
> + drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
> return &obj->base;
> }
>
> diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
> index 19a0e260a4df..a2ff21f47101 100644
> --- a/drivers/gpu/drm/vkms/vkms_gem.c
> +++ b/drivers/gpu/drm/vkms/vkms_gem.c
> @@ -256,6 +256,6 @@ vkms_prime_import_sg_table(struct drm_device *dev,
> return ERR_PTR(-ENOMEM);
> }
>
> - drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, npages);
> + drm_prime_sg_to_page_array(sg, obj->pages, npages);
> return &obj->gem;
> }
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index f3830a0d1808..f4150ddfc5e2 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -220,8 +220,8 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
>
> xen_obj->sgt_imported = sgt;
>
> - ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
> - NULL, xen_obj->num_pages);
> + ret = drm_prime_sg_to_page_array(sgt, xen_obj->pages,
> + xen_obj->num_pages);
> if (ret < 0)
> return ERR_PTR(ret);
>
> diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
> index 093f760cc131..4bda9ab3a3bb 100644
> --- a/include/drm/drm_prime.h
> +++ b/include/drm/drm_prime.h
> @@ -103,8 +103,9 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
>
> void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg);
>
> -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> - dma_addr_t *addrs, int max_pages);
> -
> +int drm_prime_sg_to_page_array(struct sg_table *sgt, struct page **pages,
> + int max_pages);
> +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> + int max_pages);
>
> #endif /* __DRM_PRIME_H__ */
> --
> 2.17.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 6/6] drm/prime: document that use of the page array is deprecated v2
2020-10-09 16:24 ` Daniel Vetter
@ 2020-10-09 16:27 ` Daniel Vetter
-1 siblings, 0 replies; 29+ messages in thread
From: Daniel Vetter @ 2020-10-09 16:27 UTC (permalink / raw)
To: Christian König, Thomas Zimmermann
Cc: Linux MM, Linux Kernel Mailing List,
moderated list:DMA BUFFER SHARING FRAMEWORK, dri-devel,
open list:DMA BUFFER SHARING FRAMEWORK, Wilson, Chris,
Dave Airlie, Andrew Morton, Sumit Semwal, Matthew Wilcox,
John Hubbard, Jason Gunthorpe
On Fri, Oct 9, 2020 at 6:24 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Fri, Oct 09, 2020 at 05:03:42PM +0200, Christian König wrote:
> > We have recurring requests on this, so better document that
> > this approach doesn't work and dma_buf_mmap() needs to be used instead.
> >
> > v2: split it into two functions
> >
> > Signed-off-by: Christian König <christian.koenig@amd.com>
>
> Patches 3-5:
>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> This one looks good, but you have it on a strange baseline. This doesn't
> contain the sg walking fixes from Marek, so reintroduces the bugs.
> Probably need to request a backmerge chain, first of -rc8 into drm-next,
> and then that into drm-misc-next.
Marek's patch is in drm-next, so just needs a backmerge into drm-misc-next.
Thomas, can you pls do that? We need 0552daac2d18f
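For anyone following along, the backmerge chain can be sketched in a throwaway repo (branch names as in this thread; the toy "rc8" branch stands in for v5.9-rc8, and the real maintainer workflow may differ):

```shell
set -e
cd "$(mktemp -d)"
git init -q tree && cd tree
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "common base"
git branch drm-next && git branch drm-misc-next
# "rc8" carries the fix that drm-misc-next is missing
git checkout -q -b rc8 && git commit -q --allow-empty -m "sg walking fix"

# Step 1: backmerge -rc8 into drm-next
git checkout -q drm-next && git merge -q --no-edit rc8
# Step 2: backmerge drm-next into drm-misc-next
git checkout -q drm-misc-next && git merge -q --no-edit drm-next

# The fix is now reachable from drm-misc-next:
git merge-base --is-ancestor rc8 drm-misc-next && echo "fix reachable"
```

The point of the chaining is that drm-misc-next never merges the -rc tag directly; it picks the fix up via drm-next so the trees stay ordered.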
I'll wait for the next round for patches 1&2 since Jason seems to have
found a small issue with them.
-Daniel
>
> Everything else in here lgtm.
> -Daniel
>
>
>
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 ++-
> > drivers/gpu/drm/drm_prime.c | 67 +++++++++++++++------
> > drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 3 +-
> > drivers/gpu/drm/msm/msm_gem.c | 2 +-
> > drivers/gpu/drm/nouveau/nouveau_bo.c | 5 +-
> > drivers/gpu/drm/radeon/radeon_ttm.c | 9 ++-
> > drivers/gpu/drm/vgem/vgem_drv.c | 3 +-
> > drivers/gpu/drm/vkms/vkms_gem.c | 2 +-
> > drivers/gpu/drm/xen/xen_drm_front_gem.c | 4 +-
> > include/drm/drm_prime.h | 7 ++-
> > 10 files changed, 69 insertions(+), 42 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > index ac463e706b19..6a65490de391 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > @@ -1011,8 +1011,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
> > goto release_sg;
> >
> > /* convert SG to linear array of pages and dma addresses */
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> >
> > return 0;
> >
> > @@ -1345,9 +1345,8 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
> > ttm->sg = sgt;
> > }
> >
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > - gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> > ttm_tt_set_populated(ttm);
> > return 0;
> > }
> > diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> > index 4910c446db83..8b750c074494 100644
> > --- a/drivers/gpu/drm/drm_prime.c
> > +++ b/drivers/gpu/drm/drm_prime.c
> > @@ -954,27 +954,25 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> > EXPORT_SYMBOL(drm_gem_prime_import);
> >
> > /**
> > - * drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
> > + * drm_prime_sg_to_page_array - convert an sg table into a page array
> > * @sgt: scatter-gather table to convert
> > - * @pages: optional array of page pointers to store the page array in
> > - * @addrs: optional array to store the dma bus address of each page
> > - * @max_entries: size of both the passed-in arrays
> > + * @pages: array of page pointers to store the pages in
> > + * @max_entries: size of the passed-in array
> > *
> > - * Exports an sg table into an array of pages and addresses. This is currently
> > - * required by the TTM driver in order to do correct fault handling.
> > + * Exports an sg table into an array of pages.
> > *
> > - * Drivers can use this in their &drm_driver.gem_prime_import_sg_table
> > - * implementation.
> > + * This function is deprecated and its use is strongly discouraged.
> > + * The page array is only useful for page faults and those can corrupt fields
> > + * in the struct page if they are not handled by the exporting driver.
> > */
> > -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > - dma_addr_t *addrs, int max_entries)
> > +int __deprecated drm_prime_sg_to_page_array(struct sg_table *sgt,
> > + struct page **pages,
> > + int max_entries)
> > {
> > unsigned count;
> > struct scatterlist *sg;
> > struct page *page;
> > u32 page_len, page_index;
> > - dma_addr_t addr;
> > - u32 dma_len, dma_index;
> >
> > /*
> > * Scatterlist elements contains both pages and DMA addresses, but
> > @@ -984,14 +982,11 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > * described by the sg_dma_address(sg).
> > */
> > page_index = 0;
> > - dma_index = 0;
> > for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> > page_len = sg->length;
> > page = sg_page(sg);
> > - dma_len = sg_dma_len(sg);
> > - addr = sg_dma_address(sg);
> >
> > - while (pages && page_len > 0) {
> > + while (page_len > 0) {
> > if (WARN_ON(page_index >= max_entries))
> > return -1;
> > pages[page_index] = page;
> > @@ -999,7 +994,43 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > page_len -= PAGE_SIZE;
> > page_index++;
> > }
> > - while (addrs && dma_len > 0) {
> > + }
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(drm_prime_sg_to_page_array);
> > +
> > +/**
> > + * drm_prime_sg_to_dma_addr_array - convert an sg table into a dma addr array
> > + * @sgt: scatter-gather table to convert
> > + * @addrs: array to store the dma bus address of each page
> > + * @max_entries: size of the passed-in array
> > + *
> > + * Exports an sg table into an array of addresses.
> > + *
> > + * Drivers should use this in their &drm_driver.gem_prime_import_sg_table
>
> s/should/can/
>
> There's no requirement, if your driver just handles everything as an sgt
> there's no conversion needed.
>
> > + * implementation.
> > + */
> > +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> > + int max_entries)
> > +{
> > + struct scatterlist *sg;
> > + u32 dma_len, dma_index;
> > + dma_addr_t addr;
> > + unsigned count;
> > +
> > + /*
> > + * Scatterlist elements contain both pages and DMA addresses, but
> > + * one should not assume a 1:1 relation between them. The sg->length is
> > + * the size of the physical memory chunk described by the sg->page,
> > + * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
> > + * described by the sg_dma_address(sg).
> > + */
> > + dma_index = 0;
> > + for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> > + dma_len = sg_dma_len(sg);
> > + addr = sg_dma_address(sg);
> > +
> > + while (dma_len > 0) {
> > if (WARN_ON(dma_index >= max_entries))
> > return -1;
> > addrs[dma_index] = addr;
> > @@ -1010,7 +1041,7 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > }
> > return 0;
> > }
> > -EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
> > +EXPORT_SYMBOL(drm_prime_sg_to_dma_addr_array);
> >
> > /**
> > * drm_prime_gem_destroy - helper to clean up a PRIME-imported GEM object
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> > index 135fbff6fecf..8c04b8e8054c 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> > @@ -133,8 +133,7 @@ struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> > goto fail;
> > }
> >
> > - ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages,
> > - NULL, npages);
> > + ret = drm_prime_sg_to_page_array(sgt, etnaviv_obj->pages, npages);
> > if (ret)
> > goto fail;
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> > index a71f42870d5e..616b87641740 100644
> > --- a/drivers/gpu/drm/msm/msm_gem.c
> > +++ b/drivers/gpu/drm/msm/msm_gem.c
> > @@ -1174,7 +1174,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
> > goto fail;
> > }
> >
> > - ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
> > + ret = drm_prime_sg_to_page_array(sgt, msm_obj->pages, npages);
> > if (ret) {
> > mutex_unlock(&msm_obj->lock);
> > goto fail;
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
> > index e378bb491688..835edd74ef59 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
> > @@ -1299,9 +1299,8 @@ nouveau_ttm_tt_populate(struct ttm_bo_device *bdev,
> > return 0;
> >
> > if (slave && ttm->sg) {
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > - ttm_dma->dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, ttm_dma->dma_address,
> > + ttm->num_pages);
> > ttm_tt_set_populated(ttm);
> > return 0;
> > }
> > diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> > index 4b92cdbcd29b..7997e4564576 100644
> > --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> > +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> > @@ -474,8 +474,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
> > if (r)
> > goto release_sg;
> >
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> >
> > return 0;
> >
> > @@ -642,9 +642,8 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
> > }
> >
> > if (slave && ttm->sg) {
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > - gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> > ttm_tt_set_populated(ttm);
> > return 0;
> > }
> > diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> > index ea0eecae5153..e505e5a291b3 100644
> > --- a/drivers/gpu/drm/vgem/vgem_drv.c
> > +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> > @@ -356,8 +356,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> > }
> >
> > obj->pages_pin_count++; /* perma-pinned */
> > - drm_prime_sg_to_page_addr_arrays(obj->table, obj->pages, NULL,
> > - npages);
> > + drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
> > return &obj->base;
> > }
> >
> > diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
> > index 19a0e260a4df..a2ff21f47101 100644
> > --- a/drivers/gpu/drm/vkms/vkms_gem.c
> > +++ b/drivers/gpu/drm/vkms/vkms_gem.c
> > @@ -256,6 +256,6 @@ vkms_prime_import_sg_table(struct drm_device *dev,
> > return ERR_PTR(-ENOMEM);
> > }
> >
> > - drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, npages);
> > + drm_prime_sg_to_page_array(sg, obj->pages, npages);
> > return &obj->gem;
> > }
> > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > index f3830a0d1808..f4150ddfc5e2 100644
> > --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > @@ -220,8 +220,8 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
> >
> > xen_obj->sgt_imported = sgt;
> >
> > - ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
> > - NULL, xen_obj->num_pages);
> > + ret = drm_prime_sg_to_page_array(sgt, xen_obj->pages,
> > + xen_obj->num_pages);
> > if (ret < 0)
> > return ERR_PTR(ret);
> >
> > diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
> > index 093f760cc131..4bda9ab3a3bb 100644
> > --- a/include/drm/drm_prime.h
> > +++ b/include/drm/drm_prime.h
> > @@ -103,8 +103,9 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> >
> > void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg);
> >
> > -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > - dma_addr_t *addrs, int max_pages);
> > -
> > +int drm_prime_sg_to_page_array(struct sg_table *sgt, struct page **pages,
> > + int max_pages);
> > +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> > + int max_pages);
> >
> > #endif /* __DRM_PRIME_H__ */
> > --
> > 2.17.1
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 6/6] drm/prime: document that use the page array is deprecated v2
@ 2020-10-09 16:27 ` Daniel Vetter
0 siblings, 0 replies; 29+ messages in thread
From: Daniel Vetter @ 2020-10-09 16:27 UTC (permalink / raw)
To: Christian König, Thomas Zimmermann
Cc: Linux MM, Linux Kernel Mailing List,
moderated list:DMA BUFFER SHARING FRAMEWORK, dri-devel,
open list:DMA BUFFER SHARING FRAMEWORK, Wilson, Chris,
Dave Airlie, Andrew Morton, Sumit Semwal, Matthew Wilcox,
John Hubbard, Jason Gunthorpe
On Fri, Oct 9, 2020 at 6:24 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Fri, Oct 09, 2020 at 05:03:42PM +0200, Christian König wrote:
> > We have reoccurring requests on this so better document that
> > this approach doesn't work and dma_buf_mmap() needs to be used instead.
> >
> > v2: split it into two functions
> >
> > Signed-off-by: Christian König <christian.koenig@amd.com>
>
> Patches 3-5:
>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> This one looks good, but you have it on a strange baseline. This doesn't
> contain the sg walking fixes from Marek, so reintroduces the bugs.
> Probably need to request a backmerge chain, first of -rc8 into drm-next,
> and then that into drm-misc-next.
Marek's patch is in drm-next, so just needs a backmerge into drm-misc-next.
Thomas, can you pls do that? We need 0552daac2d18f
I'll wait for the next round for patches 1&2 since Jason seems to have
found a small issue with them.
-Daniel
>
> Everything else in here lgtm.
> -Daniel
>
>
>
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 ++-
> > drivers/gpu/drm/drm_prime.c | 67 +++++++++++++++------
> > drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 3 +-
> > drivers/gpu/drm/msm/msm_gem.c | 2 +-
> > drivers/gpu/drm/nouveau/nouveau_bo.c | 5 +-
> > drivers/gpu/drm/radeon/radeon_ttm.c | 9 ++-
> > drivers/gpu/drm/vgem/vgem_drv.c | 3 +-
> > drivers/gpu/drm/vkms/vkms_gem.c | 2 +-
> > drivers/gpu/drm/xen/xen_drm_front_gem.c | 4 +-
> > include/drm/drm_prime.h | 7 ++-
> > 10 files changed, 69 insertions(+), 42 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > index ac463e706b19..6a65490de391 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > @@ -1011,8 +1011,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
> > goto release_sg;
> >
> > /* convert SG to linear array of pages and dma addresses */
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> >
> > return 0;
> >
> > @@ -1345,9 +1345,8 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
> > ttm->sg = sgt;
> > }
> >
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > - gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> > ttm_tt_set_populated(ttm);
> > return 0;
> > }
> > diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> > index 4910c446db83..8b750c074494 100644
> > --- a/drivers/gpu/drm/drm_prime.c
> > +++ b/drivers/gpu/drm/drm_prime.c
> > @@ -954,27 +954,25 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> > EXPORT_SYMBOL(drm_gem_prime_import);
> >
> > /**
> > - * drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
> > + * drm_prime_sg_to_page_array - convert an sg table into a page array
> > * @sgt: scatter-gather table to convert
> > - * @pages: optional array of page pointers to store the page array in
> > - * @addrs: optional array to store the dma bus address of each page
> > - * @max_entries: size of both the passed-in arrays
> > + * @pages: array of page pointers to store the pages in
> > + * @max_entries: size of the passed-in array
> > *
> > - * Exports an sg table into an array of pages and addresses. This is currently
> > - * required by the TTM driver in order to do correct fault handling.
> > + * Exports an sg table into an array of pages.
> > *
> > - * Drivers can use this in their &drm_driver.gem_prime_import_sg_table
> > - * implementation.
> > + * This function is deprecated and strongly discouraged to be used.
> > + * The page array is only useful for page faults and those can corrupt fields
> > + * in the struct page if they are not handled by the exporting driver.
> > */
> > -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > - dma_addr_t *addrs, int max_entries)
> > +int __deprecated drm_prime_sg_to_page_array(struct sg_table *sgt,
> > + struct page **pages,
> > + int max_entries)
> > {
> > unsigned count;
> > struct scatterlist *sg;
> > struct page *page;
> > u32 page_len, page_index;
> > - dma_addr_t addr;
> > - u32 dma_len, dma_index;
> >
> > /*
> > * Scatterlist elements contains both pages and DMA addresses, but
> > @@ -984,14 +982,11 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > * described by the sg_dma_address(sg).
> > */
> > page_index = 0;
> > - dma_index = 0;
> > for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> > page_len = sg->length;
> > page = sg_page(sg);
> > - dma_len = sg_dma_len(sg);
> > - addr = sg_dma_address(sg);
> >
> > - while (pages && page_len > 0) {
> > + while (page_len > 0) {
> > if (WARN_ON(page_index >= max_entries))
> > return -1;
> > pages[page_index] = page;
> > @@ -999,7 +994,43 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > page_len -= PAGE_SIZE;
> > page_index++;
> > }
> > - while (addrs && dma_len > 0) {
> > + }
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(drm_prime_sg_to_page_array);
> > +
> > +/**
> > + * drm_prime_sg_to_dma_addr_array - convert an sg table into a dma addr array
> > + * @sgt: scatter-gather table to convert
> > + * @addrs: array to store the dma bus address of each page
> > + * @max_entries: size of both the passed-in arrays
> > + *
> > + * Exports an sg table into an array of addresses.
> > + *
> > + * Drivers should use this in their &drm_driver.gem_prime_import_sg_table
>
> s/should/can/
>
> There's no requirement, if your driver just handles everything as an sgt
> there's no conversion needed.
>
> > + * implementation.
> > + */
> > +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> > + int max_entries)
> > +{
> > + struct scatterlist *sg;
> > + u32 dma_len, dma_index;
> > + dma_addr_t addr;
> > + unsigned count;
> > +
> > + /*
> > + * Scatterlist elements contain both pages and DMA addresses, but
> > + * one should not assume a 1:1 relation between them. The sg->length is
> > + * the size of the physical memory chunk described by the sg->page,
> > + * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
> > + * described by the sg_dma_address(sg).
> > + */
> > + dma_index = 0;
> > + for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> > + dma_len = sg_dma_len(sg);
> > + addr = sg_dma_address(sg);
> > +
> > + while (dma_len > 0) {
> > if (WARN_ON(dma_index >= max_entries))
> > return -1;
> > addrs[dma_index] = addr;
> > @@ -1010,7 +1041,7 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > }
> > return 0;
> > }
> > -EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
> > +EXPORT_SYMBOL(drm_prime_sg_to_dma_addr_array);
> >
> > /**
> > * drm_prime_gem_destroy - helper to clean up a PRIME-imported GEM object
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> > index 135fbff6fecf..8c04b8e8054c 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> > @@ -133,8 +133,7 @@ struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> > goto fail;
> > }
> >
> > - ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages,
> > - NULL, npages);
> > + ret = drm_prime_sg_to_page_array(sgt, etnaviv_obj->pages, npages);
> > if (ret)
> > goto fail;
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> > index a71f42870d5e..616b87641740 100644
> > --- a/drivers/gpu/drm/msm/msm_gem.c
> > +++ b/drivers/gpu/drm/msm/msm_gem.c
> > @@ -1174,7 +1174,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
> > goto fail;
> > }
> >
> > - ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
> > + ret = drm_prime_sg_to_page_array(sgt, msm_obj->pages, npages);
> > if (ret) {
> > mutex_unlock(&msm_obj->lock);
> > goto fail;
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
> > index e378bb491688..835edd74ef59 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
> > @@ -1299,9 +1299,8 @@ nouveau_ttm_tt_populate(struct ttm_bo_device *bdev,
> > return 0;
> >
> > if (slave && ttm->sg) {
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > - ttm_dma->dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, ttm_dma->dma_address,
> > + ttm->num_pages);
> > ttm_tt_set_populated(ttm);
> > return 0;
> > }
> > diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> > index 4b92cdbcd29b..7997e4564576 100644
> > --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> > +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> > @@ -474,8 +474,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
> > if (r)
> > goto release_sg;
> >
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> >
> > return 0;
> >
> > @@ -642,9 +642,8 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
> > }
> >
> > if (slave && ttm->sg) {
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > - gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> > ttm_tt_set_populated(ttm);
> > return 0;
> > }
> > diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> > index ea0eecae5153..e505e5a291b3 100644
> > --- a/drivers/gpu/drm/vgem/vgem_drv.c
> > +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> > @@ -356,8 +356,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> > }
> >
> > obj->pages_pin_count++; /* perma-pinned */
> > - drm_prime_sg_to_page_addr_arrays(obj->table, obj->pages, NULL,
> > - npages);
> > + drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
> > return &obj->base;
> > }
> >
> > diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
> > index 19a0e260a4df..a2ff21f47101 100644
> > --- a/drivers/gpu/drm/vkms/vkms_gem.c
> > +++ b/drivers/gpu/drm/vkms/vkms_gem.c
> > @@ -256,6 +256,6 @@ vkms_prime_import_sg_table(struct drm_device *dev,
> > return ERR_PTR(-ENOMEM);
> > }
> >
> > - drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, npages);
> > + drm_prime_sg_to_page_array(sg, obj->pages, npages);
> > return &obj->gem;
> > }
> > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > index f3830a0d1808..f4150ddfc5e2 100644
> > --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > @@ -220,8 +220,8 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
> >
> > xen_obj->sgt_imported = sgt;
> >
> > - ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
> > - NULL, xen_obj->num_pages);
> > + ret = drm_prime_sg_to_page_array(sgt, xen_obj->pages,
> > + xen_obj->num_pages);
> > if (ret < 0)
> > return ERR_PTR(ret);
> >
> > diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
> > index 093f760cc131..4bda9ab3a3bb 100644
> > --- a/include/drm/drm_prime.h
> > +++ b/include/drm/drm_prime.h
> > @@ -103,8 +103,9 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> >
> > void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg);
> >
> > -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > - dma_addr_t *addrs, int max_pages);
> > -
> > +int drm_prime_sg_to_page_array(struct sg_table *sgt, struct page **pages,
> > + int max_pages);
> > +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> > + int max_pages);
> >
> > #endif /* __DRM_PRIME_H__ */
> > --
> > 2.17.1
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 6/6] drm/prime: document that use the page array is deprecated v2
@ 2020-10-09 16:27 ` Daniel Vetter
0 siblings, 0 replies; 29+ messages in thread
From: Daniel Vetter @ 2020-10-09 16:27 UTC (permalink / raw)
To: Christian König, Thomas Zimmermann
Cc: Jason Gunthorpe, John Hubbard, Linux Kernel Mailing List,
dri-devel, Wilson, Chris,
moderated list:DMA BUFFER SHARING FRAMEWORK, Linux MM,
Matthew Wilcox, Dave Airlie, Andrew Morton,
open list:DMA BUFFER SHARING FRAMEWORK
On Fri, Oct 9, 2020 at 6:24 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Fri, Oct 09, 2020 at 05:03:42PM +0200, Christian König wrote:
> > We have recurring requests on this, so better document that
> > this approach doesn't work and dma_buf_mmap() needs to be used instead.
> >
> > v2: split it into two functions
> >
> > Signed-off-by: Christian König <christian.koenig@amd.com>
>
> Patches 3-5:
>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> This one looks good, but you have it on a strange baseline. This doesn't
> contain the sg walking fixes from Marek, so reintroduces the bugs.
> Probably need to request a backmerge chain, first of -rc8 into drm-next,
> and then that into drm-misc-next.
Marek's patch is in drm-next, so just needs a backmerge into drm-misc-next.
Thomas, can you pls do that? We need 0552daac2d18f
I'll wait for the next round for patches 1&2 since Jason seems to have
found a small issue with them.
-Daniel
>
> Everything else in here lgtm.
> -Daniel
>
>
>
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 ++-
> > drivers/gpu/drm/drm_prime.c | 67 +++++++++++++++------
> > drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 3 +-
> > drivers/gpu/drm/msm/msm_gem.c | 2 +-
> > drivers/gpu/drm/nouveau/nouveau_bo.c | 5 +-
> > drivers/gpu/drm/radeon/radeon_ttm.c | 9 ++-
> > drivers/gpu/drm/vgem/vgem_drv.c | 3 +-
> > drivers/gpu/drm/vkms/vkms_gem.c | 2 +-
> > drivers/gpu/drm/xen/xen_drm_front_gem.c | 4 +-
> > include/drm/drm_prime.h | 7 ++-
> > 10 files changed, 69 insertions(+), 42 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > index ac463e706b19..6a65490de391 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > @@ -1011,8 +1011,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_bo_device *bdev,
> > goto release_sg;
> >
> > /* convert SG to linear array of pages and dma addresses */
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> >
> > return 0;
> >
> > @@ -1345,9 +1345,8 @@ static int amdgpu_ttm_tt_populate(struct ttm_bo_device *bdev,
> > ttm->sg = sgt;
> > }
> >
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > - gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> > ttm_tt_set_populated(ttm);
> > return 0;
> > }
> > diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> > index 4910c446db83..8b750c074494 100644
> > --- a/drivers/gpu/drm/drm_prime.c
> > +++ b/drivers/gpu/drm/drm_prime.c
> > @@ -954,27 +954,25 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> > EXPORT_SYMBOL(drm_gem_prime_import);
> >
> > /**
> > - * drm_prime_sg_to_page_addr_arrays - convert an sg table into a page array
> > + * drm_prime_sg_to_page_array - convert an sg table into a page array
> > * @sgt: scatter-gather table to convert
> > - * @pages: optional array of page pointers to store the page array in
> > - * @addrs: optional array to store the dma bus address of each page
> > - * @max_entries: size of both the passed-in arrays
> > + * @pages: array of page pointers to store the pages in
> > + * @max_entries: size of the passed-in array
> > *
> > - * Exports an sg table into an array of pages and addresses. This is currently
> > - * required by the TTM driver in order to do correct fault handling.
> > + * Exports an sg table into an array of pages.
> > *
> > - * Drivers can use this in their &drm_driver.gem_prime_import_sg_table
> > - * implementation.
> > + * This function is deprecated; its use is strongly discouraged.
> > + * The page array is only useful for page faults and those can corrupt fields
> > + * in the struct page if they are not handled by the exporting driver.
> > */
> > -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > - dma_addr_t *addrs, int max_entries)
> > +int __deprecated drm_prime_sg_to_page_array(struct sg_table *sgt,
> > + struct page **pages,
> > + int max_entries)
> > {
> > unsigned count;
> > struct scatterlist *sg;
> > struct page *page;
> > u32 page_len, page_index;
> > - dma_addr_t addr;
> > - u32 dma_len, dma_index;
> >
> > /*
> > * Scatterlist elements contain both pages and DMA addresses, but
> > @@ -984,14 +982,11 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > * described by the sg_dma_address(sg).
> > */
> > page_index = 0;
> > - dma_index = 0;
> > for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> > page_len = sg->length;
> > page = sg_page(sg);
> > - dma_len = sg_dma_len(sg);
> > - addr = sg_dma_address(sg);
> >
> > - while (pages && page_len > 0) {
> > + while (page_len > 0) {
> > if (WARN_ON(page_index >= max_entries))
> > return -1;
> > pages[page_index] = page;
> > @@ -999,7 +994,43 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > page_len -= PAGE_SIZE;
> > page_index++;
> > }
> > - while (addrs && dma_len > 0) {
> > + }
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(drm_prime_sg_to_page_array);
> > +
> > +/**
> > + * drm_prime_sg_to_dma_addr_array - convert an sg table into a dma addr array
> > + * @sgt: scatter-gather table to convert
> > + * @addrs: array to store the dma bus address of each page
> > + * @max_entries: size of both the passed-in arrays
> > + *
> > + * Exports an sg table into an array of addresses.
> > + *
> > + * Drivers should use this in their &drm_driver.gem_prime_import_sg_table
>
> s/should/can/
>
> There's no requirement, if your driver just handles everything as an sgt
> there's no conversion needed.
>
> > + * implementation.
> > + */
> > +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> > + int max_entries)
> > +{
> > + struct scatterlist *sg;
> > + u32 dma_len, dma_index;
> > + dma_addr_t addr;
> > + unsigned count;
> > +
> > + /*
> > + * Scatterlist elements contain both pages and DMA addresses, but
> > + * one should not assume a 1:1 relation between them. The sg->length is
> > + * the size of the physical memory chunk described by the sg->page,
> > + * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
> > + * described by the sg_dma_address(sg).
> > + */
> > + dma_index = 0;
> > + for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> > + dma_len = sg_dma_len(sg);
> > + addr = sg_dma_address(sg);
> > +
> > + while (dma_len > 0) {
> > if (WARN_ON(dma_index >= max_entries))
> > return -1;
> > addrs[dma_index] = addr;
> > @@ -1010,7 +1041,7 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > }
> > return 0;
> > }
> > -EXPORT_SYMBOL(drm_prime_sg_to_page_addr_arrays);
> > +EXPORT_SYMBOL(drm_prime_sg_to_dma_addr_array);
> >
> > /**
> > * drm_prime_gem_destroy - helper to clean up a PRIME-imported GEM object
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> > index 135fbff6fecf..8c04b8e8054c 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> > @@ -133,8 +133,7 @@ struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> > goto fail;
> > }
> >
> > - ret = drm_prime_sg_to_page_addr_arrays(sgt, etnaviv_obj->pages,
> > - NULL, npages);
> > + ret = drm_prime_sg_to_page_array(sgt, etnaviv_obj->pages, npages);
> > if (ret)
> > goto fail;
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> > index a71f42870d5e..616b87641740 100644
> > --- a/drivers/gpu/drm/msm/msm_gem.c
> > +++ b/drivers/gpu/drm/msm/msm_gem.c
> > @@ -1174,7 +1174,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
> > goto fail;
> > }
> >
> > - ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
> > + ret = drm_prime_sg_to_page_array(sgt, msm_obj->pages, npages);
> > if (ret) {
> > mutex_unlock(&msm_obj->lock);
> > goto fail;
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
> > index e378bb491688..835edd74ef59 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
> > @@ -1299,9 +1299,8 @@ nouveau_ttm_tt_populate(struct ttm_bo_device *bdev,
> > return 0;
> >
> > if (slave && ttm->sg) {
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > - ttm_dma->dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, ttm_dma->dma_address,
> > + ttm->num_pages);
> > ttm_tt_set_populated(ttm);
> > return 0;
> > }
> > diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> > index 4b92cdbcd29b..7997e4564576 100644
> > --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> > +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> > @@ -474,8 +474,8 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_bo_device *bdev, struct ttm_tt *
> > if (r)
> > goto release_sg;
> >
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL, gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> >
> > return 0;
> >
> > @@ -642,9 +642,8 @@ static int radeon_ttm_tt_populate(struct ttm_bo_device *bdev,
> > }
> >
> > if (slave && ttm->sg) {
> > - drm_prime_sg_to_page_addr_arrays(ttm->sg, NULL,
> > - gtt->ttm.dma_address,
> > - ttm->num_pages);
> > + drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
> > + ttm->num_pages);
> > ttm_tt_set_populated(ttm);
> > return 0;
> > }
> > diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> > index ea0eecae5153..e505e5a291b3 100644
> > --- a/drivers/gpu/drm/vgem/vgem_drv.c
> > +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> > @@ -356,8 +356,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
> > }
> >
> > obj->pages_pin_count++; /* perma-pinned */
> > - drm_prime_sg_to_page_addr_arrays(obj->table, obj->pages, NULL,
> > - npages);
> > + drm_prime_sg_to_page_array(obj->table, obj->pages, npages);
> > return &obj->base;
> > }
> >
> > diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
> > index 19a0e260a4df..a2ff21f47101 100644
> > --- a/drivers/gpu/drm/vkms/vkms_gem.c
> > +++ b/drivers/gpu/drm/vkms/vkms_gem.c
> > @@ -256,6 +256,6 @@ vkms_prime_import_sg_table(struct drm_device *dev,
> > return ERR_PTR(-ENOMEM);
> > }
> >
> > - drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, npages);
> > + drm_prime_sg_to_page_array(sg, obj->pages, npages);
> > return &obj->gem;
> > }
> > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > index f3830a0d1808..f4150ddfc5e2 100644
> > --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > @@ -220,8 +220,8 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
> >
> > xen_obj->sgt_imported = sgt;
> >
> > - ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
> > - NULL, xen_obj->num_pages);
> > + ret = drm_prime_sg_to_page_array(sgt, xen_obj->pages,
> > + xen_obj->num_pages);
> > if (ret < 0)
> > return ERR_PTR(ret);
> >
> > diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
> > index 093f760cc131..4bda9ab3a3bb 100644
> > --- a/include/drm/drm_prime.h
> > +++ b/include/drm/drm_prime.h
> > @@ -103,8 +103,9 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> >
> > void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg);
> >
> > -int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
> > - dma_addr_t *addrs, int max_pages);
> > -
> > +int drm_prime_sg_to_page_array(struct sg_table *sgt, struct page **pages,
> > + int max_pages);
> > +int drm_prime_sg_to_dma_addr_array(struct sg_table *sgt, dma_addr_t *addrs,
> > + int max_pages);
> >
> > #endif /* __DRM_PRIME_H__ */
> > --
> > 2.17.1
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
* Re: [PATCH 1/6] mm: mmap: fix fput in error path
2020-10-09 15:03 ` Christian König
@ 2020-10-09 22:04 ` Andrew Morton
-1 siblings, 0 replies; 29+ messages in thread
From: Andrew Morton @ 2020-10-09 22:04 UTC (permalink / raw)
To: Christian König
Cc: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, daniel, sumit.semwal, willy, jhubbard, jgg,
Miaohe Lin
On Fri, 9 Oct 2020 17:03:37 +0200 "Christian König" <ckoenig.leichtzumerken@gmail.com> wrote:
> Patch "495c10cc1c0c CHROMIUM: dma-buf: restore args..."
> adds a workaround for a bug in mmap_region.
>
> As the comment states, the ->mmap() callback can change
> vma->vm_file and so we might call fput() on the wrong file.
>
> Revert the workaround and properly fix this in mmap_region.
>
Doesn't this patch series address the same thing as
https://lkml.kernel.org/r/20200916090733.31427-1-linmiaohe@huawei.com?
* Re: [PATCH 1/6] mm: mmap: fix fput in error path
2020-10-09 22:04 ` Andrew Morton
@ 2020-10-09 22:25 ` Jason Gunthorpe
-1 siblings, 0 replies; 29+ messages in thread
From: Jason Gunthorpe @ 2020-10-09 22:25 UTC (permalink / raw)
To: Andrew Morton
Cc: Christian König, linux-mm, linux-kernel, linaro-mm-sig,
dri-devel, linux-media, chris, airlied, daniel, sumit.semwal,
willy, jhubbard, Miaohe Lin
On Fri, Oct 09, 2020 at 03:04:20PM -0700, Andrew Morton wrote:
> On Fri, 9 Oct 2020 17:03:37 +0200 "Christian König" <ckoenig.leichtzumerken@gmail.com> wrote:
>
> > Patch "495c10cc1c0c CHROMIUM: dma-buf: restore args..."
> > adds a workaround for a bug in mmap_region.
> >
> > As the comment states, the ->mmap() callback can change
> > vma->vm_file and so we might call fput() on the wrong file.
> >
> > Revert the workaround and properly fix this in mmap_region.
> >
>
> Doesn't this patch series address the same thing as
> https://lkml.kernel.org/r/20200916090733.31427-1-linmiaohe@huawei.com?
Same basic issue, looks like both of these patches should be combined
to plug it fully.
Jason
* Re: [PATCH 2/6] mm: introduce vma_set_file function v3
2020-10-09 15:14 ` Jason Gunthorpe
@ 2020-10-12 8:11 ` Christian König
-1 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2020-10-12 8:11 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, akpm, daniel, sumit.semwal, willy, jhubbard
Am 09.10.20 um 17:14 schrieb Jason Gunthorpe:
> On Fri, Oct 09, 2020 at 05:03:38PM +0200, Christian König wrote:
>> +/*
>> + * Change backing file, only valid to use during initial VMA setup.
>> + */
>> +void vma_set_file(struct vm_area_struct *vma, struct file *file)
>> +{
>> + if (file)
>> + get_file(file);
>> +
>> + swap(vma->vm_file, file);
>> +
>> + if (file)
>> + fput(file);
>> +}
> fput crashes when file is NULL so the error handling after
> unmap_and_free_vma: can't handle this case, similarly vm_file can't be
> NULL either.
>
> So just simply:
>
> swap(vma->vm_file, file);
> get_file(vma->vm_file);
> fput(file);
>
> Will do?
I was considering this as well, yes.
> Just let it crash if any of them are wrongly NULL.
Mhm, changing from anonymous to file-backed or the reverse is probably
not such a good idea.
So yes, catching those problems early is probably the best approach we
could take.
Going to do this in v4 if nobody objects.
Regards,
Christian.
>
> Jason
* Re: [PATCH 1/6] mm: mmap: fix fput in error path
2020-10-09 22:25 ` Jason Gunthorpe
@ 2020-10-12 8:13 ` Christian König
-1 siblings, 0 replies; 29+ messages in thread
From: Christian König @ 2020-10-12 8:13 UTC (permalink / raw)
To: Jason Gunthorpe, Andrew Morton
Cc: linux-mm, linux-kernel, linaro-mm-sig, dri-devel, linux-media,
chris, airlied, daniel, sumit.semwal, willy, jhubbard,
Miaohe Lin
Am 10.10.20 um 00:25 schrieb Jason Gunthorpe:
> On Fri, Oct 09, 2020 at 03:04:20PM -0700, Andrew Morton wrote:
>> On Fri, 9 Oct 2020 17:03:37 +0200 "Christian König" <ckoenig.leichtzumerken@gmail.com> wrote:
>>
>>> Patch "495c10cc1c0c CHROMIUM: dma-buf: restore args..."
>>> adds a workaround for a bug in mmap_region.
>>>
>>> As the comment states, the ->mmap() callback can change
>>> vma->vm_file and so we might call fput() on the wrong file.
>>>
>>> Revert the workaround and properly fix this in mmap_region.
>>>
>> Doesn't this patch series address the same thing as
>> https://lkml.kernel.org/r/20200916090733.31427-1-linmiaohe@huawei.com?
> Same basic issue, looks like both of these patches should be combined
> to plug it fully.
Yes, agree completely.
It's a different error path, but we need to fix both occasions.
Christian.
>
> Jason
end of thread, other threads:[~2020-10-12 8:13 UTC | newest]
Thread overview:
2020-10-09 15:03 [PATCH 1/6] mm: mmap: fix fput in error path Christian König
2020-10-09 15:03 ` [PATCH 2/6] mm: introduce vma_set_file function v3 Christian König
2020-10-09 15:14 ` Jason Gunthorpe
2020-10-12  8:11 ` Christian König
2020-10-09 15:03 ` [PATCH 3/6] drm/radeon: stop using pages with drm_prime_sg_to_page_addr_arrays Christian König
2020-10-09 15:03 ` [PATCH 4/6] drm/amdgpu: " Christian König
2020-10-09 15:03 ` [PATCH 5/6] drm/nouveau: " Christian König
2020-10-09 15:03 ` [PATCH 6/6] drm/prime: document that use the page array is deprecated v2 Christian König
2020-10-09 16:24 ` Daniel Vetter
2020-10-09 16:27 ` Daniel Vetter
2020-10-09 15:15 ` [PATCH 1/6] mm: mmap: fix fput in error path Jason Gunthorpe
2020-10-09 22:04 ` Andrew Morton
2020-10-09 22:25 ` Jason Gunthorpe
2020-10-12  8:13 ` Christian König