* [PATCH 0/2] drm/cma-helper: Move mmap to object functions
@ 2020-11-23 11:56 Thomas Zimmermann
  2020-11-23 11:56 ` [PATCH 1/2] drm/cma-helper: Remove prime infix from GEM " Thomas Zimmermann
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Thomas Zimmermann @ 2020-11-23 11:56 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, eric
  Cc: Thomas Zimmermann, dri-devel

This patchset moves CMA's mmap into a GEM object function. This change
allows for removing CMA's mmap and gem_prime_mmap callbacks in favor of
the default implementation.

Tested with vc4 on RPi3.
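
For drivers built on the CMA helpers, the wiring after this series looks
roughly like the sketch below. This is an illustrative fragment only, not a
complete driver; the "example_" names are hypothetical, while the macros and
helpers are the ones touched by the patches.

```c
/* Illustrative wiring after this series; "example_" names are hypothetical. */
#include <drm/drm_drv.h>
#include <drm/drm_gem_cma_helper.h>

/* The file_operations now use the generic drm_gem_mmap(). */
DEFINE_DRM_GEM_CMA_FOPS(example_fops);

static const struct drm_driver example_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET,
	/*
	 * DRM_GEM_CMA_DRIVER_OPS sets gem_prime_mmap to the generic
	 * drm_gem_prime_mmap(); both the file-mmap and the PRIME path
	 * then dispatch to &drm_gem_object_funcs.mmap, i.e. the new
	 * drm_gem_cma_mmap().
	 */
	DRM_GEM_CMA_DRIVER_OPS,
	.fops = &example_fops,
	.name = "example",
};
```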

Thomas Zimmermann (2):
  drm/cma-helper: Remove prime infix from GEM object functions
  drm/cma-helper: Implement mmap as GEM CMA object functions

 drivers/gpu/drm/drm_file.c           |   3 +-
 drivers/gpu/drm/drm_gem_cma_helper.c | 141 +++++++++------------------
 drivers/gpu/drm/pl111/pl111_drv.c    |   2 +-
 drivers/gpu/drm/vc4/vc4_bo.c         |   6 +-
 include/drm/drm_gem_cma_helper.h     |  14 +--
 5 files changed, 58 insertions(+), 108 deletions(-)

--
2.29.2

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* [PATCH 1/2] drm/cma-helper: Remove prime infix from GEM object functions
  2020-11-23 11:56 [PATCH 0/2] drm/cma-helper: Move mmap to object functions Thomas Zimmermann
@ 2020-11-23 11:56 ` Thomas Zimmermann
  2020-11-23 11:56 ` [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA " Thomas Zimmermann
  2020-11-23 12:32 ` [PATCH 0/2] drm/cma-helper: Move mmap to " Maxime Ripard
  2 siblings, 0 replies; 19+ messages in thread
From: Thomas Zimmermann @ 2020-11-23 11:56 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, eric
  Cc: Thomas Zimmermann, dri-devel

These functions are no longer directly related to the PRIME interfaces;
they are now GEM object functions. Rename them accordingly and update
the docs.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 20 ++++++++++----------
 drivers/gpu/drm/vc4/vc4_bo.c         |  4 ++--
 include/drm/drm_gem_cma_helper.h     |  4 ++--
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 4d5c1d86b022..6a4ef335ebc9 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -36,8 +36,8 @@
 static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
 	.free = drm_gem_cma_free_object,
 	.print_info = drm_gem_cma_print_info,
-	.get_sg_table = drm_gem_cma_prime_get_sg_table,
-	.vmap = drm_gem_cma_prime_vmap,
+	.get_sg_table = drm_gem_cma_get_sg_table,
+	.vmap = drm_gem_cma_vmap,
 	.vm_ops = &drm_gem_cma_vm_ops,
 };
 
@@ -424,18 +424,18 @@ void drm_gem_cma_print_info(struct drm_printer *p, unsigned int indent,
 EXPORT_SYMBOL(drm_gem_cma_print_info);
 
 /**
- * drm_gem_cma_prime_get_sg_table - provide a scatter/gather table of pinned
+ * drm_gem_cma_get_sg_table - provide a scatter/gather table of pinned
  *     pages for a CMA GEM object
  * @obj: GEM object
  *
- * This function exports a scatter/gather table suitable for PRIME usage by
+ * This function exports a scatter/gather table by
  * calling the standard DMA mapping API. Drivers using the CMA helpers should
  * set this as their &drm_gem_object_funcs.get_sg_table callback.
  *
  * Returns:
  * A pointer to the scatter/gather table of pinned pages or NULL on failure.
  */
-struct sg_table *drm_gem_cma_prime_get_sg_table(struct drm_gem_object *obj)
+struct sg_table *drm_gem_cma_get_sg_table(struct drm_gem_object *obj)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 	struct sg_table *sgt;
@@ -456,7 +456,7 @@ struct sg_table *drm_gem_cma_prime_get_sg_table(struct drm_gem_object *obj)
 	kfree(sgt);
 	return ERR_PTR(ret);
 }
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_get_sg_table);
+EXPORT_SYMBOL_GPL(drm_gem_cma_get_sg_table);
 
 /**
  * drm_gem_cma_prime_import_sg_table - produce a CMA GEM object from another
@@ -528,13 +528,13 @@ int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
 
 /**
- * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
+ * drm_gem_cma_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
  * @map: Returns the kernel virtual address of the CMA GEM object's backing
  *       store.
  *
- * This function maps a buffer exported via DRM PRIME into the kernel's
+ * This function maps a buffer into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
  * kernel virtual address space this simply returns the cached virtual
  * address. Drivers using the CMA helpers should set this as their DRM
@@ -543,7 +543,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * Returns:
  * 0 on success, or a negative error code otherwise.
  */
-int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
+int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
@@ -551,7 +551,7 @@ int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
+EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
 
 /**
  * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another driver's
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 469d1b4f2643..813e6cb3f9af 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -385,7 +385,7 @@ static const struct vm_operations_struct vc4_vm_ops = {
 static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
 	.free = vc4_free_object,
 	.export = vc4_prime_export,
-	.get_sg_table = drm_gem_cma_prime_get_sg_table,
+	.get_sg_table = drm_gem_cma_get_sg_table,
 	.vmap = vc4_prime_vmap,
 	.vm_ops = &vc4_vm_ops,
 };
@@ -794,7 +794,7 @@ int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 		return -EINVAL;
 	}
 
-	return drm_gem_cma_prime_vmap(obj, map);
+	return drm_gem_cma_vmap(obj, map);
 }
 
 struct drm_gem_object *
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 5605c1b8f779..4680275ab339 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -96,14 +96,14 @@ unsigned long drm_gem_cma_get_unmapped_area(struct file *filp,
 void drm_gem_cma_print_info(struct drm_printer *p, unsigned int indent,
 			    const struct drm_gem_object *obj);
 
-struct sg_table *drm_gem_cma_prime_get_sg_table(struct drm_gem_object *obj);
+struct sg_table *drm_gem_cma_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *
 drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct dma_buf_attachment *attach,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /**
  * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
-- 
2.29.2


* [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
  2020-11-23 11:56 [PATCH 0/2] drm/cma-helper: Move mmap to object functions Thomas Zimmermann
  2020-11-23 11:56 ` [PATCH 1/2] drm/cma-helper: Remove prime infix from GEM " Thomas Zimmermann
@ 2020-11-23 11:56 ` Thomas Zimmermann
  2021-01-14 12:51     ` Kieran Bingham
  2020-11-23 12:32 ` [PATCH 0/2] drm/cma-helper: Move mmap to " Maxime Ripard
  2 siblings, 1 reply; 19+ messages in thread
From: Thomas Zimmermann @ 2020-11-23 11:56 UTC (permalink / raw)
  To: maarten.lankhorst, mripard, airlied, daniel, eric
  Cc: Thomas Zimmermann, dri-devel

The new GEM object function drm_gem_cma_mmap() sets the VMA flags
and offset as in the old implementation and immediately maps in the
buffer's memory pages.

Changing CMA helpers to use the GEM object function allows for the
removal of the special implementations for mmap and gem_prime_mmap
callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
are now used.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_file.c           |   3 +-
 drivers/gpu/drm/drm_gem_cma_helper.c | 121 +++++++++------------------
 drivers/gpu/drm/pl111/pl111_drv.c    |   2 +-
 drivers/gpu/drm/vc4/vc4_bo.c         |   2 +-
 include/drm/drm_gem_cma_helper.h     |  10 +--
 5 files changed, 44 insertions(+), 94 deletions(-)

diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index b50380fa80ce..80886d50d0f1 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
  * The memory mapping implementation will vary depending on how the driver
  * manages memory. Legacy drivers will use the deprecated drm_legacy_mmap()
  * function, modern drivers should use one of the provided memory-manager
- * specific implementations. For GEM-based drivers this is drm_gem_mmap(), and
- * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
+ * specific implementations. For GEM-based drivers this is drm_gem_mmap().
  *
  * No other file operations are supported by the DRM userspace API. Overall the
  * following is an example &file_operations structure::
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 6a4ef335ebc9..7942cf05cd93 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
 	.print_info = drm_gem_cma_print_info,
 	.get_sg_table = drm_gem_cma_get_sg_table,
 	.vmap = drm_gem_cma_vmap,
+	.mmap = drm_gem_cma_mmap,
 	.vm_ops = &drm_gem_cma_vm_ops,
 };
 
@@ -277,62 +278,6 @@ const struct vm_operations_struct drm_gem_cma_vm_ops = {
 };
 EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
 
-static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
-				struct vm_area_struct *vma)
-{
-	int ret;
-
-	/*
-	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
-	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
-	 * the whole buffer.
-	 */
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_pgoff = 0;
-
-	ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
-			  cma_obj->paddr, vma->vm_end - vma->vm_start);
-	if (ret)
-		drm_gem_vm_close(vma);
-
-	return ret;
-}
-
-/**
- * drm_gem_cma_mmap - memory-map a CMA GEM object
- * @filp: file object
- * @vma: VMA for the area to be mapped
- *
- * This function implements an augmented version of the GEM DRM file mmap
- * operation for CMA objects: In addition to the usual GEM VMA setup it
- * immediately faults in the entire object instead of using on-demaind
- * faulting. Drivers which employ the CMA helpers should use this function
- * as their ->mmap() handler in the DRM device file's file_operations
- * structure.
- *
- * Instead of directly referencing this function, drivers should use the
- * DEFINE_DRM_GEM_CMA_FOPS().macro.
- *
- * Returns:
- * 0 on success or a negative error code on failure.
- */
-int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	struct drm_gem_cma_object *cma_obj;
-	struct drm_gem_object *gem_obj;
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret)
-		return ret;
-
-	gem_obj = vma->vm_private_data;
-	cma_obj = to_drm_gem_cma_obj(gem_obj);
-
-	return drm_gem_cma_mmap_obj(cma_obj, vma);
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
-
 #ifndef CONFIG_MMU
 /**
  * drm_gem_cma_get_unmapped_area - propose address for mapping in noMMU cases
@@ -500,33 +445,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
 
-/**
- * drm_gem_cma_prime_mmap - memory-map an exported CMA GEM object
- * @obj: GEM object
- * @vma: VMA for the area to be mapped
- *
- * This function maps a buffer imported via DRM PRIME into a userspace
- * process's address space. Drivers that use the CMA helpers should set this
- * as their &drm_driver.gem_prime_mmap callback.
- *
- * Returns:
- * 0 on success or a negative error code on failure.
- */
-int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma)
-{
-	struct drm_gem_cma_object *cma_obj;
-	int ret;
-
-	ret = drm_gem_mmap_obj(obj, obj->size, vma);
-	if (ret < 0)
-		return ret;
-
-	cma_obj = to_drm_gem_cma_obj(obj);
-	return drm_gem_cma_mmap_obj(cma_obj, vma);
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
-
 /**
  * drm_gem_cma_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
@@ -553,6 +471,43 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
 
+/**
+ * drm_gem_cma_mmap - memory-map an exported CMA GEM object
+ * @obj: GEM object
+ * @vma: VMA for the area to be mapped
+ *
+ * This function maps a buffer into a userspace process's address space.
+ * In addition to the usual GEM VMA setup it immediately faults in the entire
+ * object instead of using on-demand faulting. Drivers that use the CMA
+ * helpers should set this as their &drm_gem_object_funcs.mmap callback.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+{
+	struct drm_gem_cma_object *cma_obj;
+	int ret;
+
+	/*
+	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
+	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
+	 * the whole buffer.
+	 */
+	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
+	vma->vm_flags &= ~VM_PFNMAP;
+
+	cma_obj = to_drm_gem_cma_obj(obj);
+
+	ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
+			  cma_obj->paddr, vma->vm_end - vma->vm_start);
+	if (ret)
+		drm_gem_vm_close(vma);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
+
 /**
  * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another driver's
  *	scatter/gather table and get the virtual address of the buffer
diff --git a/drivers/gpu/drm/pl111/pl111_drv.c b/drivers/gpu/drm/pl111/pl111_drv.c
index 40e6708fbbe2..e4dcaef6c143 100644
--- a/drivers/gpu/drm/pl111/pl111_drv.c
+++ b/drivers/gpu/drm/pl111/pl111_drv.c
@@ -228,7 +228,7 @@ static const struct drm_driver pl111_drm_driver = {
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_import_sg_table = pl111_gem_import_sg_table,
-	.gem_prime_mmap = drm_gem_cma_prime_mmap,
+	.gem_prime_mmap = drm_gem_prime_mmap,
 
 #if defined(CONFIG_DEBUG_FS)
 	.debugfs_init = pl111_debugfs_init,
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 813e6cb3f9af..dc316cb79e00 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -782,7 +782,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 		return -EINVAL;
 	}
 
-	return drm_gem_cma_prime_mmap(obj, vma);
+	return drm_gem_prime_mmap(obj, vma);
 }
 
 int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 4680275ab339..0a9711caa3e8 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -59,7 +59,7 @@ struct drm_gem_cma_object {
 		.poll		= drm_poll,\
 		.read		= drm_read,\
 		.llseek		= noop_llseek,\
-		.mmap		= drm_gem_cma_mmap,\
+		.mmap		= drm_gem_mmap,\
 		DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
 	}
 
@@ -76,9 +76,6 @@ int drm_gem_cma_dumb_create(struct drm_file *file_priv,
 			    struct drm_device *drm,
 			    struct drm_mode_create_dumb *args);
 
-/* set vm_flags and we can change the VM attribute to other one at here */
-int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma);
-
 /* allocate physical memory */
 struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
 					      size_t size);
@@ -101,9 +98,8 @@ struct drm_gem_object *
 drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct dma_buf_attachment *attach,
 				  struct sg_table *sgt);
-int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
-			   struct vm_area_struct *vma);
 int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 
 /**
  * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
@@ -123,7 +119,7 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd, \
 	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle, \
 	.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
-	.gem_prime_mmap		= drm_gem_cma_prime_mmap
+	.gem_prime_mmap		= drm_gem_prime_mmap
 
 /**
  * DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
-- 
2.29.2


* Re: [PATCH 0/2] drm/cma-helper: Move mmap to object functions
  2020-11-23 11:56 [PATCH 0/2] drm/cma-helper: Move mmap to object functions Thomas Zimmermann
  2020-11-23 11:56 ` [PATCH 1/2] drm/cma-helper: Remove prime infix from GEM " Thomas Zimmermann
  2020-11-23 11:56 ` [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA " Thomas Zimmermann
@ 2020-11-23 12:32 ` Maxime Ripard
  2020-11-23 18:58   ` Thomas Zimmermann
  2 siblings, 1 reply; 19+ messages in thread
From: Maxime Ripard @ 2020-11-23 12:32 UTC (permalink / raw)
  To: Thomas Zimmermann; +Cc: airlied, dri-devel



On Mon, Nov 23, 2020 at 12:56:44PM +0100, Thomas Zimmermann wrote:
> This patchset moves CMA's mmap into a GEM object function. This change
> allows for removing CMA's mmap and gem_prime_mmap callbacks in favor of
> the default implementation.
> 
> Tested with vc4 on RPi3.

For both patches,
Acked-by: Maxime Ripard <mripard@kernel.org>

Maxime



* Re: [PATCH 0/2] drm/cma-helper: Move mmap to object functions
  2020-11-23 12:32 ` [PATCH 0/2] drm/cma-helper: Move mmap to " Maxime Ripard
@ 2020-11-23 18:58   ` Thomas Zimmermann
  0 siblings, 0 replies; 19+ messages in thread
From: Thomas Zimmermann @ 2020-11-23 18:58 UTC (permalink / raw)
  To: Maxime Ripard; +Cc: airlied, dri-devel





Am 23.11.20 um 13:32 schrieb Maxime Ripard:
> On Mon, Nov 23, 2020 at 12:56:44PM +0100, Thomas Zimmermann wrote:
>> This patchset moves CMA's mmap into a GEM object function. This change
>> allows for removing CMA's mmap and gem_prime_mmap callbacks in favor of
>> the default implementation.
>>
>> Tested with vc4 on RPi3.
> 
> For both patches,
> Acked-by: Maxime Ripard <mripard@kernel.org>

Thanks, Maxime!

> 
> Maxime
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



* Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
  2020-11-23 11:56 ` [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA " Thomas Zimmermann
@ 2021-01-14 12:51     ` Kieran Bingham
  0 siblings, 0 replies; 19+ messages in thread
From: Kieran Bingham @ 2021-01-14 12:51 UTC (permalink / raw)
  To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
	eric, Linux-Renesas, Laurent Pinchart
  Cc: dri-devel

Hi Thomas,

On 23/11/2020 11:56, Thomas Zimmermann wrote:
> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
> and offset as in the old implementation and immediately maps in the
> buffer's memory pages.
> 
> Changing CMA helpers to use the GEM object function allows for the
> removal of the special implementations for mmap and gem_prime_mmap
> callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
> are now used.

I've encountered a memory leak regression in our Renesas R-Car DU tests,
and git bisection has led me to this patch (as commit f5ca8eb6f9).

Running the tests sequentially while grepping /proc/meminfo for Cma shows
that CMA memory is not released until it is exhausted and the allocations
fail (seen in [0]), as the error report shows:

>     self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay, mode.vdisplay, "XR24"))
> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory
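
The per-run CMA check described above can be sketched as a small script
(illustrative only; the test invocation is a hypothetical stand-in for the
actual R-Car DU test, and the CmaTotal/CmaFree counters exist only on kernels
built with CONFIG_CMA=y):

```shell
#!/bin/sh
# Print the CMA counters from /proc/meminfo around each test run, so a
# leak shows up as CmaFree not returning to its baseline.
show_cma() {
    grep -E '^Cma(Total|Free):' /proc/meminfo || echo "CMA counters not available"
}

show_cma
for i in 1 2 3; do
    # ./run_du_test.sh "$i"   # hypothetical stand-in for the R-Car DU test
    show_cma                  # CmaFree should be back at its baseline here
done
```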


Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
successfully [1] on the commit before it (bc2532ab7c2).

Reverting f5ca8eb6f9 also produces a successful pass [2]:

 [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
 [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
 [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert


I don't believe we handle mmap specially in the R-Car DU driver, so I
wonder whether this issue has hit anyone else as well.

Any ideas for a repair without a revert? Or do we just need to submit
one?

I've yet to fully understand the implications of the patch below.

Thanks
--
Regards

Kieran



> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_file.c           |   3 +-
>  drivers/gpu/drm/drm_gem_cma_helper.c | 121 +++++++++------------------
>  drivers/gpu/drm/pl111/pl111_drv.c    |   2 +-
>  drivers/gpu/drm/vc4/vc4_bo.c         |   2 +-
>  include/drm/drm_gem_cma_helper.h     |  10 +--
>  5 files changed, 44 insertions(+), 94 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
> index b50380fa80ce..80886d50d0f1 100644
> --- a/drivers/gpu/drm/drm_file.c
> +++ b/drivers/gpu/drm/drm_file.c
> @@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
>   * The memory mapping implementation will vary depending on how the driver
>   * manages memory. Legacy drivers will use the deprecated drm_legacy_mmap()
>   * function, modern drivers should use one of the provided memory-manager
> - * specific implementations. For GEM-based drivers this is drm_gem_mmap(), and
> - * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
> + * specific implementations. For GEM-based drivers this is drm_gem_mmap().
>   *
>   * No other file operations are supported by the DRM userspace API. Overall the
>   * following is an example &file_operations structure::
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 6a4ef335ebc9..7942cf05cd93 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
>  	.print_info = drm_gem_cma_print_info,
>  	.get_sg_table = drm_gem_cma_get_sg_table,
>  	.vmap = drm_gem_cma_vmap,
> +	.mmap = drm_gem_cma_mmap,
>  	.vm_ops = &drm_gem_cma_vm_ops,
>  };
>  
> @@ -277,62 +278,6 @@ const struct vm_operations_struct drm_gem_cma_vm_ops = {
>  };
>  EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
>  
> -static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
> -				struct vm_area_struct *vma)
> -{
> -	int ret;
> -
> -	/*
> -	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> -	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> -	 * the whole buffer.
> -	 */
> -	vma->vm_flags &= ~VM_PFNMAP;
> -	vma->vm_pgoff = 0;
> -
> -	ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
> -			  cma_obj->paddr, vma->vm_end - vma->vm_start);
> -	if (ret)
> -		drm_gem_vm_close(vma);
> -
> -	return ret;
> -}
> -
> -/**
> - * drm_gem_cma_mmap - memory-map a CMA GEM object
> - * @filp: file object
> - * @vma: VMA for the area to be mapped
> - *
> - * This function implements an augmented version of the GEM DRM file mmap
> - * operation for CMA objects: In addition to the usual GEM VMA setup it
> - * immediately faults in the entire object instead of using on-demaind
> - * faulting. Drivers which employ the CMA helpers should use this function
> - * as their ->mmap() handler in the DRM device file's file_operations
> - * structure.
> - *
> - * Instead of directly referencing this function, drivers should use the
> - * DEFINE_DRM_GEM_CMA_FOPS().macro.
> - *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> - */
> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
> -{
> -	struct drm_gem_cma_object *cma_obj;
> -	struct drm_gem_object *gem_obj;
> -	int ret;
> -
> -	ret = drm_gem_mmap(filp, vma);
> -	if (ret)
> -		return ret;
> -
> -	gem_obj = vma->vm_private_data;
> -	cma_obj = to_drm_gem_cma_obj(gem_obj);
> -
> -	return drm_gem_cma_mmap_obj(cma_obj, vma);
> -}
> -EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
> -
>  #ifndef CONFIG_MMU
>  /**
>   * drm_gem_cma_get_unmapped_area - propose address for mapping in noMMU cases
> @@ -500,33 +445,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
>  
> -/**
> - * drm_gem_cma_prime_mmap - memory-map an exported CMA GEM object
> - * @obj: GEM object
> - * @vma: VMA for the area to be mapped
> - *
> - * This function maps a buffer imported via DRM PRIME into a userspace
> - * process's address space. Drivers that use the CMA helpers should set this
> - * as their &drm_driver.gem_prime_mmap callback.
> - *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> - */
> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> -			   struct vm_area_struct *vma)
> -{
> -	struct drm_gem_cma_object *cma_obj;
> -	int ret;
> -
> -	ret = drm_gem_mmap_obj(obj, obj->size, vma);
> -	if (ret < 0)
> -		return ret;
> -
> -	cma_obj = to_drm_gem_cma_obj(obj);
> -	return drm_gem_cma_mmap_obj(cma_obj, vma);
> -}
> -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> -
>  /**
>   * drm_gem_cma_vmap - map a CMA GEM object into the kernel's virtual
>   *     address space
> @@ -553,6 +471,43 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
>  
> +/**
> + * drm_gem_cma_mmap - memory-map an exported CMA GEM object
> + * @obj: GEM object
> + * @vma: VMA for the area to be mapped
> + *
> + * This function maps a buffer into a userspace process's address space.
> + * In addition to the usual GEM VMA setup it immediately faults in the entire
> + * object instead of using on-demand faulting. Drivers that use the CMA
> + * helpers should set this as their &drm_gem_object_funcs.mmap callback.
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +{
> +	struct drm_gem_cma_object *cma_obj;
> +	int ret;
> +
> +	/*
> +	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> +	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> +	 * the whole buffer.
> +	 */
> +	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
> +	vma->vm_flags &= ~VM_PFNMAP;
> +
> +	cma_obj = to_drm_gem_cma_obj(obj);
> +
> +	ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
> +			  cma_obj->paddr, vma->vm_end - vma->vm_start);
> +	if (ret)
> +		drm_gem_vm_close(vma);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
> +
>  /**
>   * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another driver's
>   *	scatter/gather table and get the virtual address of the buffer
> diff --git a/drivers/gpu/drm/pl111/pl111_drv.c b/drivers/gpu/drm/pl111/pl111_drv.c
> index 40e6708fbbe2..e4dcaef6c143 100644
> --- a/drivers/gpu/drm/pl111/pl111_drv.c
> +++ b/drivers/gpu/drm/pl111/pl111_drv.c
> @@ -228,7 +228,7 @@ static const struct drm_driver pl111_drm_driver = {
>  	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>  	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>  	.gem_prime_import_sg_table = pl111_gem_import_sg_table,
> -	.gem_prime_mmap = drm_gem_cma_prime_mmap,
> +	.gem_prime_mmap = drm_gem_prime_mmap,
>  
>  #if defined(CONFIG_DEBUG_FS)
>  	.debugfs_init = pl111_debugfs_init,
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 813e6cb3f9af..dc316cb79e00 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -782,7 +782,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>  		return -EINVAL;
>  	}
>  
> -	return drm_gem_cma_prime_mmap(obj, vma);
> +	return drm_gem_prime_mmap(obj, vma);
>  }
>  
>  int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index 4680275ab339..0a9711caa3e8 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -59,7 +59,7 @@ struct drm_gem_cma_object {
>  		.poll		= drm_poll,\
>  		.read		= drm_read,\
>  		.llseek		= noop_llseek,\
> -		.mmap		= drm_gem_cma_mmap,\
> +		.mmap		= drm_gem_mmap,\
>  		DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
>  	}
>  
> @@ -76,9 +76,6 @@ int drm_gem_cma_dumb_create(struct drm_file *file_priv,
>  			    struct drm_device *drm,
>  			    struct drm_mode_create_dumb *args);
>  
> -/* set vm_flags and we can change the VM attribute to other one at here */
> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma);
> -
>  /* allocate physical memory */
>  struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
>  					      size_t size);
> @@ -101,9 +98,8 @@ struct drm_gem_object *
>  drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>  				  struct dma_buf_attachment *attach,
>  				  struct sg_table *sgt);
> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> -			   struct vm_area_struct *vma);
>  int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
>  
>  /**
>   * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
> @@ -123,7 +119,7 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd, \
>  	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle, \
>  	.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
> -	.gem_prime_mmap		= drm_gem_cma_prime_mmap
> +	.gem_prime_mmap		= drm_gem_prime_mmap
>  
>  /**
>   * DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
@ 2021-01-14 12:51     ` Kieran Bingham
  0 siblings, 0 replies; 19+ messages in thread
From: Kieran Bingham @ 2021-01-14 12:51 UTC (permalink / raw)
  To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
	eric, Linux-Renesas, Laurent Pinchart
  Cc: dri-devel

Hi Thomas,

On 23/11/2020 11:56, Thomas Zimmermann wrote:
> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
> and offset as in the old implementation and immediately maps in the
> buffer's memory pages.
> 
> Changing CMA helpers to use the GEM object function allows for the
> removal of the special implementations for mmap and gem_prime_mmap
> callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
> are now used.

I've encountered a memory leak regression in our Renesas R-Car DU tests,
and git bisection has led me to this patch (as commit f5ca8eb6f9).

Running the tests sequentially while grepping /proc/meminfo for Cma makes
it evident that CMA memory is not released: it is eventually exhausted and
the allocations fail (seen in [0]), as shown by the error report:
>     self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay, mode.vdisplay, "XR24"))
> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory

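For reference, the meminfo check described above can be scripted roughly as
follows. This is only a sketch, not part of the test suite; the helper name
`cma_free_kb` and the canned sample values are illustrative:

```shell
#!/bin/sh
# Extract CmaFree (in kB) from a file in /proc/meminfo format, so CMA
# consumption can be compared before and after each test run. CmaTotal and
# CmaFree are only present on kernels built with CONFIG_CMA.
cma_free_kb() {
    # $1: a meminfo-format file (defaults to the real /proc/meminfo)
    awk '/^CmaFree:/ { print $2 }' "${1:-/proc/meminfo}"
}

# Demonstration on canned data; on the target board you would call it with
# no argument around each DRM_IOCTL_MODE_CREATE_DUMB allocation and compare
# the before/after values.
sample=$(mktemp)
printf 'MemTotal:  8000000 kB\nCmaFree:    204800 kB\n' > "$sample"
cma_free_kb "$sample"   # prints 204800
rm -f "$sample"
```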

Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
successfully [1] on the previous commit (bc2532ab7c2).

Reverting f5ca8eb6f9 also produces a successful pass [2]:

 [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
 [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
 [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert


I don't believe we handle mmap specially in the RCar-DU driver, so I
wonder if this issue has hit anyone else as well?

Any ideas for a repair without a revert? Or do we just need to submit a
revert?

I've yet to fully understand the implications of the patch below.

Thanks
--
Regards

Kieran



> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_file.c           |   3 +-
>  drivers/gpu/drm/drm_gem_cma_helper.c | 121 +++++++++------------------
>  drivers/gpu/drm/pl111/pl111_drv.c    |   2 +-
>  drivers/gpu/drm/vc4/vc4_bo.c         |   2 +-
>  include/drm/drm_gem_cma_helper.h     |  10 +--
>  5 files changed, 44 insertions(+), 94 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
> index b50380fa80ce..80886d50d0f1 100644
> --- a/drivers/gpu/drm/drm_file.c
> +++ b/drivers/gpu/drm/drm_file.c
> @@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
>   * The memory mapping implementation will vary depending on how the driver
>   * manages memory. Legacy drivers will use the deprecated drm_legacy_mmap()
>   * function, modern drivers should use one of the provided memory-manager
> - * specific implementations. For GEM-based drivers this is drm_gem_mmap(), and
> - * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
> + * specific implementations. For GEM-based drivers this is drm_gem_mmap().
>   *
>   * No other file operations are supported by the DRM userspace API. Overall the
>   * following is an example &file_operations structure::
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 6a4ef335ebc9..7942cf05cd93 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
>  	.print_info = drm_gem_cma_print_info,
>  	.get_sg_table = drm_gem_cma_get_sg_table,
>  	.vmap = drm_gem_cma_vmap,
> +	.mmap = drm_gem_cma_mmap,
>  	.vm_ops = &drm_gem_cma_vm_ops,
>  };
>  
> @@ -277,62 +278,6 @@ const struct vm_operations_struct drm_gem_cma_vm_ops = {
>  };
>  EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
>  
> -static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
> -				struct vm_area_struct *vma)
> -{
> -	int ret;
> -
> -	/*
> -	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> -	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> -	 * the whole buffer.
> -	 */
> -	vma->vm_flags &= ~VM_PFNMAP;
> -	vma->vm_pgoff = 0;
> -
> -	ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
> -			  cma_obj->paddr, vma->vm_end - vma->vm_start);
> -	if (ret)
> -		drm_gem_vm_close(vma);
> -
> -	return ret;
> -}
> -
> -/**
> - * drm_gem_cma_mmap - memory-map a CMA GEM object
> - * @filp: file object
> - * @vma: VMA for the area to be mapped
> - *
> - * This function implements an augmented version of the GEM DRM file mmap
> - * operation for CMA objects: In addition to the usual GEM VMA setup it
> - * immediately faults in the entire object instead of using on-demaind
> - * faulting. Drivers which employ the CMA helpers should use this function
> - * as their ->mmap() handler in the DRM device file's file_operations
> - * structure.
> - *
> - * Instead of directly referencing this function, drivers should use the
> - * DEFINE_DRM_GEM_CMA_FOPS().macro.
> - *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> - */
> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
> -{
> -	struct drm_gem_cma_object *cma_obj;
> -	struct drm_gem_object *gem_obj;
> -	int ret;
> -
> -	ret = drm_gem_mmap(filp, vma);
> -	if (ret)
> -		return ret;
> -
> -	gem_obj = vma->vm_private_data;
> -	cma_obj = to_drm_gem_cma_obj(gem_obj);
> -
> -	return drm_gem_cma_mmap_obj(cma_obj, vma);
> -}
> -EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
> -
>  #ifndef CONFIG_MMU
>  /**
>   * drm_gem_cma_get_unmapped_area - propose address for mapping in noMMU cases
> @@ -500,33 +445,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
>  
> -/**
> - * drm_gem_cma_prime_mmap - memory-map an exported CMA GEM object
> - * @obj: GEM object
> - * @vma: VMA for the area to be mapped
> - *
> - * This function maps a buffer imported via DRM PRIME into a userspace
> - * process's address space. Drivers that use the CMA helpers should set this
> - * as their &drm_driver.gem_prime_mmap callback.
> - *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> - */
> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> -			   struct vm_area_struct *vma)
> -{
> -	struct drm_gem_cma_object *cma_obj;
> -	int ret;
> -
> -	ret = drm_gem_mmap_obj(obj, obj->size, vma);
> -	if (ret < 0)
> -		return ret;
> -
> -	cma_obj = to_drm_gem_cma_obj(obj);
> -	return drm_gem_cma_mmap_obj(cma_obj, vma);
> -}
> -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
> -
>  /**
>   * drm_gem_cma_vmap - map a CMA GEM object into the kernel's virtual
>   *     address space
> @@ -553,6 +471,43 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
>  
> +/**
> + * drm_gem_cma_mmap - memory-map an exported CMA GEM object
> + * @obj: GEM object
> + * @vma: VMA for the area to be mapped
> + *
> + * This function maps a buffer into a userspace process's address space.
> + * In addition to the usual GEM VMA setup it immediately faults in the entire
> + * object instead of using on-demand faulting. Drivers that use the CMA
> + * helpers should set this as their &drm_gem_object_funcs.mmap callback.
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +{
> +	struct drm_gem_cma_object *cma_obj;
> +	int ret;
> +
> +	/*
> +	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> +	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> +	 * the whole buffer.
> +	 */
> +	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
> +	vma->vm_flags &= ~VM_PFNMAP;
> +
> +	cma_obj = to_drm_gem_cma_obj(obj);
> +
> +	ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
> +			  cma_obj->paddr, vma->vm_end - vma->vm_start);
> +	if (ret)
> +		drm_gem_vm_close(vma);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
> +
>  /**
>   * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another driver's
>   *	scatter/gather table and get the virtual address of the buffer
> diff --git a/drivers/gpu/drm/pl111/pl111_drv.c b/drivers/gpu/drm/pl111/pl111_drv.c
> index 40e6708fbbe2..e4dcaef6c143 100644
> --- a/drivers/gpu/drm/pl111/pl111_drv.c
> +++ b/drivers/gpu/drm/pl111/pl111_drv.c
> @@ -228,7 +228,7 @@ static const struct drm_driver pl111_drm_driver = {
>  	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>  	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>  	.gem_prime_import_sg_table = pl111_gem_import_sg_table,
> -	.gem_prime_mmap = drm_gem_cma_prime_mmap,
> +	.gem_prime_mmap = drm_gem_prime_mmap,
>  
>  #if defined(CONFIG_DEBUG_FS)
>  	.debugfs_init = pl111_debugfs_init,
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 813e6cb3f9af..dc316cb79e00 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -782,7 +782,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>  		return -EINVAL;
>  	}
>  
> -	return drm_gem_cma_prime_mmap(obj, vma);
> +	return drm_gem_prime_mmap(obj, vma);
>  }
>  
>  int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index 4680275ab339..0a9711caa3e8 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -59,7 +59,7 @@ struct drm_gem_cma_object {
>  		.poll		= drm_poll,\
>  		.read		= drm_read,\
>  		.llseek		= noop_llseek,\
> -		.mmap		= drm_gem_cma_mmap,\
> +		.mmap		= drm_gem_mmap,\
>  		DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
>  	}
>  
> @@ -76,9 +76,6 @@ int drm_gem_cma_dumb_create(struct drm_file *file_priv,
>  			    struct drm_device *drm,
>  			    struct drm_mode_create_dumb *args);
>  
> -/* set vm_flags and we can change the VM attribute to other one at here */
> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma);
> -
>  /* allocate physical memory */
>  struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
>  					      size_t size);
> @@ -101,9 +98,8 @@ struct drm_gem_object *
>  drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>  				  struct dma_buf_attachment *attach,
>  				  struct sg_table *sgt);
> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
> -			   struct vm_area_struct *vma);
>  int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
>  
>  /**
>   * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
> @@ -123,7 +119,7 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd, \
>  	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle, \
>  	.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
> -	.gem_prime_mmap		= drm_gem_cma_prime_mmap
> +	.gem_prime_mmap		= drm_gem_prime_mmap
>  
>  /**
>   * DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
  2021-01-14 12:51     ` Kieran Bingham
@ 2021-01-14 13:26       ` Thomas Zimmermann
  -1 siblings, 0 replies; 19+ messages in thread
From: Thomas Zimmermann @ 2021-01-14 13:26 UTC (permalink / raw)
  To: kieran.bingham+renesas, maarten.lankhorst, mripard, airlied,
	daniel, eric, Linux-Renesas, Laurent Pinchart
  Cc: dri-devel


[-- Attachment #1.1.1: Type: text/plain, Size: 12295 bytes --]

Hi Kieran

Am 14.01.21 um 13:51 schrieb Kieran Bingham:
> Hi Thomas,
> 
> On 23/11/2020 11:56, Thomas Zimmermann wrote:
>> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
>> and offset as in the old implementation and immediately maps in the
>> buffer's memory pages.
>>
>> Changing CMA helpers to use the GEM object function allows for the
>> removal of the special implementations for mmap and gem_prime_mmap
>> callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
>> are now used.
> 
> I've encountered a memory leak regression in our Renesas R-Car DU tests,
> and git bisection has led me to this patch (as commit f5ca8eb6f9).
> 
> Running the tests sequentially, while grepping /proc/meminfo for Cma, it
> is evident that CMA memory is not released, until exhausted and the
> allocations fail (seen in [0]) shown by the error report:
> 
>>      self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay, mode.vdisplay, "XR24"))
>> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory
> 
> 
> Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
> successfully [1] on the commit previous to that (bc2532ab7c2):
> 
> Reverting f5ca8eb6f9 also produces a successful pass [2]
> 
>   [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
>   [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
>   [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert
> 
> 
> I don't believe we handle mmap specially in the RCar-DU driver, so I
> wonder if this issue has hit anyone else as well?
> 
> Any ideas of a repair without a revert ? Or do we just need to submit a
> revert?

I think we might not be setting the VMA ops and are therefore not
finalizing the BO correctly. Could you please apply the attached
(quick-and-dirty) patch and try again?

Best regards
Thomas

> 
> I've yet to fully understand the implications of the patch below.
> 
> Thanks
> --
> Regards
> 
> Kieran
> 
> 
> 
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>>   drivers/gpu/drm/drm_file.c           |   3 +-
>>   drivers/gpu/drm/drm_gem_cma_helper.c | 121 +++++++++------------------
>>   drivers/gpu/drm/pl111/pl111_drv.c    |   2 +-
>>   drivers/gpu/drm/vc4/vc4_bo.c         |   2 +-
>>   include/drm/drm_gem_cma_helper.h     |  10 +--
>>   5 files changed, 44 insertions(+), 94 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>> index b50380fa80ce..80886d50d0f1 100644
>> --- a/drivers/gpu/drm/drm_file.c
>> +++ b/drivers/gpu/drm/drm_file.c
>> @@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
>>    * The memory mapping implementation will vary depending on how the driver
>>    * manages memory. Legacy drivers will use the deprecated drm_legacy_mmap()
>>    * function, modern drivers should use one of the provided memory-manager
>> - * specific implementations. For GEM-based drivers this is drm_gem_mmap(), and
>> - * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
>> + * specific implementations. For GEM-based drivers this is drm_gem_mmap().
>>    *
>>    * No other file operations are supported by the DRM userspace API. Overall the
>>    * following is an example &file_operations structure::
>> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
>> index 6a4ef335ebc9..7942cf05cd93 100644
>> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
>> @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
>>   	.print_info = drm_gem_cma_print_info,
>>   	.get_sg_table = drm_gem_cma_get_sg_table,
>>   	.vmap = drm_gem_cma_vmap,
>> +	.mmap = drm_gem_cma_mmap,
>>   	.vm_ops = &drm_gem_cma_vm_ops,
>>   };
>>   
>> @@ -277,62 +278,6 @@ const struct vm_operations_struct drm_gem_cma_vm_ops = {
>>   };
>>   EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
>>   
>> -static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
>> -				struct vm_area_struct *vma)
>> -{
>> -	int ret;
>> -
>> -	/*
>> -	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
>> -	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
>> -	 * the whole buffer.
>> -	 */
>> -	vma->vm_flags &= ~VM_PFNMAP;
>> -	vma->vm_pgoff = 0;
>> -
>> -	ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
>> -			  cma_obj->paddr, vma->vm_end - vma->vm_start);
>> -	if (ret)
>> -		drm_gem_vm_close(vma);
>> -
>> -	return ret;
>> -}
>> -
>> -/**
>> - * drm_gem_cma_mmap - memory-map a CMA GEM object
>> - * @filp: file object
>> - * @vma: VMA for the area to be mapped
>> - *
>> - * This function implements an augmented version of the GEM DRM file mmap
>> - * operation for CMA objects: In addition to the usual GEM VMA setup it
>> - * immediately faults in the entire object instead of using on-demaind
>> - * faulting. Drivers which employ the CMA helpers should use this function
>> - * as their ->mmap() handler in the DRM device file's file_operations
>> - * structure.
>> - *
>> - * Instead of directly referencing this function, drivers should use the
>> - * DEFINE_DRM_GEM_CMA_FOPS().macro.
>> - *
>> - * Returns:
>> - * 0 on success or a negative error code on failure.
>> - */
>> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
>> -{
>> -	struct drm_gem_cma_object *cma_obj;
>> -	struct drm_gem_object *gem_obj;
>> -	int ret;
>> -
>> -	ret = drm_gem_mmap(filp, vma);
>> -	if (ret)
>> -		return ret;
>> -
>> -	gem_obj = vma->vm_private_data;
>> -	cma_obj = to_drm_gem_cma_obj(gem_obj);
>> -
>> -	return drm_gem_cma_mmap_obj(cma_obj, vma);
>> -}
>> -EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
>> -
>>   #ifndef CONFIG_MMU
>>   /**
>>    * drm_gem_cma_get_unmapped_area - propose address for mapping in noMMU cases
>> @@ -500,33 +445,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>>   }
>>   EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
>>   
>> -/**
>> - * drm_gem_cma_prime_mmap - memory-map an exported CMA GEM object
>> - * @obj: GEM object
>> - * @vma: VMA for the area to be mapped
>> - *
>> - * This function maps a buffer imported via DRM PRIME into a userspace
>> - * process's address space. Drivers that use the CMA helpers should set this
>> - * as their &drm_driver.gem_prime_mmap callback.
>> - *
>> - * Returns:
>> - * 0 on success or a negative error code on failure.
>> - */
>> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>> -			   struct vm_area_struct *vma)
>> -{
>> -	struct drm_gem_cma_object *cma_obj;
>> -	int ret;
>> -
>> -	ret = drm_gem_mmap_obj(obj, obj->size, vma);
>> -	if (ret < 0)
>> -		return ret;
>> -
>> -	cma_obj = to_drm_gem_cma_obj(obj);
>> -	return drm_gem_cma_mmap_obj(cma_obj, vma);
>> -}
>> -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>> -
>>   /**
>>    * drm_gem_cma_vmap - map a CMA GEM object into the kernel's virtual
>>    *     address space
>> @@ -553,6 +471,43 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>>   }
>>   EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
>>   
>> +/**
>> + * drm_gem_cma_mmap - memory-map an exported CMA GEM object
>> + * @obj: GEM object
>> + * @vma: VMA for the area to be mapped
>> + *
>> + * This function maps a buffer into a userspace process's address space.
>> + * In addition to the usual GEM VMA setup it immediately faults in the entire
>> + * object instead of using on-demand faulting. Drivers that use the CMA
>> + * helpers should set this as their &drm_gem_object_funcs.mmap callback.
>> + *
>> + * Returns:
>> + * 0 on success or a negative error code on failure.
>> + */
>> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>> +{
>> +	struct drm_gem_cma_object *cma_obj;
>> +	int ret;
>> +
>> +	/*
>> +	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
>> +	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
>> +	 * the whole buffer.
>> +	 */
>> +	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
>> +	vma->vm_flags &= ~VM_PFNMAP;
>> +
>> +	cma_obj = to_drm_gem_cma_obj(obj);
>> +
>> +	ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
>> +			  cma_obj->paddr, vma->vm_end - vma->vm_start);
>> +	if (ret)
>> +		drm_gem_vm_close(vma);
>> +
>> +	return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
>> +
>>   /**
>>    * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another driver's
>>    *	scatter/gather table and get the virtual address of the buffer
>> diff --git a/drivers/gpu/drm/pl111/pl111_drv.c b/drivers/gpu/drm/pl111/pl111_drv.c
>> index 40e6708fbbe2..e4dcaef6c143 100644
>> --- a/drivers/gpu/drm/pl111/pl111_drv.c
>> +++ b/drivers/gpu/drm/pl111/pl111_drv.c
>> @@ -228,7 +228,7 @@ static const struct drm_driver pl111_drm_driver = {
>>   	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>>   	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>>   	.gem_prime_import_sg_table = pl111_gem_import_sg_table,
>> -	.gem_prime_mmap = drm_gem_cma_prime_mmap,
>> +	.gem_prime_mmap = drm_gem_prime_mmap,
>>   
>>   #if defined(CONFIG_DEBUG_FS)
>>   	.debugfs_init = pl111_debugfs_init,
>> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
>> index 813e6cb3f9af..dc316cb79e00 100644
>> --- a/drivers/gpu/drm/vc4/vc4_bo.c
>> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
>> @@ -782,7 +782,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>>   		return -EINVAL;
>>   	}
>>   
>> -	return drm_gem_cma_prime_mmap(obj, vma);
>> +	return drm_gem_prime_mmap(obj, vma);
>>   }
>>   
>>   int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
>> index 4680275ab339..0a9711caa3e8 100644
>> --- a/include/drm/drm_gem_cma_helper.h
>> +++ b/include/drm/drm_gem_cma_helper.h
>> @@ -59,7 +59,7 @@ struct drm_gem_cma_object {
>>   		.poll		= drm_poll,\
>>   		.read		= drm_read,\
>>   		.llseek		= noop_llseek,\
>> -		.mmap		= drm_gem_cma_mmap,\
>> +		.mmap		= drm_gem_mmap,\
>>   		DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
>>   	}
>>   
>> @@ -76,9 +76,6 @@ int drm_gem_cma_dumb_create(struct drm_file *file_priv,
>>   			    struct drm_device *drm,
>>   			    struct drm_mode_create_dumb *args);
>>   
>> -/* set vm_flags and we can change the VM attribute to other one at here */
>> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma);
>> -
>>   /* allocate physical memory */
>>   struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
>>   					      size_t size);
>> @@ -101,9 +98,8 @@ struct drm_gem_object *
>>   drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>>   				  struct dma_buf_attachment *attach,
>>   				  struct sg_table *sgt);
>> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>> -			   struct vm_area_struct *vma);
>>   int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
>>   
>>   /**
>>    * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
>> @@ -123,7 +119,7 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>>   	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd, \
>>   	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle, \
>>   	.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
>> -	.gem_prime_mmap		= drm_gem_cma_prime_mmap
>> +	.gem_prime_mmap		= drm_gem_prime_mmap
>>   
>>   /**
>>    * DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
>>
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer

[-- Attachment #1.1.2: 0001-drm-cma-Set-vma-ops-in-mmap-function.patch --]
[-- Type: text/x-patch, Size: 942 bytes --]

From d0583fe22cd0cd29749ff679e46e13b58de325cb Mon Sep 17 00:00:00 2001
From: Thomas Zimmermann <tzimmermann@suse.de>
Date: Thu, 14 Jan 2021 14:21:51 +0100
Subject: [PATCH] drm/cma: Set vma ops in mmap function

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 7942cf05cd93..0bd192736169 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -489,6 +489,8 @@ int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	struct drm_gem_cma_object *cma_obj;
 	int ret;
 
+	vma->vm_ops = obj->funcs->vm_ops;
+
 	/*
 	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
 	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
-- 
2.29.2


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply related	[flat|nested] 19+ messages in thread

>> +
>>   /**
>>    * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another driver's
>>    *	scatter/gather table and get the virtual address of the buffer
>> diff --git a/drivers/gpu/drm/pl111/pl111_drv.c b/drivers/gpu/drm/pl111/pl111_drv.c
>> index 40e6708fbbe2..e4dcaef6c143 100644
>> --- a/drivers/gpu/drm/pl111/pl111_drv.c
>> +++ b/drivers/gpu/drm/pl111/pl111_drv.c
>> @@ -228,7 +228,7 @@ static const struct drm_driver pl111_drm_driver = {
>>   	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>>   	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>>   	.gem_prime_import_sg_table = pl111_gem_import_sg_table,
>> -	.gem_prime_mmap = drm_gem_cma_prime_mmap,
>> +	.gem_prime_mmap = drm_gem_prime_mmap,
>>   
>>   #if defined(CONFIG_DEBUG_FS)
>>   	.debugfs_init = pl111_debugfs_init,
>> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
>> index 813e6cb3f9af..dc316cb79e00 100644
>> --- a/drivers/gpu/drm/vc4/vc4_bo.c
>> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
>> @@ -782,7 +782,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>>   		return -EINVAL;
>>   	}
>>   
>> -	return drm_gem_cma_prime_mmap(obj, vma);
>> +	return drm_gem_prime_mmap(obj, vma);
>>   }
>>   
>>   int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
>> index 4680275ab339..0a9711caa3e8 100644
>> --- a/include/drm/drm_gem_cma_helper.h
>> +++ b/include/drm/drm_gem_cma_helper.h
>> @@ -59,7 +59,7 @@ struct drm_gem_cma_object {
>>   		.poll		= drm_poll,\
>>   		.read		= drm_read,\
>>   		.llseek		= noop_llseek,\
>> -		.mmap		= drm_gem_cma_mmap,\
>> +		.mmap		= drm_gem_mmap,\
>>   		DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
>>   	}
>>   
>> @@ -76,9 +76,6 @@ int drm_gem_cma_dumb_create(struct drm_file *file_priv,
>>   			    struct drm_device *drm,
>>   			    struct drm_mode_create_dumb *args);
>>   
>> -/* set vm_flags and we can change the VM attribute to other one at here */
>> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma);
>> -
>>   /* allocate physical memory */
>>   struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
>>   					      size_t size);
>> @@ -101,9 +98,8 @@ struct drm_gem_object *
>>   drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>>   				  struct dma_buf_attachment *attach,
>>   				  struct sg_table *sgt);
>> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>> -			   struct vm_area_struct *vma);
>>   int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
>>   
>>   /**
>>    * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
>> @@ -123,7 +119,7 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>>   	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd, \
>>   	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle, \
>>   	.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
>> -	.gem_prime_mmap		= drm_gem_cma_prime_mmap
>> +	.gem_prime_mmap		= drm_gem_prime_mmap
>>   
>>   /**
>>    * DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
>>
> 
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer

[-- Attachment #1.1.1.2: 0001-drm-cma-Set-vma-ops-in-mmap-function.patch --]
[-- Type: text/x-patch, Size: 942 bytes --]

From d0583fe22cd0cd29749ff679e46e13b58de325cb Mon Sep 17 00:00:00 2001
From: Thomas Zimmermann <tzimmermann@suse.de>
Date: Thu, 14 Jan 2021 14:21:51 +0100
Subject: [PATCH] drm/cma: Set vma ops in mmap function

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 7942cf05cd93..0bd192736169 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -489,6 +489,8 @@ int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	struct drm_gem_cma_object *cma_obj;
 	int ret;
 
+	vma->vm_ops = obj->funcs->vm_ops;
+
 	/*
 	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
 	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
-- 
2.29.2



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
  2021-01-14 13:26       ` Thomas Zimmermann
@ 2021-01-14 14:34         ` Kieran Bingham
  -1 siblings, 0 replies; 19+ messages in thread
From: Kieran Bingham @ 2021-01-14 14:34 UTC (permalink / raw)
  To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
	eric, Linux-Renesas, Laurent Pinchart
  Cc: dri-devel

Hi Thomas,

On 14/01/2021 13:26, Thomas Zimmermann wrote:
> Hi Kieran
> 
> On 14.01.21 at 13:51, Kieran Bingham wrote:
>> Hi Thomas,
>>
>> On 23/11/2020 11:56, Thomas Zimmermann wrote:
>>> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
>>> and offset as in the old implementation and immediately maps in the
>>> buffer's memory pages.
>>>
>>> Changing CMA helpers to use the GEM object function allows for the
>>> removal of the special implementations for mmap and gem_prime_mmap
>>> callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
>>> are now used.
>>
>> I've encountered a memory leak regression in our Renesas R-Car DU tests,
>> and git bisection has led me to this patch (as commit f5ca8eb6f9).
>>
>> Running the tests sequentially while grepping /proc/meminfo for Cma, it
>> is evident that CMA memory is not released until it is exhausted and the
>> allocations fail (seen in [0]), as shown by the error report:
>>
>>>      self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay,
>>> mode.vdisplay, "XR24"))
>>> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory
>>
>>
>> Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
>> successfully [1] on the commit previous to that (bc2532ab7c2):
>>
>> Reverting f5ca8eb6f9 also produces a successful pass [2]
>>
>>   [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
>>   [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
>>   [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert
>>
>>
>> I don't believe we handle mmap specially in the RCar-DU driver, so I
>> wonder if this issue has hit anyone else as well?
>>
>> Any ideas for a repair without a revert? Or do we just need to submit a
>> revert?
> 
> I think we might not be setting the VMA ops and therefore not finalizing
> the BO correctly. Could you please apply the attached (quick-and-dirty)
> patch and try again?

Thanks for the quick response.

I can confirm the quick-and-dirty patch resolves the issue:
  https://paste.ubuntu.com/p/sKDp3dNvwV/

You can add a
Tested-by: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>

if it stays like that, but I suspect there might be a better place to
initialise the ops rather than in the mmap call itself.

Happy to do further testing as required.

--
Regards

Kieran


> Best regards
> Thomas
> 
>>
>> I've yet to fully understand the implications of the patch below.
>>
>> Thanks
>> -- 
>> Regards
>>
>> Kieran
>>
>>
>>
>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>> ---
>>>   drivers/gpu/drm/drm_file.c           |   3 +-
>>>   drivers/gpu/drm/drm_gem_cma_helper.c | 121 +++++++++------------------
>>>   drivers/gpu/drm/pl111/pl111_drv.c    |   2 +-
>>>   drivers/gpu/drm/vc4/vc4_bo.c         |   2 +-
>>>   include/drm/drm_gem_cma_helper.h     |  10 +--
>>>   5 files changed, 44 insertions(+), 94 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>>> index b50380fa80ce..80886d50d0f1 100644
>>> --- a/drivers/gpu/drm/drm_file.c
>>> +++ b/drivers/gpu/drm/drm_file.c
>>> @@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device
>>> *dev)
>>>    * The memory mapping implementation will vary depending on how the
>>> driver
>>>    * manages memory. Legacy drivers will use the deprecated
>>> drm_legacy_mmap()
>>>    * function, modern drivers should use one of the provided
>>> memory-manager
>>> - * specific implementations. For GEM-based drivers this is
>>> drm_gem_mmap(), and
>>> - * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
>>> + * specific implementations. For GEM-based drivers this is
>>> drm_gem_mmap().
>>>    *
>>>    * No other file operations are supported by the DRM userspace API.
>>> Overall the
>>>    * following is an example &file_operations structure::
>>> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c
>>> b/drivers/gpu/drm/drm_gem_cma_helper.c
>>> index 6a4ef335ebc9..7942cf05cd93 100644
>>> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
>>> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
>>> @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs
>>> drm_gem_cma_default_funcs = {
>>>       .print_info = drm_gem_cma_print_info,
>>>       .get_sg_table = drm_gem_cma_get_sg_table,
>>>       .vmap = drm_gem_cma_vmap,
>>> +    .mmap = drm_gem_cma_mmap,
>>>       .vm_ops = &drm_gem_cma_vm_ops,
>>>   };
>>>   @@ -277,62 +278,6 @@ const struct vm_operations_struct
>>> drm_gem_cma_vm_ops = {
>>>   };
>>>   EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
>>>   -static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
>>> -                struct vm_area_struct *vma)
>>> -{
>>> -    int ret;
>>> -
>>> -    /*
>>> -     * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and
>>> set the
>>> -     * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
>>> want to map
>>> -     * the whole buffer.
>>> -     */
>>> -    vma->vm_flags &= ~VM_PFNMAP;
>>> -    vma->vm_pgoff = 0;
>>> -
>>> -    ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
>>> -              cma_obj->paddr, vma->vm_end - vma->vm_start);
>>> -    if (ret)
>>> -        drm_gem_vm_close(vma);
>>> -
>>> -    return ret;
>>> -}
>>> -
>>> -/**
>>> - * drm_gem_cma_mmap - memory-map a CMA GEM object
>>> - * @filp: file object
>>> - * @vma: VMA for the area to be mapped
>>> - *
>>> - * This function implements an augmented version of the GEM DRM file
>>> mmap
>>> - * operation for CMA objects: In addition to the usual GEM VMA setup it
>>> - * immediately faults in the entire object instead of using on-demaind
>>> - * faulting. Drivers which employ the CMA helpers should use this
>>> function
>>> - * as their ->mmap() handler in the DRM device file's file_operations
>>> - * structure.
>>> - *
>>> - * Instead of directly referencing this function, drivers should use
>>> the
>>> - * DEFINE_DRM_GEM_CMA_FOPS().macro.
>>> - *
>>> - * Returns:
>>> - * 0 on success or a negative error code on failure.
>>> - */
>>> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
>>> -{
>>> -    struct drm_gem_cma_object *cma_obj;
>>> -    struct drm_gem_object *gem_obj;
>>> -    int ret;
>>> -
>>> -    ret = drm_gem_mmap(filp, vma);
>>> -    if (ret)
>>> -        return ret;
>>> -
>>> -    gem_obj = vma->vm_private_data;
>>> -    cma_obj = to_drm_gem_cma_obj(gem_obj);
>>> -
>>> -    return drm_gem_cma_mmap_obj(cma_obj, vma);
>>> -}
>>> -EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
>>> -
>>>   #ifndef CONFIG_MMU
>>>   /**
>>>    * drm_gem_cma_get_unmapped_area - propose address for mapping in
>>> noMMU cases
>>> @@ -500,33 +445,6 @@ drm_gem_cma_prime_import_sg_table(struct
>>> drm_device *dev,
>>>   }
>>>   EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
>>>   -/**
>>> - * drm_gem_cma_prime_mmap - memory-map an exported CMA GEM object
>>> - * @obj: GEM object
>>> - * @vma: VMA for the area to be mapped
>>> - *
>>> - * This function maps a buffer imported via DRM PRIME into a userspace
>>> - * process's address space. Drivers that use the CMA helpers should
>>> set this
>>> - * as their &drm_driver.gem_prime_mmap callback.
>>> - *
>>> - * Returns:
>>> - * 0 on success or a negative error code on failure.
>>> - */
>>> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>>> -               struct vm_area_struct *vma)
>>> -{
>>> -    struct drm_gem_cma_object *cma_obj;
>>> -    int ret;
>>> -
>>> -    ret = drm_gem_mmap_obj(obj, obj->size, vma);
>>> -    if (ret < 0)
>>> -        return ret;
>>> -
>>> -    cma_obj = to_drm_gem_cma_obj(obj);
>>> -    return drm_gem_cma_mmap_obj(cma_obj, vma);
>>> -}
>>> -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>>> -
>>>   /**
>>>    * drm_gem_cma_vmap - map a CMA GEM object into the kernel's virtual
>>>    *     address space
>>> @@ -553,6 +471,43 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj,
>>> struct dma_buf_map *map)
>>>   }
>>>   EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
>>>   +/**
>>> + * drm_gem_cma_mmap - memory-map an exported CMA GEM object
>>> + * @obj: GEM object
>>> + * @vma: VMA for the area to be mapped
>>> + *
>>> + * This function maps a buffer into a userspace process's address
>>> space.
>>> + * In addition to the usual GEM VMA setup it immediately faults in
>>> the entire
>>> + * object instead of using on-demand faulting. Drivers that use the CMA
>>> + * helpers should set this as their &drm_gem_object_funcs.mmap
>>> callback.
>>> + *
>>> + * Returns:
>>> + * 0 on success or a negative error code on failure.
>>> + */
>>> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct
>>> vm_area_struct *vma)
>>> +{
>>> +    struct drm_gem_cma_object *cma_obj;
>>> +    int ret;
>>> +
>>> +    /*
>>> +     * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and
>>> set the
>>> +     * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
>>> want to map
>>> +     * the whole buffer.
>>> +     */
>>> +    vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
>>> +    vma->vm_flags &= ~VM_PFNMAP;
>>> +
>>> +    cma_obj = to_drm_gem_cma_obj(obj);
>>> +
>>> +    ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
>>> +              cma_obj->paddr, vma->vm_end - vma->vm_start);
>>> +    if (ret)
>>> +        drm_gem_vm_close(vma);
>>> +
>>> +    return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
>>> +
>>>   /**
>>>    * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another
>>> driver's
>>>    *    scatter/gather table and get the virtual address of the buffer
>>> diff --git a/drivers/gpu/drm/pl111/pl111_drv.c
>>> b/drivers/gpu/drm/pl111/pl111_drv.c
>>> index 40e6708fbbe2..e4dcaef6c143 100644
>>> --- a/drivers/gpu/drm/pl111/pl111_drv.c
>>> +++ b/drivers/gpu/drm/pl111/pl111_drv.c
>>> @@ -228,7 +228,7 @@ static const struct drm_driver pl111_drm_driver = {
>>>       .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>>>       .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>>>       .gem_prime_import_sg_table = pl111_gem_import_sg_table,
>>> -    .gem_prime_mmap = drm_gem_cma_prime_mmap,
>>> +    .gem_prime_mmap = drm_gem_prime_mmap,
>>>     #if defined(CONFIG_DEBUG_FS)
>>>       .debugfs_init = pl111_debugfs_init,
>>> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
>>> index 813e6cb3f9af..dc316cb79e00 100644
>>> --- a/drivers/gpu/drm/vc4/vc4_bo.c
>>> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
>>> @@ -782,7 +782,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj,
>>> struct vm_area_struct *vma)
>>>           return -EINVAL;
>>>       }
>>>   -    return drm_gem_cma_prime_mmap(obj, vma);
>>> +    return drm_gem_prime_mmap(obj, vma);
>>>   }
>>>     int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map
>>> *map)
>>> diff --git a/include/drm/drm_gem_cma_helper.h
>>> b/include/drm/drm_gem_cma_helper.h
>>> index 4680275ab339..0a9711caa3e8 100644
>>> --- a/include/drm/drm_gem_cma_helper.h
>>> +++ b/include/drm/drm_gem_cma_helper.h
>>> @@ -59,7 +59,7 @@ struct drm_gem_cma_object {
>>>           .poll        = drm_poll,\
>>>           .read        = drm_read,\
>>>           .llseek        = noop_llseek,\
>>> -        .mmap        = drm_gem_cma_mmap,\
>>> +        .mmap        = drm_gem_mmap,\
>>>           DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
>>>       }
>>>   @@ -76,9 +76,6 @@ int drm_gem_cma_dumb_create(struct drm_file
>>> *file_priv,
>>>                   struct drm_device *drm,
>>>                   struct drm_mode_create_dumb *args);
>>>   -/* set vm_flags and we can change the VM attribute to other one at
>>> here */
>>> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma);
>>> -
>>>   /* allocate physical memory */
>>>   struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
>>>                             size_t size);
>>> @@ -101,9 +98,8 @@ struct drm_gem_object *
>>>   drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>>>                     struct dma_buf_attachment *attach,
>>>                     struct sg_table *sgt);
>>> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>>> -               struct vm_area_struct *vma);
>>>   int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map
>>> *map);
>>> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct
>>> vm_area_struct *vma);
>>>     /**
>>>    * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
>>> @@ -123,7 +119,7 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj,
>>> struct dma_buf_map *map);
>>>       .prime_handle_to_fd    = drm_gem_prime_handle_to_fd, \
>>>       .prime_fd_to_handle    = drm_gem_prime_fd_to_handle, \
>>>       .gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
>>> -    .gem_prime_mmap        = drm_gem_cma_prime_mmap
>>> +    .gem_prime_mmap        = drm_gem_prime_mmap
>>>     /**
>>>    * DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
>>>
>>
>>
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
@ 2021-01-14 14:34         ` Kieran Bingham
  0 siblings, 0 replies; 19+ messages in thread
From: Kieran Bingham @ 2021-01-14 14:34 UTC (permalink / raw)
  To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
	eric, Linux-Renesas, Laurent Pinchart
  Cc: dri-devel

Hi Thomas,

On 14/01/2021 13:26, Thomas Zimmermann wrote:
> Hi Kieran
> 
> Am 14.01.21 um 13:51 schrieb Kieran Bingham:
>> Hi Thomas,
>>
>> On 23/11/2020 11:56, Thomas Zimmermann wrote:
>>> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
>>> and offset as in the old implementation and immediately maps in the
>>> buffer's memory pages.
>>>
>>> Changing CMA helpers to use the GEM object function allows for the
>>> removal of the special implementations for mmap and gem_prime_mmap
>>> callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
>>> are now used.
>>
>> I've encountered a memory leak regression in our Renesas R-Car DU tests,
>> and git bisection has led me to this patch (as commit f5ca8eb6f9).
>>
>> Running the tests sequentially, while grepping /proc/meminfo for Cma, it
>> is evident that CMA memory is not released, until exhausted and the
>> allocations fail (seen in [0]) shown by the error report:
>>
>>>      self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay,
>>> mode.vdisplay, "XR24"))
>>> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory
>>
>>
>> Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
>> successfully [1] on the commit previous to that (bc2532ab7c2):
>>
>> Reverting f5ca8eb6f9 also produces a successful pass [2]
>>
>>   [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
>>   [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
>>   [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert
>>
>>
>> I don't believe we handle mmap specially in the RCar-DU driver, so I
>> wonder if this issue has hit anyone else as well?
>>
>> Any ideas of a repair without a revert ? Or do we just need to submit a
>> revert?
> 
> I think we might not be setting the VMA ops and therefore not finalize
> the BO correctly. Could you please apply the attched (quick-and-dirty)
> patch and try again?

Thanks for the quick response.

I can confirm the quick-and-dirty patch resolves the issue:
  https://paste.ubuntu.com/p/sKDp3dNvwV/

You can add a
Tested-by: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>

if it stays like that, but I suspect there might be a better place to
initialise the ops rather than in the mmap call itself.

Happy to do further testing as required.

--
Regards

Kieran


> Best regards
> Thomas
> 
>>
>> I've yet to fully understand the implications of the patch below.
>>
>> Thanks
>> -- 
>> Regards
>>
>> Kieran
>>
>>
>>
>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>> ---
>>>   drivers/gpu/drm/drm_file.c           |   3 +-
>>>   drivers/gpu/drm/drm_gem_cma_helper.c | 121 +++++++++------------------
>>>   drivers/gpu/drm/pl111/pl111_drv.c    |   2 +-
>>>   drivers/gpu/drm/vc4/vc4_bo.c         |   2 +-
>>>   include/drm/drm_gem_cma_helper.h     |  10 +--
>>>   5 files changed, 44 insertions(+), 94 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>>> index b50380fa80ce..80886d50d0f1 100644
>>> --- a/drivers/gpu/drm/drm_file.c
>>> +++ b/drivers/gpu/drm/drm_file.c
>>> @@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device
>>> *dev)
>>>    * The memory mapping implementation will vary depending on how the
>>> driver
>>>    * manages memory. Legacy drivers will use the deprecated
>>> drm_legacy_mmap()
>>>    * function, modern drivers should use one of the provided
>>> memory-manager
>>> - * specific implementations. For GEM-based drivers this is
>>> drm_gem_mmap(), and
>>> - * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
>>> + * specific implementations. For GEM-based drivers this is
>>> drm_gem_mmap().
>>>    *
>>>    * No other file operations are supported by the DRM userspace API.
>>> Overall the
>>>    * following is an example &file_operations structure::
>>> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c
>>> b/drivers/gpu/drm/drm_gem_cma_helper.c
>>> index 6a4ef335ebc9..7942cf05cd93 100644
>>> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
>>> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
>>> @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs
>>> drm_gem_cma_default_funcs = {
>>>       .print_info = drm_gem_cma_print_info,
>>>       .get_sg_table = drm_gem_cma_get_sg_table,
>>>       .vmap = drm_gem_cma_vmap,
>>> +    .mmap = drm_gem_cma_mmap,
>>>       .vm_ops = &drm_gem_cma_vm_ops,
>>>   };
>>>   @@ -277,62 +278,6 @@ const struct vm_operations_struct
>>> drm_gem_cma_vm_ops = {
>>>   };
>>>   EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
>>>   -static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
>>> -                struct vm_area_struct *vma)
>>> -{
>>> -    int ret;
>>> -
>>> -    /*
>>> -     * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and
>>> set the
>>> -     * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
>>> want to map
>>> -     * the whole buffer.
>>> -     */
>>> -    vma->vm_flags &= ~VM_PFNMAP;
>>> -    vma->vm_pgoff = 0;
>>> -
>>> -    ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
>>> -              cma_obj->paddr, vma->vm_end - vma->vm_start);
>>> -    if (ret)
>>> -        drm_gem_vm_close(vma);
>>> -
>>> -    return ret;
>>> -}
>>> -
>>> -/**
>>> - * drm_gem_cma_mmap - memory-map a CMA GEM object
>>> - * @filp: file object
>>> - * @vma: VMA for the area to be mapped
>>> - *
>>> - * This function implements an augmented version of the GEM DRM file
>>> mmap
>>> - * operation for CMA objects: In addition to the usual GEM VMA setup it
>>> - * immediately faults in the entire object instead of using on-demaind
>>> - * faulting. Drivers which employ the CMA helpers should use this
>>> function
>>> - * as their ->mmap() handler in the DRM device file's file_operations
>>> - * structure.
>>> - *
>>> - * Instead of directly referencing this function, drivers should use
>>> the
>>> - * DEFINE_DRM_GEM_CMA_FOPS().macro.
>>> - *
>>> - * Returns:
>>> - * 0 on success or a negative error code on failure.
>>> - */
>>> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
>>> -{
>>> -    struct drm_gem_cma_object *cma_obj;
>>> -    struct drm_gem_object *gem_obj;
>>> -    int ret;
>>> -
>>> -    ret = drm_gem_mmap(filp, vma);
>>> -    if (ret)
>>> -        return ret;
>>> -
>>> -    gem_obj = vma->vm_private_data;
>>> -    cma_obj = to_drm_gem_cma_obj(gem_obj);
>>> -
>>> -    return drm_gem_cma_mmap_obj(cma_obj, vma);
>>> -}
>>> -EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
>>> -
>>>   #ifndef CONFIG_MMU
>>>   /**
>>>    * drm_gem_cma_get_unmapped_area - propose address for mapping in
>>> noMMU cases
>>> @@ -500,33 +445,6 @@ drm_gem_cma_prime_import_sg_table(struct
>>> drm_device *dev,
>>>   }
>>>   EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
>>>   -/**
>>> - * drm_gem_cma_prime_mmap - memory-map an exported CMA GEM object
>>> - * @obj: GEM object
>>> - * @vma: VMA for the area to be mapped
>>> - *
>>> - * This function maps a buffer imported via DRM PRIME into a userspace
>>> - * process's address space. Drivers that use the CMA helpers should
>>> set this
>>> - * as their &drm_driver.gem_prime_mmap callback.
>>> - *
>>> - * Returns:
>>> - * 0 on success or a negative error code on failure.
>>> - */
>>> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>>> -               struct vm_area_struct *vma)
>>> -{
>>> -    struct drm_gem_cma_object *cma_obj;
>>> -    int ret;
>>> -
>>> -    ret = drm_gem_mmap_obj(obj, obj->size, vma);
>>> -    if (ret < 0)
>>> -        return ret;
>>> -
>>> -    cma_obj = to_drm_gem_cma_obj(obj);
>>> -    return drm_gem_cma_mmap_obj(cma_obj, vma);
>>> -}
>>> -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>>> -
>>>   /**
>>>    * drm_gem_cma_vmap - map a CMA GEM object into the kernel's virtual
>>>    *     address space
>>> @@ -553,6 +471,43 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj,
>>> struct dma_buf_map *map)
>>>   }
>>>   EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
>>>   +/**
>>> + * drm_gem_cma_mmap - memory-map an exported CMA GEM object
>>> + * @obj: GEM object
>>> + * @vma: VMA for the area to be mapped
>>> + *
>>> + * This function maps a buffer into a userspace process's address
>>> space.
>>> + * In addition to the usual GEM VMA setup it immediately faults in
>>> the entire
>>> + * object instead of using on-demand faulting. Drivers that use the CMA
>>> + * helpers should set this as their &drm_gem_object_funcs.mmap
>>> callback.
>>> + *
>>> + * Returns:
>>> + * 0 on success or a negative error code on failure.
>>> + */
>>> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct
>>> vm_area_struct *vma)
>>> +{
>>> +    struct drm_gem_cma_object *cma_obj;
>>> +    int ret;
>>> +
>>> +    /*
>>> +     * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and
>>> set the
>>> +     * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
>>> want to map
>>> +     * the whole buffer.
>>> +     */
>>> +    vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
>>> +    vma->vm_flags &= ~VM_PFNMAP;
>>> +
>>> +    cma_obj = to_drm_gem_cma_obj(obj);
>>> +
>>> +    ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
>>> +              cma_obj->paddr, vma->vm_end - vma->vm_start);
>>> +    if (ret)
>>> +        drm_gem_vm_close(vma);
>>> +
>>> +    return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
>>> +
>>>   /**
>>>    * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another
>>> driver's
>>>    *    scatter/gather table and get the virtual address of the buffer
>>> diff --git a/drivers/gpu/drm/pl111/pl111_drv.c
>>> b/drivers/gpu/drm/pl111/pl111_drv.c
>>> index 40e6708fbbe2..e4dcaef6c143 100644
>>> --- a/drivers/gpu/drm/pl111/pl111_drv.c
>>> +++ b/drivers/gpu/drm/pl111/pl111_drv.c
>>> @@ -228,7 +228,7 @@ static const struct drm_driver pl111_drm_driver = {
>>>       .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>>>       .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>>>       .gem_prime_import_sg_table = pl111_gem_import_sg_table,
>>> -    .gem_prime_mmap = drm_gem_cma_prime_mmap,
>>> +    .gem_prime_mmap = drm_gem_prime_mmap,
>>>     #if defined(CONFIG_DEBUG_FS)
>>>       .debugfs_init = pl111_debugfs_init,
>>> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
>>> index 813e6cb3f9af..dc316cb79e00 100644
>>> --- a/drivers/gpu/drm/vc4/vc4_bo.c
>>> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
>>> @@ -782,7 +782,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj,
>>> struct vm_area_struct *vma)
>>>           return -EINVAL;
>>>       }
>>>   -    return drm_gem_cma_prime_mmap(obj, vma);
>>> +    return drm_gem_prime_mmap(obj, vma);
>>>   }
>>>     int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map
>>> *map)
>>> diff --git a/include/drm/drm_gem_cma_helper.h
>>> b/include/drm/drm_gem_cma_helper.h
>>> index 4680275ab339..0a9711caa3e8 100644
>>> --- a/include/drm/drm_gem_cma_helper.h
>>> +++ b/include/drm/drm_gem_cma_helper.h
>>> @@ -59,7 +59,7 @@ struct drm_gem_cma_object {
>>>           .poll        = drm_poll,\
>>>           .read        = drm_read,\
>>>           .llseek        = noop_llseek,\
>>> -        .mmap        = drm_gem_cma_mmap,\
>>> +        .mmap        = drm_gem_mmap,\
>>>           DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
>>>       }
>>>   @@ -76,9 +76,6 @@ int drm_gem_cma_dumb_create(struct drm_file
>>> *file_priv,
>>>                   struct drm_device *drm,
>>>                   struct drm_mode_create_dumb *args);
>>>   -/* set vm_flags and we can change the VM attribute to other one at
>>> here */
>>> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma);
>>> -
>>>   /* allocate physical memory */
>>>   struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
>>>                             size_t size);
>>> @@ -101,9 +98,8 @@ struct drm_gem_object *
>>>   drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>>>                     struct dma_buf_attachment *attach,
>>>                     struct sg_table *sgt);
>>> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>>> -               struct vm_area_struct *vma);
>>>   int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map
>>> *map);
>>> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct
>>> vm_area_struct *vma);
>>>     /**
>>>    * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
>>> @@ -123,7 +119,7 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj,
>>> struct dma_buf_map *map);
>>>       .prime_handle_to_fd    = drm_gem_prime_handle_to_fd, \
>>>       .prime_fd_to_handle    = drm_gem_prime_fd_to_handle, \
>>>       .gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
>>> -    .gem_prime_mmap        = drm_gem_cma_prime_mmap
>>> +    .gem_prime_mmap        = drm_gem_prime_mmap
>>>     /**
>>>    * DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
>>>
>>
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
  2021-01-14 14:34         ` Kieran Bingham
@ 2021-01-14 15:15           ` Thomas Zimmermann
  -1 siblings, 0 replies; 19+ messages in thread
From: Thomas Zimmermann @ 2021-01-14 15:15 UTC (permalink / raw)
  To: kieran.bingham+renesas, maarten.lankhorst, mripard, airlied,
	daniel, eric, Linux-Renesas, Laurent Pinchart
  Cc: dri-devel



Hi

Am 14.01.21 um 15:34 schrieb Kieran Bingham:
> Hi Thomas,
> 
> On 14/01/2021 13:26, Thomas Zimmermann wrote:
>> Hi Kieran
>>
>> Am 14.01.21 um 13:51 schrieb Kieran Bingham:
>>> Hi Thomas,
>>>
>>> On 23/11/2020 11:56, Thomas Zimmermann wrote:
>>>> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
>>>> and offset as in the old implementation and immediately maps in the
>>>> buffer's memory pages.
>>>>
>>>> Changing CMA helpers to use the GEM object function allows for the
>>>> removal of the special implementations for mmap and gem_prime_mmap
>>>> callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
>>>> are now used.
>>>
>>> I've encountered a memory leak regression in our Renesas R-Car DU tests,
>>> and git bisection has led me to this patch (as commit f5ca8eb6f9).
>>>
>>> Running the tests sequentially while grepping /proc/meminfo for Cma, it
>>> is evident that CMA memory is not released until it is exhausted and the
>>> allocations fail (seen in [0]), as shown by the error report:
>>>
>>>>       self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay,
>>>> mode.vdisplay, "XR24"))
>>>> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory
>>>
>>>
>>> Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
>>> successfully [1] on the commit previous to that (bc2532ab7c2):
>>>
>>> Reverting f5ca8eb6f9 also produces a successful pass [2]
>>>
>>>    [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
>>>    [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
>>>    [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert
>>>
>>>
>>> I don't believe we handle mmap specially in the RCar-DU driver, so I
>>> wonder if this issue has hit anyone else as well?
>>>
>>> Any ideas for a fix without a revert? Or do we just need to submit a
>>> revert?
>>
>> I think we might not be setting the VMA ops and therefore not finalizing
>> the BO correctly. Could you please apply the attached (quick-and-dirty)
>> patch and try again?
> 
> Thanks for the quick response.
> 
> I can confirm the quick-and-dirty patch resolves the issue:
>    https://paste.ubuntu.com/p/sKDp3dNvwV/
> 
> You can add a
> Tested-by: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>

Great! If you don't mind, I'd also add a Reported-by tag for you.

> 
> if it stays like that, but I suspect there might be a better place to
> initialise the ops rather than in the mmap call itself.

I think that's the fix, basically. We could add such a line as a
fallback somewhere in the DRM core code. But I don't know whether this
really works with all drivers; maybe there's one that requires vm_ops to
be NULL.

Thanks for reporting this issue and testing quickly.

Best regards
Thomas

> 
> Happy to do further testing as required.
> 
> --
> Regards
> 
> Kieran
> 
> 
>> Best regards
>> Thomas
>>
>>>
>>> I've yet to fully understand the implications of the patch below.
>>>
>>> Thanks
>>> -- 
>>> Regards
>>>
>>> Kieran
>>>
>>>
>>>
>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>> ---
>>>>    drivers/gpu/drm/drm_file.c           |   3 +-
>>>>    drivers/gpu/drm/drm_gem_cma_helper.c | 121 +++++++++------------------
>>>>    drivers/gpu/drm/pl111/pl111_drv.c    |   2 +-
>>>>    drivers/gpu/drm/vc4/vc4_bo.c         |   2 +-
>>>>    include/drm/drm_gem_cma_helper.h     |  10 +--
>>>>    5 files changed, 44 insertions(+), 94 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>>>> index b50380fa80ce..80886d50d0f1 100644
>>>> --- a/drivers/gpu/drm/drm_file.c
>>>> +++ b/drivers/gpu/drm/drm_file.c
>>>> @@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device
>>>> *dev)
>>>>     * The memory mapping implementation will vary depending on how the
>>>> driver
>>>>     * manages memory. Legacy drivers will use the deprecated
>>>> drm_legacy_mmap()
>>>>     * function, modern drivers should use one of the provided
>>>> memory-manager
>>>> - * specific implementations. For GEM-based drivers this is
>>>> drm_gem_mmap(), and
>>>> - * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
>>>> + * specific implementations. For GEM-based drivers this is
>>>> drm_gem_mmap().
>>>>     *
>>>>     * No other file operations are supported by the DRM userspace API.
>>>> Overall the
>>>>     * following is an example &file_operations structure::
>>>> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c
>>>> b/drivers/gpu/drm/drm_gem_cma_helper.c
>>>> index 6a4ef335ebc9..7942cf05cd93 100644
>>>> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
>>>> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
>>>> @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs
>>>> drm_gem_cma_default_funcs = {
>>>>        .print_info = drm_gem_cma_print_info,
>>>>        .get_sg_table = drm_gem_cma_get_sg_table,
>>>>        .vmap = drm_gem_cma_vmap,
>>>> +    .mmap = drm_gem_cma_mmap,
>>>>        .vm_ops = &drm_gem_cma_vm_ops,
>>>>    };
>>>>    @@ -277,62 +278,6 @@ const struct vm_operations_struct
>>>> drm_gem_cma_vm_ops = {
>>>>    };
>>>>    EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
>>>>    -static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
>>>> -                struct vm_area_struct *vma)
>>>> -{
>>>> -    int ret;
>>>> -
>>>> -    /*
>>>> -     * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and
>>>> set the
>>>> -     * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
>>>> want to map
>>>> -     * the whole buffer.
>>>> -     */
>>>> -    vma->vm_flags &= ~VM_PFNMAP;
>>>> -    vma->vm_pgoff = 0;
>>>> -
>>>> -    ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
>>>> -              cma_obj->paddr, vma->vm_end - vma->vm_start);
>>>> -    if (ret)
>>>> -        drm_gem_vm_close(vma);
>>>> -
>>>> -    return ret;
>>>> -}
>>>> -
>>>> -/**
>>>> - * drm_gem_cma_mmap - memory-map a CMA GEM object
>>>> - * @filp: file object
>>>> - * @vma: VMA for the area to be mapped
>>>> - *
>>>> - * This function implements an augmented version of the GEM DRM file
>>>> mmap
>>>> - * operation for CMA objects: In addition to the usual GEM VMA setup it
>>>> - * immediately faults in the entire object instead of using on-demaind
>>>> - * faulting. Drivers which employ the CMA helpers should use this
>>>> function
>>>> - * as their ->mmap() handler in the DRM device file's file_operations
>>>> - * structure.
>>>> - *
>>>> - * Instead of directly referencing this function, drivers should use
>>>> the
>>>> - * DEFINE_DRM_GEM_CMA_FOPS().macro.
>>>> - *
>>>> - * Returns:
>>>> - * 0 on success or a negative error code on failure.
>>>> - */
>>>> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
>>>> -{
>>>> -    struct drm_gem_cma_object *cma_obj;
>>>> -    struct drm_gem_object *gem_obj;
>>>> -    int ret;
>>>> -
>>>> -    ret = drm_gem_mmap(filp, vma);
>>>> -    if (ret)
>>>> -        return ret;
>>>> -
>>>> -    gem_obj = vma->vm_private_data;
>>>> -    cma_obj = to_drm_gem_cma_obj(gem_obj);
>>>> -
>>>> -    return drm_gem_cma_mmap_obj(cma_obj, vma);
>>>> -}
>>>> -EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
>>>> -
>>>>    #ifndef CONFIG_MMU
>>>>    /**
>>>>     * drm_gem_cma_get_unmapped_area - propose address for mapping in
>>>> noMMU cases
>>>> @@ -500,33 +445,6 @@ drm_gem_cma_prime_import_sg_table(struct
>>>> drm_device *dev,
>>>>    }
>>>>    EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
>>>>    -/**
>>>> - * drm_gem_cma_prime_mmap - memory-map an exported CMA GEM object
>>>> - * @obj: GEM object
>>>> - * @vma: VMA for the area to be mapped
>>>> - *
>>>> - * This function maps a buffer imported via DRM PRIME into a userspace
>>>> - * process's address space. Drivers that use the CMA helpers should
>>>> set this
>>>> - * as their &drm_driver.gem_prime_mmap callback.
>>>> - *
>>>> - * Returns:
>>>> - * 0 on success or a negative error code on failure.
>>>> - */
>>>> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>>>> -               struct vm_area_struct *vma)
>>>> -{
>>>> -    struct drm_gem_cma_object *cma_obj;
>>>> -    int ret;
>>>> -
>>>> -    ret = drm_gem_mmap_obj(obj, obj->size, vma);
>>>> -    if (ret < 0)
>>>> -        return ret;
>>>> -
>>>> -    cma_obj = to_drm_gem_cma_obj(obj);
>>>> -    return drm_gem_cma_mmap_obj(cma_obj, vma);
>>>> -}
>>>> -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>>>> -
>>>>    /**
>>>>     * drm_gem_cma_vmap - map a CMA GEM object into the kernel's virtual
>>>>     *     address space
>>>> @@ -553,6 +471,43 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj,
>>>> struct dma_buf_map *map)
>>>>    }
>>>>    EXPORT_SYMBOL_GPL(drm_gem_cma_vmap);
>>>>    +/**
>>>> + * drm_gem_cma_mmap - memory-map an exported CMA GEM object
>>>> + * @obj: GEM object
>>>> + * @vma: VMA for the area to be mapped
>>>> + *
>>>> + * This function maps a buffer into a userspace process's address
>>>> space.
>>>> + * In addition to the usual GEM VMA setup it immediately faults in
>>>> the entire
>>>> + * object instead of using on-demand faulting. Drivers that use the CMA
>>>> + * helpers should set this as their &drm_gem_object_funcs.mmap
>>>> callback.
>>>> + *
>>>> + * Returns:
>>>> + * 0 on success or a negative error code on failure.
>>>> + */
>>>> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct
>>>> vm_area_struct *vma)
>>>> +{
>>>> +    struct drm_gem_cma_object *cma_obj;
>>>> +    int ret;
>>>> +
>>>> +    /*
>>>> +     * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and
>>>> set the
>>>> +     * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
>>>> want to map
>>>> +     * the whole buffer.
>>>> +     */
>>>> +    vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
>>>> +    vma->vm_flags &= ~VM_PFNMAP;
>>>> +
>>>> +    cma_obj = to_drm_gem_cma_obj(obj);
>>>> +
>>>> +    ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
>>>> +              cma_obj->paddr, vma->vm_end - vma->vm_start);
>>>> +    if (ret)
>>>> +        drm_gem_vm_close(vma);
>>>> +
>>>> +    return ret;
>>>> +}
>>>> +EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
>>>> +
>>>>    /**
>>>>     * drm_gem_cma_prime_import_sg_table_vmap - PRIME import another
>>>> driver's
>>>>     *    scatter/gather table and get the virtual address of the buffer
>>>> diff --git a/drivers/gpu/drm/pl111/pl111_drv.c
>>>> b/drivers/gpu/drm/pl111/pl111_drv.c
>>>> index 40e6708fbbe2..e4dcaef6c143 100644
>>>> --- a/drivers/gpu/drm/pl111/pl111_drv.c
>>>> +++ b/drivers/gpu/drm/pl111/pl111_drv.c
>>>> @@ -228,7 +228,7 @@ static const struct drm_driver pl111_drm_driver = {
>>>>        .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>>>>        .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>>>>        .gem_prime_import_sg_table = pl111_gem_import_sg_table,
>>>> -    .gem_prime_mmap = drm_gem_cma_prime_mmap,
>>>> +    .gem_prime_mmap = drm_gem_prime_mmap,
>>>>      #if defined(CONFIG_DEBUG_FS)
>>>>        .debugfs_init = pl111_debugfs_init,
>>>> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
>>>> index 813e6cb3f9af..dc316cb79e00 100644
>>>> --- a/drivers/gpu/drm/vc4/vc4_bo.c
>>>> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
>>>> @@ -782,7 +782,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj,
>>>> struct vm_area_struct *vma)
>>>>            return -EINVAL;
>>>>        }
>>>>    -    return drm_gem_cma_prime_mmap(obj, vma);
>>>> +    return drm_gem_prime_mmap(obj, vma);
>>>>    }
>>>>      int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map
>>>> *map)
>>>> diff --git a/include/drm/drm_gem_cma_helper.h
>>>> b/include/drm/drm_gem_cma_helper.h
>>>> index 4680275ab339..0a9711caa3e8 100644
>>>> --- a/include/drm/drm_gem_cma_helper.h
>>>> +++ b/include/drm/drm_gem_cma_helper.h
>>>> @@ -59,7 +59,7 @@ struct drm_gem_cma_object {
>>>>            .poll        = drm_poll,\
>>>>            .read        = drm_read,\
>>>>            .llseek        = noop_llseek,\
>>>> -        .mmap        = drm_gem_cma_mmap,\
>>>> +        .mmap        = drm_gem_mmap,\
>>>>            DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
>>>>        }
>>>>    @@ -76,9 +76,6 @@ int drm_gem_cma_dumb_create(struct drm_file
>>>> *file_priv,
>>>>                    struct drm_device *drm,
>>>>                    struct drm_mode_create_dumb *args);
>>>>    -/* set vm_flags and we can change the VM attribute to other one at
>>>> here */
>>>> -int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma);
>>>> -
>>>>    /* allocate physical memory */
>>>>    struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
>>>>                              size_t size);
>>>> @@ -101,9 +98,8 @@ struct drm_gem_object *
>>>>    drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>>>>                      struct dma_buf_attachment *attach,
>>>>                      struct sg_table *sgt);
>>>> -int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>>>> -               struct vm_area_struct *vma);
>>>>    int drm_gem_cma_vmap(struct drm_gem_object *obj, struct dma_buf_map
>>>> *map);
>>>> +int drm_gem_cma_mmap(struct drm_gem_object *obj, struct
>>>> vm_area_struct *vma);
>>>>      /**
>>>>     * DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
>>>> @@ -123,7 +119,7 @@ int drm_gem_cma_vmap(struct drm_gem_object *obj,
>>>> struct dma_buf_map *map);
>>>>        .prime_handle_to_fd    = drm_gem_prime_handle_to_fd, \
>>>>        .prime_fd_to_handle    = drm_gem_prime_fd_to_handle, \
>>>>        .gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
>>>> -    .gem_prime_mmap        = drm_gem_cma_prime_mmap
>>>> +    .gem_prime_mmap        = drm_gem_prime_mmap
>>>>      /**
>>>>     * DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
>>>>
>>>
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



^ permalink raw reply	[flat|nested] 19+ messages in thread


* Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
  2021-01-14 15:15           ` Thomas Zimmermann
@ 2021-01-14 16:28             ` Kieran Bingham
  -1 siblings, 0 replies; 19+ messages in thread
From: Kieran Bingham @ 2021-01-14 16:28 UTC (permalink / raw)
  To: Thomas Zimmermann, maarten.lankhorst, mripard, airlied, daniel,
	eric, Linux-Renesas, Laurent Pinchart
  Cc: dri-devel

Hi Thomas,

On 14/01/2021 15:15, Thomas Zimmermann wrote:
>>>> On 23/11/2020 11:56, Thomas Zimmermann wrote:
>>>>> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
>>>>> and offset as in the old implementation and immediately maps in the
>>>>> buffer's memory pages.
>>>>>
>>>>> Changing CMA helpers to use the GEM object function allows for the
>>>>> removal of the special implementations for mmap and gem_prime_mmap
>>>>> callbacks. The regular functions drm_gem_mmap() and
>>>>> drm_gem_prime_mmap()
>>>>> are now used.
>>>>
>>>> I've encountered a memory leak regression in our Renesas R-Car DU
>>>> tests,
>>>> and git bisection has led me to this patch (as commit f5ca8eb6f9).
>>>>
>>>> Running the tests sequentially, while grepping /proc/meminfo for
>>>> Cma, it
>>>> is evident that CMA memory is not released, until exhausted and the
>>>> allocations fail (seen in [0]) shown by the error report:
>>>>
>>>>>       self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay,
>>>>> mode.vdisplay, "XR24"))
>>>>> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory
>>>>
>>>>
>>>> Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
>>>> successfully [1] on the commit previous to that (bc2532ab7c2):
>>>>
>>>> Reverting f5ca8eb6f9 also produces a successful pass [2]
>>>>
>>>>    [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
>>>>    [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
>>>>    [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert
>>>>
>>>>
>>>> I don't believe we handle mmap specially in the RCar-DU driver, so I
>>>> wonder if this issue has hit anyone else as well?
>>>>
>>>> Any ideas for a fix short of a revert? Or do we just need to submit
>>>> a revert?
>>>
>>> I think we might not be setting the VMA ops and therefore not finalizing
>>> the BO correctly. Could you please apply the attached (quick-and-dirty)
>>> patch and try again?
>>
>> Thanks for the quick response.
>>
>> I can confirm the quick-and-dirty patch resolves the issue:
>>    https://paste.ubuntu.com/p/sKDp3dNvwV/
>>
>> You can add a
>> Tested-by: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
> 
> Great! If you don't mind, I'd also add you in the Reported-by tag.

Certainly!

>>
>> if it stays like that, but I suspect there might be a better place to
>> initialise the ops rather than in the mmap call itself.
> 
> I think that's the fix, basically. We could put such a line as a
> fall-back somewhere into the DRM core code. But I don't know if this
> really works with all drivers. Maybe there's one that requires vm_ops to
> be NULL.

Ok, that's reaching beyond code I've explored, so I'll leave it to you.


> Thanks for reporting this issue and testing quickly.

Thanks for fixing so quickly :-)

Regards

Kieran

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
  2021-01-14 13:26       ` Thomas Zimmermann
@ 2021-01-14 17:19         ` Daniel Vetter
  -1 siblings, 0 replies; 19+ messages in thread
From: Daniel Vetter @ 2021-01-14 17:19 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: kieran.bingham+renesas, maarten.lankhorst, mripard, airlied,
	daniel, eric, Linux-Renesas, Laurent Pinchart, dri-devel

On Thu, Jan 14, 2021 at 02:26:41PM +0100, Thomas Zimmermann wrote:
> From d0583fe22cd0cd29749ff679e46e13b58de325cb Mon Sep 17 00:00:00 2001
> From: Thomas Zimmermann <tzimmermann@suse.de>
> Date: Thu, 14 Jan 2021 14:21:51 +0100
> Subject: [PATCH] drm/cma: Set vma ops in mmap function
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_gem_cma_helper.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 7942cf05cd93..0bd192736169 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -489,6 +489,8 @@ int drm_gem_cma_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>  	struct drm_gem_cma_object *cma_obj;
>  	int ret;
>  
> +	vma->vm_ops = obj->funcs->vm_ops;

I think this should be done in the core, otherwise we have tons of refcount
leaks. I think this was an oversight when we did that refactoring.

Also, drivers can easily overwrite this one if they really have to, but
leaving it unassigned is a clear bug.
-Daniel

> +
>  	/*
>  	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
>  	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> -- 
> 2.29.2
> 


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions
  2021-01-14 16:28             ` Kieran Bingham
@ 2021-01-15  8:15               ` Thomas Zimmermann
  -1 siblings, 0 replies; 19+ messages in thread
From: Thomas Zimmermann @ 2021-01-15  8:15 UTC (permalink / raw)
  To: kieran.bingham+renesas, maarten.lankhorst, mripard, airlied,
	daniel, eric, Linux-Renesas, Laurent Pinchart
  Cc: dri-devel


Hi

Am 14.01.21 um 17:28 schrieb Kieran Bingham:
> Hi Thomas,
> 
> On 14/01/2021 15:15, Thomas Zimmermann wrote:
>>>>> On 23/11/2020 11:56, Thomas Zimmermann wrote:
>>>>>> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
>>>>>> and offset as in the old implementation and immediately maps in the
>>>>>> buffer's memory pages.
>>>>>>
>>>>>> Changing CMA helpers to use the GEM object function allows for the
>>>>>> removal of the special implementations for mmap and gem_prime_mmap
>>>>>> callbacks. The regular functions drm_gem_mmap() and
>>>>>> drm_gem_prime_mmap()
>>>>>> are now used.
>>>>>
>>>>> I've encountered a memory leak regression in our Renesas R-Car DU
>>>>> tests,
>>>>> and git bisection has led me to this patch (as commit f5ca8eb6f9).
>>>>>
>>>>> Running the tests sequentially, while grepping /proc/meminfo for
>>>>> Cma, it
>>>>> is evident that CMA memory is not released, until exhausted and the
>>>>> allocations fail (seen in [0]) shown by the error report:
>>>>>
>>>>>>        self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay,
>>>>>> mode.vdisplay, "XR24"))
>>>>>> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory
>>>>>
>>>>>
>>>>> Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
>>>>> successfully [1] on the commit previous to that (bc2532ab7c2):
>>>>>
>>>>> Reverting f5ca8eb6f9 also produces a successful pass [2]
>>>>>
>>>>>     [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
>>>>>     [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
>>>>>     [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert
>>>>>
>>>>>
>>>>> I don't believe we handle mmap specially in the RCar-DU driver, so I
>>>>> wonder if this issue has hit anyone else as well?
>>>>>
>>>>> Any ideas for a fix short of a revert? Or do we just need to submit
>>>>> a revert?
>>>>
>>>> I think we might not be setting the VMA ops and therefore not finalizing
>>>> the BO correctly. Could you please apply the attached (quick-and-dirty)
>>>> patch and try again?
>>>
>>> Thanks for the quick response.
>>>
>>> I can confirm the quick-and-dirty patch resolves the issue:
>>>     https://paste.ubuntu.com/p/sKDp3dNvwV/
>>>
>>> You can add a
>>> Tested-by: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
>>
>> Great! If you don't mind, I'd also add you in the Reported-by tag.
> 
> Certainly!
> 
>>>
>>> if it stays like that, but I suspect there might be a better place to
>>> initialise the ops rather than in the mmap call itself.
>>
>> I think that's the fix, basically. We could put such a line as a
>> fall-back somewhere into the DRM core code. But I don't know if this
>> really works with all drivers. Maybe there's one that requires vm_ops to
>> be NULL.
> 
> Ok, that's reaching beyond code I've explored, so I'll leave it to you.

Daniel asked for a fix in the DRM core, so I'll go with the alternative 
approach. If you have the time, I'd appreciate another test run.

Best regards
Thomas

> 
> 
>> Thanks for reporting this issue and testing quickly.
> 
> Thanks for fixing so quickly :-)
> 
> Regards
> 
> Kieran
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2021-01-15  8:16 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-23 11:56 [PATCH 0/2] drm/cma-helper: Move mmap to object functions Thomas Zimmermann
2020-11-23 11:56 ` [PATCH 1/2] drm/cma-helper: Remove prime infix from GEM " Thomas Zimmermann
2020-11-23 11:56 ` [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA " Thomas Zimmermann
2021-01-14 12:51   ` Kieran Bingham
2021-01-14 12:51     ` Kieran Bingham
2021-01-14 13:26     ` Thomas Zimmermann
2021-01-14 13:26       ` Thomas Zimmermann
2021-01-14 14:34       ` Kieran Bingham
2021-01-14 14:34         ` Kieran Bingham
2021-01-14 15:15         ` Thomas Zimmermann
2021-01-14 15:15           ` Thomas Zimmermann
2021-01-14 16:28           ` Kieran Bingham
2021-01-14 16:28             ` Kieran Bingham
2021-01-15  8:15             ` Thomas Zimmermann
2021-01-15  8:15               ` Thomas Zimmermann
2021-01-14 17:19       ` Daniel Vetter
2021-01-14 17:19         ` Daniel Vetter
2020-11-23 12:32 ` [PATCH 0/2] drm/cma-helper: Move mmap to " Maxime Ripard
2020-11-23 18:58   ` Thomas Zimmermann
