* [PATCH v2 0/5] GEM CMA DMA-BUF support
@ 2013-06-04  2:20 Laurent Pinchart
  2013-06-04  2:20 ` [PATCH v2 1/5] drm/gem: Split drm_gem_mmap() into object search and object mapping Laurent Pinchart
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Laurent Pinchart @ 2013-06-04  2:20 UTC (permalink / raw)
  To: dri-devel

Hello,

Here's the second version of the GEM CMA DMA-BUF support patches.

The code is based on the Exynos DRM DMA-BUF implementation. The exporter role
has been successfully tested with the Renesas R-Car DU driver.
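
For completeness, hooking the helpers up in a driver should boil down to
pointing the PRIME callbacks in struct drm_driver at them. A rough sketch
(foo_driver is just a placeholder, not part of this series):

static struct drm_driver foo_driver = {
	.driver_features	= DRIVER_GEM | DRIVER_PRIME,
	.gem_free_object	= drm_gem_cma_free_object,
	.gem_vm_ops		= &drm_gem_cma_vm_ops,
	/* Generic PRIME ioctl plumbing provided by the DRM core. */
	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd,
	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle,
	/* Export/import implementations added in patch 5/5. */
	.gem_prime_export	= drm_gem_cma_dmabuf_export,
	.gem_prime_import	= drm_gem_cma_dmabuf_import,
	/* ... */
};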

Dave, is there a chance this could make it to v3.11?

Changes compared to v1:

- Added a missing sg_free_table() in drm_gem_cma_dmabuf_map()
- Implemented drm_gem_cma_dmabuf_mmap()

Laurent Pinchart (5):
  drm/gem: Split drm_gem_mmap() into object search and object mapping
  drm/omap: Use drm_gem_mmap_obj() to implement dma-buf mmap
  drm: GEM CMA: Split object creation into object alloc and DMA memory
    alloc
  drm: GEM CMA: Split object mapping into GEM mapping and CMA mapping
  drm: GEM CMA: Add DRM PRIME support

 drivers/gpu/drm/drm_gem.c                 |  83 +++---
 drivers/gpu/drm/drm_gem_cma_helper.c      | 408 +++++++++++++++++++++++++++---
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c |  32 +--
 include/drm/drmP.h                        |   2 +
 include/drm/drm_gem_cma_helper.h          |   9 +
 5 files changed, 438 insertions(+), 96 deletions(-)

-- 
Regards,

Laurent Pinchart


* [PATCH v2 1/5] drm/gem: Split drm_gem_mmap() into object search and object mapping
  2013-06-04  2:20 [PATCH v2 0/5] GEM CMA DMA-BUF support Laurent Pinchart
@ 2013-06-04  2:20 ` Laurent Pinchart
  2013-06-04 11:33   ` Rob Clark
  2013-06-04  2:20 ` [PATCH v2 2/5] drm/omap: Use drm_gem_mmap_obj() to implement dma-buf mmap Laurent Pinchart
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Laurent Pinchart @ 2013-06-04  2:20 UTC (permalink / raw)
  To: dri-devel

The drm_gem_mmap() function first finds the GEM object to be mapped
based on the fake mmap offset and then maps the object. Split the object
mapping code into a standalone drm_gem_mmap_obj() function that can be
used to implement dma-buf mmap() operations.
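
As an illustration, a dma-buf mmap() handler built on top of the new
function can shrink to little more than the following sketch (foo_ is a
hypothetical driver prefix, and buffer->priv is assumed to hold the GEM
object, as in omapdrm):

static int foo_gem_dmabuf_mmap(struct dma_buf *buffer,
			       struct vm_area_struct *vma)
{
	struct drm_gem_object *obj = buffer->priv;

	/* The helper performs all VMA setup and takes the reference. */
	return drm_gem_mmap_obj(obj, obj->size, vma);
}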

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/gpu/drm/drm_gem.c | 83 +++++++++++++++++++++++++++++------------------
 include/drm/drmP.h        |  2 ++
 2 files changed, 54 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index cf919e3..4321713 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -644,6 +644,55 @@ void drm_gem_vm_close(struct vm_area_struct *vma)
 }
 EXPORT_SYMBOL(drm_gem_vm_close);
 
+/**
+ * drm_gem_mmap_obj - memory map a GEM object
+ * @obj: the GEM object to map
+ * @obj_size: the object size to be mapped, in bytes
+ * @vma: VMA for the area to be mapped
+ *
+ * Set up the VMA to prepare mapping of the GEM object using the gem_vm_ops
+ * provided by the driver. Depending on their requirements, drivers can either
+ * provide a fault handler in their gem_vm_ops (in which case any accesses to
+ * the object will be trapped, to perform migration, GTT binding, surface
+ * register allocation, or performance monitoring), or mmap the buffer memory
+ * synchronously after calling drm_gem_mmap_obj.
+ *
+ * This function is mainly intended to implement the DMABUF mmap operation, when
+ * the GEM object is not looked up based on its fake offset. To implement the
+ * DRM mmap operation, drivers should use the drm_gem_mmap() function.
+ *
+ * Return 0 on success, or -EINVAL if the object size is smaller than the VMA
+ * size or if no gem_vm_ops are provided.
+ */
+int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
+		     struct vm_area_struct *vma)
+{
+	struct drm_device *dev = obj->dev;
+
+	/* Check for valid size. */
+	if (obj_size < vma->vm_end - vma->vm_start)
+		return -EINVAL;
+
+	if (!dev->driver->gem_vm_ops)
+		return -EINVAL;
+
+	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	vma->vm_ops = dev->driver->gem_vm_ops;
+	vma->vm_private_data = obj;
+	vma->vm_page_prot =  pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+
+	/* Take a ref for this mapping of the object, so that the fault
+	 * handler can dereference the mmap offset's pointer to the object.
+	 * This reference is cleaned up by the corresponding vm_close
+	 * (which should happen whether the vma was created by this call, or
+	 * by a vm_open due to mremap or partial unmap or whatever).
+	 */
+	drm_gem_object_reference(obj);
+
+	drm_vm_open_locked(dev, vma);
+	return 0;
+}
+EXPORT_SYMBOL(drm_gem_mmap_obj);
 
 /**
  * drm_gem_mmap - memory map routine for GEM objects
@@ -653,11 +702,9 @@ EXPORT_SYMBOL(drm_gem_vm_close);
  * If a driver supports GEM object mapping, mmap calls on the DRM file
  * descriptor will end up here.
  *
- * If we find the object based on the offset passed in (vma->vm_pgoff will
+ * Look up the GEM object based on the offset passed in (vma->vm_pgoff will
  * contain the fake offset we created when the GTT map ioctl was called on
- * the object), we set up the driver fault handler so that any accesses
- * to the object can be trapped, to perform migration, GTT binding, surface
- * register allocation, or performance monitoring.
+ * the object) and map it with a call to drm_gem_mmap_obj().
  */
 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 {
@@ -665,7 +712,6 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	struct drm_device *dev = priv->minor->dev;
 	struct drm_gem_mm *mm = dev->mm_private;
 	struct drm_local_map *map = NULL;
-	struct drm_gem_object *obj;
 	struct drm_hash_item *hash;
 	int ret = 0;
 
@@ -686,32 +732,7 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 		goto out_unlock;
 	}
 
-	/* Check for valid size. */
-	if (map->size < vma->vm_end - vma->vm_start) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-
-	obj = map->handle;
-	if (!obj->dev->driver->gem_vm_ops) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
-	vma->vm_ops = obj->dev->driver->gem_vm_ops;
-	vma->vm_private_data = map->handle;
-	vma->vm_page_prot =  pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
-
-	/* Take a ref for this mapping of the object, so that the fault
-	 * handler can dereference the mmap offset's pointer to the object.
-	 * This reference is cleaned up by the corresponding vm_close
-	 * (which should happen whether the vma was created by this call, or
-	 * by a vm_open due to mremap or partial unmap or whatever).
-	 */
-	drm_gem_object_reference(obj);
-
-	drm_vm_open_locked(dev, vma);
+	ret = drm_gem_mmap_obj(map->handle, map->size, vma);
 
 out_unlock:
 	mutex_unlock(&dev->struct_mutex);
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index b06f5af..79fb4c7 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1616,6 +1616,8 @@ int drm_gem_private_object_init(struct drm_device *dev,
 void drm_gem_object_handle_free(struct drm_gem_object *obj);
 void drm_gem_vm_open(struct vm_area_struct *vma);
 void drm_gem_vm_close(struct vm_area_struct *vma);
+int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
+		     struct vm_area_struct *vma);
 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
 #include <drm/drm_global.h>
-- 
1.8.1.5


* [PATCH v2 2/5] drm/omap: Use drm_gem_mmap_obj() to implement dma-buf mmap
  2013-06-04  2:20 [PATCH v2 0/5] GEM CMA DMA-BUF support Laurent Pinchart
  2013-06-04  2:20 ` [PATCH v2 1/5] drm/gem: Split drm_gem_mmap() into object search and object mapping Laurent Pinchart
@ 2013-06-04  2:20 ` Laurent Pinchart
  2013-06-04 11:33   ` Rob Clark
  2013-06-04  2:20 ` [PATCH v2 3/5] drm: GEM CMA: Split object creation into object alloc and DMA memory alloc Laurent Pinchart
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Laurent Pinchart @ 2013-06-04  2:20 UTC (permalink / raw)
  To: dri-devel

The dma-buf mmap code was copied from the GEM mmap implementation.
Replace it with the new drm_gem_mmap_obj() function.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 32 +++----------------------------
 1 file changed, 3 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
index be7cd97..3256693 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
@@ -136,10 +136,6 @@ static void omap_gem_dmabuf_kunmap(struct dma_buf *buffer,
 	kunmap(pages[page_num]);
 }
 
-/*
- * TODO maybe we can split up drm_gem_mmap to avoid duplicating
- * some here.. or at least have a drm_dmabuf_mmap helper.
- */
 static int omap_gem_dmabuf_mmap(struct dma_buf *buffer,
 		struct vm_area_struct *vma)
 {
@@ -149,31 +145,9 @@ static int omap_gem_dmabuf_mmap(struct dma_buf *buffer,
 	if (WARN_ON(!obj->filp))
 		return -EINVAL;
 
-	/* Check for valid size. */
-	if (omap_gem_mmap_size(obj) < vma->vm_end - vma->vm_start) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-
-	if (!obj->dev->driver->gem_vm_ops) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-
-	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
-	vma->vm_ops = obj->dev->driver->gem_vm_ops;
-	vma->vm_private_data = obj;
-	vma->vm_page_prot =  pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
-
-	/* Take a ref for this mapping of the object, so that the fault
-	 * handler can dereference the mmap offset's pointer to the object.
-	 * This reference is cleaned up by the corresponding vm_close
-	 * (which should happen whether the vma was created by this call, or
-	 * by a vm_open due to mremap or partial unmap or whatever).
-	 */
-	vma->vm_ops->open(vma);
-
-out_unlock:
+	ret = drm_gem_mmap_obj(obj, omap_gem_mmap_size(obj), vma);
+	if (ret < 0)
+		return ret;
 
 	return omap_gem_mmap_obj(obj, vma);
 }
-- 
1.8.1.5


* [PATCH v2 3/5] drm: GEM CMA: Split object creation into object alloc and DMA memory alloc
  2013-06-04  2:20 [PATCH v2 0/5] GEM CMA DMA-BUF support Laurent Pinchart
  2013-06-04  2:20 ` [PATCH v2 1/5] drm/gem: Split drm_gem_mmap() into object search and object mapping Laurent Pinchart
  2013-06-04  2:20 ` [PATCH v2 2/5] drm/omap: Use drm_gem_mmap_obj() to implement dma-buf mmap Laurent Pinchart
@ 2013-06-04  2:20 ` Laurent Pinchart
  2013-06-04 20:07   ` Rob Clark
  2013-06-04  2:20 ` [PATCH v2 4/5] drm: GEM CMA: Split object mapping into GEM mapping and CMA mapping Laurent Pinchart
  2013-06-04  2:20 ` [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support Laurent Pinchart
  4 siblings, 1 reply; 17+ messages in thread
From: Laurent Pinchart @ 2013-06-04  2:20 UTC (permalink / raw)
  To: dri-devel

This allows creating a GEM CMA object without an associated DMA memory
buffer, and will be used to implement DRM PRIME support.
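
The PRIME import path added in patch 5/5 then only needs to create the GEM
wrapper and attach the externally allocated memory to it, roughly along
these lines:

	/* Create the GEM object; the memory comes from the dmabuf. */
	cma_obj = __drm_gem_cma_create(drm, dma_buf->size);
	...
	cma_obj->base.import_attach = attach;
	cma_obj->paddr = sg_dma_address(sgt->sgl);
	cma_obj->sgt = sgt;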

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 83 +++++++++++++++++++++---------------
 1 file changed, 48 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 0a7e011..8cce330 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -32,62 +32,73 @@ static unsigned int get_gem_mmap_offset(struct drm_gem_object *obj)
 	return (unsigned int)obj->map_list.hash.key << PAGE_SHIFT;
 }
 
-static void drm_gem_cma_buf_destroy(struct drm_device *drm,
-		struct drm_gem_cma_object *cma_obj)
-{
-	dma_free_writecombine(drm->dev, cma_obj->base.size, cma_obj->vaddr,
-			cma_obj->paddr);
-}
-
 /*
- * drm_gem_cma_create - allocate an object with the given size
+ * __drm_gem_cma_create - Create a GEM CMA object without allocating memory
+ * @drm: The drm device
+ * @size: The GEM object size
  *
- * returns a struct drm_gem_cma_object* on success or ERR_PTR values
- * on failure.
+ * This function creates and initializes a GEM CMA object of the given size, but
+ * doesn't allocate any memory to back the object.
+ *
+ * Return a struct drm_gem_cma_object* on success or ERR_PTR values on failure.
  */
-struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
-		unsigned int size)
+static struct drm_gem_cma_object *
+__drm_gem_cma_create(struct drm_device *drm, unsigned int size)
 {
 	struct drm_gem_cma_object *cma_obj;
 	struct drm_gem_object *gem_obj;
 	int ret;
 
-	size = round_up(size, PAGE_SIZE);
-
 	cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
 	if (!cma_obj)
 		return ERR_PTR(-ENOMEM);
 
-	cma_obj->vaddr = dma_alloc_writecombine(drm->dev, size,
-			&cma_obj->paddr, GFP_KERNEL | __GFP_NOWARN);
-	if (!cma_obj->vaddr) {
-		dev_err(drm->dev, "failed to allocate buffer with size %d\n", size);
-		ret = -ENOMEM;
-		goto err_dma_alloc;
-	}
-
 	gem_obj = &cma_obj->base;
 
 	ret = drm_gem_object_init(drm, gem_obj, size);
 	if (ret)
-		goto err_obj_init;
+		goto error;
 
 	ret = drm_gem_create_mmap_offset(gem_obj);
-	if (ret)
-		goto err_create_mmap_offset;
+	if (ret) {
+		drm_gem_object_release(gem_obj);
+		goto error;
+	}
 
 	return cma_obj;
 
-err_create_mmap_offset:
-	drm_gem_object_release(gem_obj);
+error:
+	kfree(cma_obj);
+	return ERR_PTR(ret);
+}
 
-err_obj_init:
-	drm_gem_cma_buf_destroy(drm, cma_obj);
+/*
+ * drm_gem_cma_create - allocate an object with the given size
+ *
+ * returns a struct drm_gem_cma_object* on success or ERR_PTR values
+ * on failure.
+ */
+struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
+		unsigned int size)
+{
+	struct drm_gem_cma_object *cma_obj;
 
-err_dma_alloc:
-	kfree(cma_obj);
+	size = round_up(size, PAGE_SIZE);
 
-	return ERR_PTR(ret);
+	cma_obj = __drm_gem_cma_create(drm, size);
+	if (IS_ERR(cma_obj))
+		return cma_obj;
+
+	cma_obj->vaddr = dma_alloc_writecombine(drm->dev, size,
+			&cma_obj->paddr, GFP_KERNEL | __GFP_NOWARN);
+	if (!cma_obj->vaddr) {
+		dev_err(drm->dev, "failed to allocate buffer with size %d\n",
+			size);
+		drm_gem_cma_free_object(&cma_obj->base);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	return cma_obj;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_create);
 
@@ -143,11 +154,13 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
 	if (gem_obj->map_list.map)
 		drm_gem_free_mmap_offset(gem_obj);
 
-	drm_gem_object_release(gem_obj);
-
 	cma_obj = to_drm_gem_cma_obj(gem_obj);
 
-	drm_gem_cma_buf_destroy(gem_obj->dev, cma_obj);
+	if (cma_obj->vaddr)
+		dma_free_writecombine(gem_obj->dev->dev, cma_obj->base.size,
+				      cma_obj->vaddr, cma_obj->paddr);
+
+	drm_gem_object_release(gem_obj);
 
 	kfree(cma_obj);
 }
-- 
1.8.1.5


* [PATCH v2 4/5] drm: GEM CMA: Split object mapping into GEM mapping and CMA mapping
  2013-06-04  2:20 [PATCH v2 0/5] GEM CMA DMA-BUF support Laurent Pinchart
                   ` (2 preceding siblings ...)
  2013-06-04  2:20 ` [PATCH v2 3/5] drm: GEM CMA: Split object creation into object alloc and DMA memory alloc Laurent Pinchart
@ 2013-06-04  2:20 ` Laurent Pinchart
  2013-06-04 20:19   ` Rob Clark
  2013-06-04  2:20 ` [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support Laurent Pinchart
  4 siblings, 1 reply; 17+ messages in thread
From: Laurent Pinchart @ 2013-06-04  2:20 UTC (permalink / raw)
  To: dri-devel

The CMA-specific mapping code will be used to implement dma-buf mmap
support.
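
Together with drm_gem_mmap_obj() from patch 1/5, this reduces a dma-buf
mmap() handler to two calls, as patch 5/5 implements:

	ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
	if (ret < 0)
		return ret;

	return drm_gem_cma_mmap_obj(cma_obj, vma);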

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 8cce330..7a4db4e 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -228,13 +228,26 @@ const struct vm_operations_struct drm_gem_cma_vm_ops = {
 };
 EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
 
+static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
+				struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = remap_pfn_range(vma, vma->vm_start, cma_obj->paddr >> PAGE_SHIFT,
+			vma->vm_end - vma->vm_start, vma->vm_page_prot);
+	if (ret)
+		drm_gem_vm_close(vma);
+
+	return ret;
+}
+
 /*
  * drm_gem_cma_mmap - (struct file_operation)->mmap callback function
  */
 int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
 {
-	struct drm_gem_object *gem_obj;
 	struct drm_gem_cma_object *cma_obj;
+	struct drm_gem_object *gem_obj;
 	int ret;
 
 	ret = drm_gem_mmap(filp, vma);
@@ -244,12 +257,7 @@ int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
 	gem_obj = vma->vm_private_data;
 	cma_obj = to_drm_gem_cma_obj(gem_obj);
 
-	ret = remap_pfn_range(vma, vma->vm_start, cma_obj->paddr >> PAGE_SHIFT,
-			vma->vm_end - vma->vm_start, vma->vm_page_prot);
-	if (ret)
-		drm_gem_vm_close(vma);
-
-	return ret;
+	return drm_gem_cma_mmap_obj(cma_obj, vma);
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
 
-- 
1.8.1.5


* [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support
  2013-06-04  2:20 [PATCH v2 0/5] GEM CMA DMA-BUF support Laurent Pinchart
                   ` (3 preceding siblings ...)
  2013-06-04  2:20 ` [PATCH v2 4/5] drm: GEM CMA: Split object mapping into GEM mapping and CMA mapping Laurent Pinchart
@ 2013-06-04  2:20 ` Laurent Pinchart
  2013-06-04 21:56   ` Rob Clark
  4 siblings, 1 reply; 17+ messages in thread
From: Laurent Pinchart @ 2013-06-04  2:20 UTC (permalink / raw)
  To: dri-devel

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 321 ++++++++++++++++++++++++++++++++++-
 include/drm/drm_gem_cma_helper.h     |   9 +
 2 files changed, 327 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 7a4db4e..1dc2157 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -21,6 +21,9 @@
 #include <linux/slab.h>
 #include <linux/mutex.h>
 #include <linux/export.h>
+#if CONFIG_DMA_SHARED_BUFFER
+#include <linux/dma-buf.h>
+#endif
 #include <linux/dma-mapping.h>
 
 #include <drm/drmP.h>
@@ -82,6 +85,8 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
 		unsigned int size)
 {
 	struct drm_gem_cma_object *cma_obj;
+	struct sg_table *sgt = NULL;
+	int ret;
 
 	size = round_up(size, PAGE_SIZE);
 
@@ -94,11 +99,29 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
 	if (!cma_obj->vaddr) {
 		dev_err(drm->dev, "failed to allocate buffer with size %d\n",
 			size);
-		drm_gem_cma_free_object(&cma_obj->base);
-		return ERR_PTR(-ENOMEM);
+		ret = -ENOMEM;
+		goto error;
 	}
 
+	sgt = kzalloc(sizeof(*cma_obj->sgt), GFP_KERNEL);
+	if (sgt == NULL) {
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	ret = dma_get_sgtable(drm->dev, sgt, cma_obj->vaddr,
+			      cma_obj->paddr, size);
+	if (ret < 0)
+		goto error;
+
+	cma_obj->sgt = sgt;
+
 	return cma_obj;
+
+error:
+	kfree(sgt);
+	drm_gem_cma_free_object(&cma_obj->base);
+	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_create);
 
@@ -156,9 +179,16 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
 
 	cma_obj = to_drm_gem_cma_obj(gem_obj);
 
-	if (cma_obj->vaddr)
+	if (cma_obj->vaddr) {
 		dma_free_writecombine(gem_obj->dev->dev, cma_obj->base.size,
 				      cma_obj->vaddr, cma_obj->paddr);
+		if (cma_obj->sgt) {
+			sg_free_table(cma_obj->sgt);
+			kfree(cma_obj->sgt);
+		}
+	} else if (gem_obj->import_attach) {
+		drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
+	}
 
 	drm_gem_object_release(gem_obj);
 
@@ -291,3 +321,288 @@ void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
 #endif
+
+/* -----------------------------------------------------------------------------
+ * DMA-BUF
+ */
+
+#if CONFIG_DMA_SHARED_BUFFER
+struct drm_gem_cma_dmabuf_attachment {
+	struct sg_table sgt;
+	enum dma_data_direction dir;
+};
+
+static int drm_gem_cma_dmabuf_attach(struct dma_buf *dmabuf, struct device *dev,
+				     struct dma_buf_attachment *attach)
+{
+	struct drm_gem_cma_dmabuf_attachment *cma_attach;
+
+	cma_attach = kzalloc(sizeof(*cma_attach), GFP_KERNEL);
+	if (!cma_attach)
+		return -ENOMEM;
+
+	cma_attach->dir = DMA_NONE;
+	attach->priv = cma_attach;
+
+	return 0;
+}
+
+static void drm_gem_cma_dmabuf_detach(struct dma_buf *dmabuf,
+				      struct dma_buf_attachment *attach)
+{
+	struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
+	struct sg_table *sgt;
+
+	if (cma_attach == NULL)
+		return;
+
+	sgt = &cma_attach->sgt;
+
+	if (cma_attach->dir != DMA_NONE)
+		dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
+				cma_attach->dir);
+
+	sg_free_table(sgt);
+	kfree(cma_attach);
+	attach->priv = NULL;
+}
+
+static struct sg_table *
+drm_gem_cma_dmabuf_map(struct dma_buf_attachment *attach,
+		       enum dma_data_direction dir)
+{
+	struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
+	struct drm_gem_cma_object *cma_obj = attach->dmabuf->priv;
+	struct drm_device *drm = cma_obj->base.dev;
+	struct scatterlist *rd, *wr;
+	struct sg_table *sgt;
+	unsigned int i;
+	int nents, ret;
+
+	DRM_DEBUG_PRIME("\n");
+
+	if (WARN_ON(dir == DMA_NONE))
+		return ERR_PTR(-EINVAL);
+
+	/* Return the cached mapping when possible. */
+	if (cma_attach->dir == dir)
+		return &cma_attach->sgt;
+
+	/* Two mappings with different directions for the same attachment are
+	 * not allowed.
+	 */
+	if (WARN_ON(cma_attach->dir != DMA_NONE))
+		return ERR_PTR(-EBUSY);
+
+	sgt = &cma_attach->sgt;
+
+	ret = sg_alloc_table(sgt, cma_obj->sgt->orig_nents, GFP_KERNEL);
+	if (ret) {
+		DRM_ERROR("failed to alloc sgt.\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	mutex_lock(&drm->struct_mutex);
+
+	rd = cma_obj->sgt->sgl;
+	wr = sgt->sgl;
+	for (i = 0; i < sgt->orig_nents; ++i) {
+		sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
+		rd = sg_next(rd);
+		wr = sg_next(wr);
+	}
+
+	nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
+	if (!nents) {
+		DRM_ERROR("failed to map sgl with iommu.\n");
+		sg_free_table(sgt);
+		sgt = ERR_PTR(-EIO);
+		goto done;
+	}
+
+	cma_attach->dir = dir;
+	attach->priv = cma_attach;
+
+	DRM_DEBUG_PRIME("buffer size = %zu\n", cma_obj->base.size);
+
+done:
+	mutex_unlock(&drm->struct_mutex);
+	return sgt;
+}
+
+static void drm_gem_cma_dmabuf_unmap(struct dma_buf_attachment *attach,
+				     struct sg_table *sgt,
+				     enum dma_data_direction dir)
+{
+	/* Nothing to do. */
+}
+
+static void drm_gem_cma_dmabuf_release(struct dma_buf *dmabuf)
+{
+	struct drm_gem_cma_object *cma_obj = dmabuf->priv;
+
+	DRM_DEBUG_PRIME("%s\n", __FILE__);
+
+	/*
+	 * drm_gem_cma_dmabuf_release() is called when the dma-buf file
+	 * object's f_count drops to zero. Drop the GEM object reference
+	 * that was taken when the buffer was exported in
+	 * drm_prime_handle_to_fd().
+	 */
+	if (cma_obj->base.export_dma_buf == dmabuf) {
+		cma_obj->base.export_dma_buf = NULL;
+
+		/*
+		 * drop this gem object refcount to release allocated buffer
+		 * and resources.
+		 */
+		drm_gem_object_unreference_unlocked(&cma_obj->base);
+	}
+}
+
+static void *drm_gem_cma_dmabuf_kmap_atomic(struct dma_buf *dmabuf,
+					    unsigned long page_num)
+{
+	/* TODO */
+
+	return NULL;
+}
+
+static void drm_gem_cma_dmabuf_kunmap_atomic(struct dma_buf *dmabuf,
+					     unsigned long page_num, void *addr)
+{
+	/* TODO */
+}
+
+static void *drm_gem_cma_dmabuf_kmap(struct dma_buf *dmabuf,
+				     unsigned long page_num)
+{
+	/* TODO */
+
+	return NULL;
+}
+
+static void drm_gem_cma_dmabuf_kunmap(struct dma_buf *dmabuf,
+				      unsigned long page_num, void *addr)
+{
+	/* TODO */
+}
+
+static int drm_gem_cma_dmabuf_mmap(struct dma_buf *dmabuf,
+				   struct vm_area_struct *vma)
+{
+	struct drm_gem_cma_object *cma_obj = dmabuf->priv;
+	struct drm_gem_object *gem_obj = &cma_obj->base;
+	int ret;
+
+	ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
+	if (ret < 0)
+		return ret;
+
+	return drm_gem_cma_mmap_obj(cma_obj, vma);
+}
+
+static void *drm_gem_cma_dmabuf_vmap(struct dma_buf *dmabuf)
+{
+	struct drm_gem_cma_object *cma_obj = dmabuf->priv;
+
+	return cma_obj->vaddr;
+}
+
+static struct dma_buf_ops drm_gem_cma_dmabuf_ops = {
+	.attach			= drm_gem_cma_dmabuf_attach,
+	.detach			= drm_gem_cma_dmabuf_detach,
+	.map_dma_buf		= drm_gem_cma_dmabuf_map,
+	.unmap_dma_buf		= drm_gem_cma_dmabuf_unmap,
+	.kmap			= drm_gem_cma_dmabuf_kmap,
+	.kmap_atomic		= drm_gem_cma_dmabuf_kmap_atomic,
+	.kunmap			= drm_gem_cma_dmabuf_kunmap,
+	.kunmap_atomic		= drm_gem_cma_dmabuf_kunmap_atomic,
+	.mmap			= drm_gem_cma_dmabuf_mmap,
+	.vmap			= drm_gem_cma_dmabuf_vmap,
+	.release		= drm_gem_cma_dmabuf_release,
+};
+
+struct dma_buf *drm_gem_cma_dmabuf_export(struct drm_device *drm,
+					  struct drm_gem_object *obj, int flags)
+{
+	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
+
+	return dma_buf_export(cma_obj, &drm_gem_cma_dmabuf_ops,
+			      cma_obj->base.size, flags);
+}
+EXPORT_SYMBOL_GPL(drm_gem_cma_dmabuf_export);
+
+struct drm_gem_object *drm_gem_cma_dmabuf_import(struct drm_device *drm,
+						 struct dma_buf *dma_buf)
+{
+	struct drm_gem_cma_object *cma_obj;
+	struct dma_buf_attachment *attach;
+	struct sg_table *sgt;
+	int ret;
+
+	DRM_DEBUG_PRIME("%s\n", __FILE__);
+
+	/* is this one of our own objects? */
+	if (dma_buf->ops == &drm_gem_cma_dmabuf_ops) {
+		struct drm_gem_object *obj;
+
+		cma_obj = dma_buf->priv;
+		obj = &cma_obj->base;
+
+		/* is it from our device? */
+		if (obj->dev == drm) {
+			/*
+			 * Importing dmabuf exported from our own gem increases
+			 * refcount on gem itself instead of f_count of dmabuf.
+			 */
+			drm_gem_object_reference(obj);
+			dma_buf_put(dma_buf);
+			return obj;
+		}
+	}
+
+	/* Create a CMA GEM buffer. */
+	cma_obj = __drm_gem_cma_create(drm, dma_buf->size);
+	if (IS_ERR(cma_obj))
+		return ERR_PTR(PTR_ERR(cma_obj));
+
+	/* Attach to the buffer and map it. Make sure the mapping is contiguous
+	 * on the device memory bus, as that's all we support.
+	 */
+	attach = dma_buf_attach(dma_buf, drm->dev);
+	if (IS_ERR(attach)) {
+		ret = -EINVAL;
+		goto error_gem_free;
+	}
+
+	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR_OR_NULL(sgt)) {
+		ret = sgt ? PTR_ERR(sgt) : -ENOMEM;
+		goto error_buf_detach;
+	}
+
+	if (sgt->nents != 1) {
+		ret = -EINVAL;
+		goto error_buf_unmap;
+	}
+
+	cma_obj->base.import_attach = attach;
+	cma_obj->paddr = sg_dma_address(sgt->sgl);
+	cma_obj->sgt = sgt;
+
+	DRM_DEBUG_PRIME("dma_addr = 0x%x, size = %zu\n", cma_obj->paddr,
+			dma_buf->size);
+
+	return &cma_obj->base;
+
+error_buf_unmap:
+	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+error_buf_detach:
+	dma_buf_detach(dma_buf, attach);
+error_gem_free:
+	drm_gem_cma_free_object(&cma_obj->base);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(drm_gem_cma_dmabuf_import);
+#endif
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 63397ce..6e17251 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -4,6 +4,9 @@
 struct drm_gem_cma_object {
 	struct drm_gem_object base;
 	dma_addr_t paddr;
+	struct sg_table *sgt;
+
+	/* For objects with DMA memory allocated by GEM CMA */
 	void *vaddr;
 };
 
@@ -45,4 +48,10 @@ extern const struct vm_operations_struct drm_gem_cma_vm_ops;
 void drm_gem_cma_describe(struct drm_gem_cma_object *obj, struct seq_file *m);
 #endif
 
+struct dma_buf *drm_gem_cma_dmabuf_export(struct drm_device *drm_dev,
+					  struct drm_gem_object *obj,
+					  int flags);
+struct drm_gem_object *drm_gem_cma_dmabuf_import(struct drm_device *drm_dev,
+						 struct dma_buf *dma_buf);
+
 #endif /* __DRM_GEM_CMA_HELPER_H__ */
-- 
1.8.1.5


* Re: [PATCH v2 1/5] drm/gem: Split drm_gem_mmap() into object search and object mapping
  2013-06-04  2:20 ` [PATCH v2 1/5] drm/gem: Split drm_gem_mmap() into object search and object mapping Laurent Pinchart
@ 2013-06-04 11:33   ` Rob Clark
  0 siblings, 0 replies; 17+ messages in thread
From: Rob Clark @ 2013-06-04 11:33 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: dri-devel

On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart
<laurent.pinchart+renesas@ideasonboard.com> wrote:
> The drm_gem_mmap() function first finds the GEM object to be mapped
> based on the fake mmap offset and then maps the object. Split the object
> mapping code into a standalone drm_gem_mmap_obj() function that can be
> used to implement dma-buf mmap() operations.
>
> Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>

Cool, thanks.. this was an old TODO of mine that I apparently forgot about :-)

Reviewed-by: Rob Clark <robdclark@gmail.com>

> ---
>  drivers/gpu/drm/drm_gem.c | 83 +++++++++++++++++++++++++++++------------------
>  include/drm/drmP.h        |  2 ++
>  2 files changed, 54 insertions(+), 31 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index cf919e3..4321713 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -644,6 +644,55 @@ void drm_gem_vm_close(struct vm_area_struct *vma)
>  }
>  EXPORT_SYMBOL(drm_gem_vm_close);
>
> +/**
> + * drm_gem_mmap_obj - memory map a GEM object
> + * @obj: the GEM object to map
> + * @obj_size: the object size to be mapped, in bytes
> + * @vma: VMA for the area to be mapped
> + *
> + * Set up the VMA to prepare mapping of the GEM object using the gem_vm_ops
> + * provided by the driver. Depending on their requirements, drivers can either
> + * provide a fault handler in their gem_vm_ops (in which case any accesses to
> + * the object will be trapped, to perform migration, GTT binding, surface
> + * register allocation, or performance monitoring), or mmap the buffer memory
> + * synchronously after calling drm_gem_mmap_obj.
> + *
> + * This function is mainly intended to implement the DMABUF mmap operation, when
> + * the GEM object is not looked up based on its fake offset. To implement the
> + * DRM mmap operation, drivers should use the drm_gem_mmap() function.
> + *
> + * Return 0 on success, or -EINVAL if the object size is smaller than the VMA
> + * size or if no gem_vm_ops are provided.
> + */
> +int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
> +                    struct vm_area_struct *vma)
> +{
> +       struct drm_device *dev = obj->dev;
> +
> +       /* Check for valid size. */
> +       if (obj_size < vma->vm_end - vma->vm_start)
> +               return -EINVAL;
> +
> +       if (!dev->driver->gem_vm_ops)
> +               return -EINVAL;
> +
> +       vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> +       vma->vm_ops = dev->driver->gem_vm_ops;
> +       vma->vm_private_data = obj;
> +       vma->vm_page_prot =  pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> +
> +       /* Take a ref for this mapping of the object, so that the fault
> +        * handler can dereference the mmap offset's pointer to the object.
> +        * This reference is cleaned up by the corresponding vm_close
> +        * (which should happen whether the vma was created by this call, or
> +        * by a vm_open due to mremap or partial unmap or whatever).
> +        */
> +       drm_gem_object_reference(obj);
> +
> +       drm_vm_open_locked(dev, vma);
> +       return 0;
> +}
> +EXPORT_SYMBOL(drm_gem_mmap_obj);
>
>  /**
>   * drm_gem_mmap - memory map routine for GEM objects
> @@ -653,11 +702,9 @@ EXPORT_SYMBOL(drm_gem_vm_close);
>   * If a driver supports GEM object mapping, mmap calls on the DRM file
>   * descriptor will end up here.
>   *
> - * If we find the object based on the offset passed in (vma->vm_pgoff will
> + * Look up the GEM object based on the offset passed in (vma->vm_pgoff will
>   * contain the fake offset we created when the GTT map ioctl was called on
> - * the object), we set up the driver fault handler so that any accesses
> - * to the object can be trapped, to perform migration, GTT binding, surface
> - * register allocation, or performance monitoring.
> + * the object) and map it with a call to drm_gem_mmap_obj().
>   */
>  int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>  {
> @@ -665,7 +712,6 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>         struct drm_device *dev = priv->minor->dev;
>         struct drm_gem_mm *mm = dev->mm_private;
>         struct drm_local_map *map = NULL;
> -       struct drm_gem_object *obj;
>         struct drm_hash_item *hash;
>         int ret = 0;
>
> @@ -686,32 +732,7 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>                 goto out_unlock;
>         }
>
> -       /* Check for valid size. */
> -       if (map->size < vma->vm_end - vma->vm_start) {
> -               ret = -EINVAL;
> -               goto out_unlock;
> -       }
> -
> -       obj = map->handle;
> -       if (!obj->dev->driver->gem_vm_ops) {
> -               ret = -EINVAL;
> -               goto out_unlock;
> -       }
> -
> -       vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> -       vma->vm_ops = obj->dev->driver->gem_vm_ops;
> -       vma->vm_private_data = map->handle;
> -       vma->vm_page_prot =  pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> -
> -       /* Take a ref for this mapping of the object, so that the fault
> -        * handler can dereference the mmap offset's pointer to the object.
> -        * This reference is cleaned up by the corresponding vm_close
> -        * (which should happen whether the vma was created by this call, or
> -        * by a vm_open due to mremap or partial unmap or whatever).
> -        */
> -       drm_gem_object_reference(obj);
> -
> -       drm_vm_open_locked(dev, vma);
> +       ret = drm_gem_mmap_obj(map->handle, map->size, vma);
>
>  out_unlock:
>         mutex_unlock(&dev->struct_mutex);
> diff --git a/include/drm/drmP.h b/include/drm/drmP.h
> index b06f5af..79fb4c7 100644
> --- a/include/drm/drmP.h
> +++ b/include/drm/drmP.h
> @@ -1616,6 +1616,8 @@ int drm_gem_private_object_init(struct drm_device *dev,
>  void drm_gem_object_handle_free(struct drm_gem_object *obj);
>  void drm_gem_vm_open(struct vm_area_struct *vma);
>  void drm_gem_vm_close(struct vm_area_struct *vma);
> +int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
> +                    struct vm_area_struct *vma);
>  int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>
>  #include <drm/drm_global.h>
> --
> 1.8.1.5
>


* Re: [PATCH v2 2/5] drm/omap: Use drm_gem_mmap_obj() to implement dma-buf mmap
  2013-06-04  2:20 ` [PATCH v2 2/5] drm/omap: Use drm_gem_mmap_obj() to implement dma-buf mmap Laurent Pinchart
@ 2013-06-04 11:33   ` Rob Clark
  2013-06-04 18:03     ` Laurent Pinchart
  0 siblings, 1 reply; 17+ messages in thread
From: Rob Clark @ 2013-06-04 11:33 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: dri-devel

On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart
<laurent.pinchart+renesas@ideasonboard.com> wrote:
> The dma-buf mmap code was copied from the GEM mmap implementation.
> Replace it with the new drm_gem_mmap_obj() function.
>
> Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>

Reviewed-by: Rob Clark <robdclark@gmail.com>

> ---
>  drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 32 +++----------------------------
>  1 file changed, 3 insertions(+), 29 deletions(-)
>
> diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
> index be7cd97..3256693 100644
> --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
> +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
> @@ -136,10 +136,6 @@ static void omap_gem_dmabuf_kunmap(struct dma_buf *buffer,
>         kunmap(pages[page_num]);
>  }
>
> -/*
> - * TODO maybe we can split up drm_gem_mmap to avoid duplicating
> - * some here.. or at least have a drm_dmabuf_mmap helper.
> - */
>  static int omap_gem_dmabuf_mmap(struct dma_buf *buffer,
>                 struct vm_area_struct *vma)
>  {
> @@ -149,31 +145,9 @@ static int omap_gem_dmabuf_mmap(struct dma_buf *buffer,
>         if (WARN_ON(!obj->filp))
>                 return -EINVAL;
>
> -       /* Check for valid size. */
> -       if (omap_gem_mmap_size(obj) < vma->vm_end - vma->vm_start) {
> -               ret = -EINVAL;
> -               goto out_unlock;
> -       }
> -
> -       if (!obj->dev->driver->gem_vm_ops) {
> -               ret = -EINVAL;
> -               goto out_unlock;
> -       }
> -
> -       vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> -       vma->vm_ops = obj->dev->driver->gem_vm_ops;
> -       vma->vm_private_data = obj;
> -       vma->vm_page_prot =  pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> -
> -       /* Take a ref for this mapping of the object, so that the fault
> -        * handler can dereference the mmap offset's pointer to the object.
> -        * This reference is cleaned up by the corresponding vm_close
> -        * (which should happen whether the vma was created by this call, or
> -        * by a vm_open due to mremap or partial unmap or whatever).
> -        */
> -       vma->vm_ops->open(vma);
> -
> -out_unlock:
> +       ret = drm_gem_mmap_obj(obj, omap_gem_mmap_size(obj), vma);
> +       if (ret < 0)
> +               return ret;
>
>         return omap_gem_mmap_obj(obj, vma);
>  }
> --
> 1.8.1.5
>


* Re: [PATCH v2 2/5] drm/omap: Use drm_gem_mmap_obj() to implement dma-buf mmap
  2013-06-04 11:33   ` Rob Clark
@ 2013-06-04 18:03     ` Laurent Pinchart
  0 siblings, 0 replies; 17+ messages in thread
From: Laurent Pinchart @ 2013-06-04 18:03 UTC (permalink / raw)
  To: Rob Clark; +Cc: Laurent Pinchart, dri-devel

Hi Rob,

On Tuesday 04 June 2013 07:33:42 Rob Clark wrote:
> On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart wrote:
> > The dma-buf mmap code was copied from the GEM mmap implementation.
> > Replace it with the new drm_gem_mmap_obj() function.
> > 
> > Signed-off-by: Laurent Pinchart
> > <laurent.pinchart+renesas@ideasonboard.com>
> 
> Reviewed-by: Rob Clark <robdclark@gmail.com>

Thanks. Any chance you would have time to review 3/5 to 5/5? :-)

-- 
Regards,

Laurent Pinchart


* Re: [PATCH v2 3/5] drm: GEM CMA: Split object creation into object alloc and DMA memory alloc
  2013-06-04  2:20 ` [PATCH v2 3/5] drm: GEM CMA: Split object creation into object alloc and DMA memory alloc Laurent Pinchart
@ 2013-06-04 20:07   ` Rob Clark
  0 siblings, 0 replies; 17+ messages in thread
From: Rob Clark @ 2013-06-04 20:07 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: dri-devel

On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart
<laurent.pinchart+renesas@ideasonboard.com> wrote:
> This allows creating a GEM CMA object without an associated DMA memory
> buffer, and will be used to implement DRM PRIME support.
>
> Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>

Reviewed-by: Rob Clark <robdclark@gmail.com>

> ---
>  drivers/gpu/drm/drm_gem_cma_helper.c | 83 +++++++++++++++++++++---------------
>  1 file changed, 48 insertions(+), 35 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 0a7e011..8cce330 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -32,62 +32,73 @@ static unsigned int get_gem_mmap_offset(struct drm_gem_object *obj)
>         return (unsigned int)obj->map_list.hash.key << PAGE_SHIFT;
>  }
>
> -static void drm_gem_cma_buf_destroy(struct drm_device *drm,
> -               struct drm_gem_cma_object *cma_obj)
> -{
> -       dma_free_writecombine(drm->dev, cma_obj->base.size, cma_obj->vaddr,
> -                       cma_obj->paddr);
> -}
> -
>  /*
> - * drm_gem_cma_create - allocate an object with the given size
> + * __drm_gem_cma_create - Create a GEM CMA object without allocating memory
> + * @drm: The drm device
> + * @size: The GEM object size
>   *
> - * returns a struct drm_gem_cma_object* on success or ERR_PTR values
> - * on failure.
> + * This function creates and initializes a GEM CMA object of the given size, but
> + * doesn't allocate any memory to back the object.
> + *
> + * Return a struct drm_gem_cma_object* on success or ERR_PTR values on failure.
>   */
> -struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
> -               unsigned int size)
> +static struct drm_gem_cma_object *
> +__drm_gem_cma_create(struct drm_device *drm, unsigned int size)
>  {
>         struct drm_gem_cma_object *cma_obj;
>         struct drm_gem_object *gem_obj;
>         int ret;
>
> -       size = round_up(size, PAGE_SIZE);
> -
>         cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
>         if (!cma_obj)
>                 return ERR_PTR(-ENOMEM);
>
> -       cma_obj->vaddr = dma_alloc_writecombine(drm->dev, size,
> -                       &cma_obj->paddr, GFP_KERNEL | __GFP_NOWARN);
> -       if (!cma_obj->vaddr) {
> -               dev_err(drm->dev, "failed to allocate buffer with size %d\n", size);
> -               ret = -ENOMEM;
> -               goto err_dma_alloc;
> -       }
> -
>         gem_obj = &cma_obj->base;
>
>         ret = drm_gem_object_init(drm, gem_obj, size);
>         if (ret)
> -               goto err_obj_init;
> +               goto error;
>
>         ret = drm_gem_create_mmap_offset(gem_obj);
> -       if (ret)
> -               goto err_create_mmap_offset;
> +       if (ret) {
> +               drm_gem_object_release(gem_obj);
> +               goto error;
> +       }
>
>         return cma_obj;
>
> -err_create_mmap_offset:
> -       drm_gem_object_release(gem_obj);
> +error:
> +       kfree(cma_obj);
> +       return ERR_PTR(ret);
> +}
>
> -err_obj_init:
> -       drm_gem_cma_buf_destroy(drm, cma_obj);
> +/*
> + * drm_gem_cma_create - allocate an object with the given size
> + *
> + * returns a struct drm_gem_cma_object* on success or ERR_PTR values
> + * on failure.
> + */
> +struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
> +               unsigned int size)
> +{
> +       struct drm_gem_cma_object *cma_obj;
>
> -err_dma_alloc:
> -       kfree(cma_obj);
> +       size = round_up(size, PAGE_SIZE);
>
> -       return ERR_PTR(ret);
> +       cma_obj = __drm_gem_cma_create(drm, size);
> +       if (IS_ERR(cma_obj))
> +               return cma_obj;
> +
> +       cma_obj->vaddr = dma_alloc_writecombine(drm->dev, size,
> +                       &cma_obj->paddr, GFP_KERNEL | __GFP_NOWARN);
> +       if (!cma_obj->vaddr) {
> +               dev_err(drm->dev, "failed to allocate buffer with size %d\n",
> +                       size);
> +               drm_gem_cma_free_object(&cma_obj->base);
> +               return ERR_PTR(-ENOMEM);
> +       }
> +
> +       return cma_obj;
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_create);
>
> @@ -143,11 +154,13 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
>         if (gem_obj->map_list.map)
>                 drm_gem_free_mmap_offset(gem_obj);
>
> -       drm_gem_object_release(gem_obj);
> -
>         cma_obj = to_drm_gem_cma_obj(gem_obj);
>
> -       drm_gem_cma_buf_destroy(gem_obj->dev, cma_obj);
> +       if (cma_obj->vaddr)
> +               dma_free_writecombine(gem_obj->dev->dev, cma_obj->base.size,
> +                                     cma_obj->vaddr, cma_obj->paddr);
> +
> +       drm_gem_object_release(gem_obj);
>
>         kfree(cma_obj);
>  }
> --
> 1.8.1.5
>


* Re: [PATCH v2 4/5] drm: GEM CMA: Split object mapping into GEM mapping and CMA mapping
  2013-06-04  2:20 ` [PATCH v2 4/5] drm: GEM CMA: Split object mapping into GEM mapping and CMA mapping Laurent Pinchart
@ 2013-06-04 20:19   ` Rob Clark
  0 siblings, 0 replies; 17+ messages in thread
From: Rob Clark @ 2013-06-04 20:19 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: dri-devel

On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart
<laurent.pinchart+renesas@ideasonboard.com> wrote:
> The CMA-specific mapping code will be used to implement dma-buf mmap
> support.
>
> Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>

Reviewed-by: Rob Clark <robdclark@gmail.com>

> ---
>  drivers/gpu/drm/drm_gem_cma_helper.c | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 8cce330..7a4db4e 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -228,13 +228,26 @@ const struct vm_operations_struct drm_gem_cma_vm_ops = {
>  };
>  EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
>
> +static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
> +                               struct vm_area_struct *vma)
> +{
> +       int ret;
> +
> +       ret = remap_pfn_range(vma, vma->vm_start, cma_obj->paddr >> PAGE_SHIFT,
> +                       vma->vm_end - vma->vm_start, vma->vm_page_prot);
> +       if (ret)
> +               drm_gem_vm_close(vma);
> +
> +       return ret;
> +}
> +
>  /*
>   * drm_gem_cma_mmap - (struct file_operation)->mmap callback function
>   */
>  int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
>  {
> -       struct drm_gem_object *gem_obj;
>         struct drm_gem_cma_object *cma_obj;
> +       struct drm_gem_object *gem_obj;
>         int ret;
>
>         ret = drm_gem_mmap(filp, vma);
> @@ -244,12 +257,7 @@ int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
>         gem_obj = vma->vm_private_data;
>         cma_obj = to_drm_gem_cma_obj(gem_obj);
>
> -       ret = remap_pfn_range(vma, vma->vm_start, cma_obj->paddr >> PAGE_SHIFT,
> -                       vma->vm_end - vma->vm_start, vma->vm_page_prot);
> -       if (ret)
> -               drm_gem_vm_close(vma);
> -
> -       return ret;
> +       return drm_gem_cma_mmap_obj(cma_obj, vma);
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
>
> --
> 1.8.1.5
>


* Re: [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support
  2013-06-04  2:20 ` [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support Laurent Pinchart
@ 2013-06-04 21:56   ` Rob Clark
  2013-06-05  1:22     ` Laurent Pinchart
  0 siblings, 1 reply; 17+ messages in thread
From: Rob Clark @ 2013-06-04 21:56 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: dri-devel

Couple of small comments; other than those it looks ok

BR,
-R

On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart
<laurent.pinchart+renesas@ideasonboard.com> wrote:
> Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
> ---
>  drivers/gpu/drm/drm_gem_cma_helper.c | 321 ++++++++++++++++++++++++++++++++++-
>  include/drm/drm_gem_cma_helper.h     |   9 +
>  2 files changed, 327 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 7a4db4e..1dc2157 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -21,6 +21,9 @@
>  #include <linux/slab.h>
>  #include <linux/mutex.h>
>  #include <linux/export.h>
> +#if CONFIG_DMA_SHARED_BUFFER
> +#include <linux/dma-buf.h>
> +#endif

I don't think we need the #if, since drm selects DMA_SHARED_BUFFER

and same for other spot below

>  #include <linux/dma-mapping.h>
>
>  #include <drm/drmP.h>
> @@ -82,6 +85,8 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
>                 unsigned int size)
>  {
>         struct drm_gem_cma_object *cma_obj;
> +       struct sg_table *sgt = NULL;
> +       int ret;
>
>         size = round_up(size, PAGE_SIZE);
>
> @@ -94,11 +99,29 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
>         if (!cma_obj->vaddr) {
>                 dev_err(drm->dev, "failed to allocate buffer with size %d\n",
>                         size);
> -               drm_gem_cma_free_object(&cma_obj->base);
> -               return ERR_PTR(-ENOMEM);
> +               ret = -ENOMEM;
> +               goto error;
>         }
>
> +       sgt = kzalloc(sizeof(*cma_obj->sgt), GFP_KERNEL);
> +       if (sgt == NULL) {
> +               ret = -ENOMEM;
> +               goto error;
> +       }
> +
> +       ret = dma_get_sgtable(drm->dev, sgt, cma_obj->vaddr,
> +                             cma_obj->paddr, size);
> +       if (ret < 0)
> +               goto error;
> +
> +       cma_obj->sgt = sgt;
> +
>         return cma_obj;
> +
> +error:
> +       kfree(sgt);
> +       drm_gem_cma_free_object(&cma_obj->base);
> +       return ERR_PTR(ret);
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_create);
>
> @@ -156,9 +179,16 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
>
>         cma_obj = to_drm_gem_cma_obj(gem_obj);
>
> -       if (cma_obj->vaddr)
> +       if (cma_obj->vaddr) {
>                 dma_free_writecombine(gem_obj->dev->dev, cma_obj->base.size,
>                                       cma_obj->vaddr, cma_obj->paddr);
> +               if (cma_obj->sgt) {
> +                       sg_free_table(cma_obj->sgt);
> +                       kfree(cma_obj->sgt);
> +               }
> +       } else if (gem_obj->import_attach) {
> +               drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
> +       }
>
>         drm_gem_object_release(gem_obj);
>
> @@ -291,3 +321,288 @@ void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
>  #endif
> +
> +/* -----------------------------------------------------------------------------
> + * DMA-BUF
> + */
> +
> +#if CONFIG_DMA_SHARED_BUFFER
> +struct drm_gem_cma_dmabuf_attachment {
> +       struct sg_table sgt;
> +       enum dma_data_direction dir;
> +};
> +
> +static int drm_gem_cma_dmabuf_attach(struct dma_buf *dmabuf, struct device *dev,
> +                                    struct dma_buf_attachment *attach)
> +{
> +       struct drm_gem_cma_dmabuf_attachment *cma_attach;
> +
> +       cma_attach = kzalloc(sizeof(*cma_attach), GFP_KERNEL);
> +       if (!cma_attach)
> +               return -ENOMEM;
> +
> +       cma_attach->dir = DMA_NONE;
> +       attach->priv = cma_attach;
> +
> +       return 0;
> +}
> +
> +static void drm_gem_cma_dmabuf_detach(struct dma_buf *dmabuf,
> +                                     struct dma_buf_attachment *attach)
> +{
> +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
> +       struct sg_table *sgt;
> +
> +       if (cma_attach == NULL)
> +               return;
> +
> +       sgt = &cma_attach->sgt;
> +
> +       if (cma_attach->dir != DMA_NONE)
> +               dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
> +                               cma_attach->dir);
> +
> +       sg_free_table(sgt);
> +       kfree(cma_attach);
> +       attach->priv = NULL;
> +}
> +
> +static struct sg_table *
> +drm_gem_cma_dmabuf_map(struct dma_buf_attachment *attach,
> +                      enum dma_data_direction dir)
> +{
> +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
> +       struct drm_gem_cma_object *cma_obj = attach->dmabuf->priv;
> +       struct drm_device *drm = cma_obj->base.dev;
> +       struct scatterlist *rd, *wr;
> +       struct sg_table *sgt;
> +       unsigned int i;
> +       int nents, ret;
> +
> +       DRM_DEBUG_PRIME("\n");
> +
> +       if (WARN_ON(dir == DMA_NONE))
> +               return ERR_PTR(-EINVAL);
> +
> +       /* Return the cached mapping when possible. */
> +       if (cma_attach->dir == dir)
> +               return &cma_attach->sgt;
> +
> +       /* Two mappings with different directions for the same attachment are
> +        * not allowed.
> +        */
> +       if (WARN_ON(cma_attach->dir != DMA_NONE))
> +               return ERR_PTR(-EBUSY);
> +
> +       sgt = &cma_attach->sgt;
> +
> +       ret = sg_alloc_table(sgt, cma_obj->sgt->orig_nents, GFP_KERNEL);
> +       if (ret) {
> +               DRM_ERROR("failed to alloc sgt.\n");
> +               return ERR_PTR(-ENOMEM);
> +       }
> +
> +       mutex_lock(&drm->struct_mutex);
> +
> +       rd = cma_obj->sgt->sgl;
> +       wr = sgt->sgl;
> +       for (i = 0; i < sgt->orig_nents; ++i) {
> +               sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
> +               rd = sg_next(rd);
> +               wr = sg_next(wr);
> +       }
> +
> +       nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
> +       if (!nents) {
> +               DRM_ERROR("failed to map sgl with iommu.\n");
> +               sg_free_table(sgt);
> +               sgt = ERR_PTR(-EIO);
> +               goto done;
> +       }
> +
> +       cma_attach->dir = dir;
> +       attach->priv = cma_attach;
> +
> +       DRM_DEBUG_PRIME("buffer size = %zu\n", cma_obj->base.size);
> +
> +done:
> +       mutex_unlock(&drm->struct_mutex);
> +       return sgt;
> +}
> +
> +static void drm_gem_cma_dmabuf_unmap(struct dma_buf_attachment *attach,
> +                                    struct sg_table *sgt,
> +                                    enum dma_data_direction dir)
> +{
> +       /* Nothing to do. */

I kinda think that if you don't support multiple mappings with
different direction, that you should at least support unmap and then
let someone else map with other direction
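
Something like this (completely untested) is what I have in mind, with
detach() then only having to free the attachment itself:

static void drm_gem_cma_dmabuf_unmap(struct dma_buf_attachment *attach,
				     struct sg_table *sgt,
				     enum dma_data_direction dir)
{
	struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;

	if (cma_attach->dir == DMA_NONE)
		return;

	dma_unmap_sg(attach->dev, sgt->sgl, sgt->orig_nents, cma_attach->dir);
	sg_free_table(sgt);

	/* allow a later map_dma_buf() with a different direction */
	cma_attach->dir = DMA_NONE;
}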

> +}
> +
> +static void drm_gem_cma_dmabuf_release(struct dma_buf *dmabuf)
> +{
> +       struct drm_gem_cma_object *cma_obj = dmabuf->priv;
> +
> +       DRM_DEBUG_PRIME("%s\n", __FILE__);
> +
> +       /*
> +        * drm_gem_cma_dmabuf_release() is called when the dma-buf file
> +        * object's f_count drops to zero. Drop the GEM object reference
> +        * that was taken when the buffer was exported in
> +        * drm_prime_handle_to_fd().
> +        */
> +       if (cma_obj->base.export_dma_buf == dmabuf) {
> +               cma_obj->base.export_dma_buf = NULL;
> +
> +               /*
> +                * drop this gem object refcount to release allocated buffer
> +                * and resources.
> +                */
> +               drm_gem_object_unreference_unlocked(&cma_obj->base);
> +       }
> +}
> +
> +static void *drm_gem_cma_dmabuf_kmap_atomic(struct dma_buf *dmabuf,
> +                                           unsigned long page_num)
> +{
> +       /* TODO */
> +
> +       return NULL;
> +}
> +
> +static void drm_gem_cma_dmabuf_kunmap_atomic(struct dma_buf *dmabuf,
> +                                            unsigned long page_num, void *addr)
> +{
> +       /* TODO */
> +}
> +
> +static void *drm_gem_cma_dmabuf_kmap(struct dma_buf *dmabuf,
> +                                    unsigned long page_num)
> +{
> +       /* TODO */
> +
> +       return NULL;
> +}
> +
> +static void drm_gem_cma_dmabuf_kunmap(struct dma_buf *dmabuf,
> +                                     unsigned long page_num, void *addr)
> +{
> +       /* TODO */
> +}
> +
> +static int drm_gem_cma_dmabuf_mmap(struct dma_buf *dmabuf,
> +                                  struct vm_area_struct *vma)
> +{
> +       struct drm_gem_cma_object *cma_obj = dmabuf->priv;
> +       struct drm_gem_object *gem_obj = &cma_obj->base;
> +       int ret;
> +
> +       ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
> +       if (ret < 0)
> +               return ret;
> +
> +       return drm_gem_cma_mmap_obj(cma_obj, vma);
> +}
> +
> +static void *drm_gem_cma_dmabuf_vmap(struct dma_buf *dmabuf)
> +{
> +       struct drm_gem_cma_object *cma_obj = dmabuf->priv;
> +
> +       return cma_obj->vaddr;
> +}
> +
> +static struct dma_buf_ops drm_gem_cma_dmabuf_ops = {
> +       .attach                 = drm_gem_cma_dmabuf_attach,
> +       .detach                 = drm_gem_cma_dmabuf_detach,
> +       .map_dma_buf            = drm_gem_cma_dmabuf_map,
> +       .unmap_dma_buf          = drm_gem_cma_dmabuf_unmap,
> +       .kmap                   = drm_gem_cma_dmabuf_kmap,
> +       .kmap_atomic            = drm_gem_cma_dmabuf_kmap_atomic,
> +       .kunmap                 = drm_gem_cma_dmabuf_kunmap,
> +       .kunmap_atomic          = drm_gem_cma_dmabuf_kunmap_atomic,
> +       .mmap                   = drm_gem_cma_dmabuf_mmap,
> +       .vmap                   = drm_gem_cma_dmabuf_vmap,
> +       .release                = drm_gem_cma_dmabuf_release,
> +};
> +
> +struct dma_buf *drm_gem_cma_dmabuf_export(struct drm_device *drm,
> +                                         struct drm_gem_object *obj, int flags)
> +{
> +       struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
> +
> +       return dma_buf_export(cma_obj, &drm_gem_cma_dmabuf_ops,
> +                             cma_obj->base.size, flags);
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_cma_dmabuf_export);
> +
> +struct drm_gem_object *drm_gem_cma_dmabuf_import(struct drm_device *drm,
> +                                                struct dma_buf *dma_buf)
> +{
> +       struct drm_gem_cma_object *cma_obj;
> +       struct dma_buf_attachment *attach;
> +       struct sg_table *sgt;
> +       int ret;
> +
> +       DRM_DEBUG_PRIME("%s\n", __FILE__);
> +
> +       /* is this one of own objects? */
> +       if (dma_buf->ops == &drm_gem_cma_dmabuf_ops) {
> +               struct drm_gem_object *obj;
> +
> +               cma_obj = dma_buf->priv;
> +               obj = &cma_obj->base;
> +
> +               /* is it from our device? */
> +               if (obj->dev == drm) {
> +                       /*
> +                        * Importing a dmabuf exported from our own GEM object
> +                        * increases the refcount on the GEM object itself
> +                        * instead of the f_count of the dmabuf.
> +                        */
> +                       drm_gem_object_reference(obj);
> +                       dma_buf_put(dma_buf);
> +                       return obj;
> +               }
> +       }
> +
> +       /* Create a CMA GEM buffer. */
> +       cma_obj = __drm_gem_cma_create(drm, dma_buf->size);
> +       if (IS_ERR(cma_obj))
> +               return ERR_PTR(PTR_ERR(cma_obj));
> +
> +       /* Attach to the buffer and map it. Make sure the mapping is contiguous
> +        * on the device memory bus, as that's all we support.
> +        */
> +       attach = dma_buf_attach(dma_buf, drm->dev);
> +       if (IS_ERR(attach)) {
> +               ret = -EINVAL;
> +               goto error_gem_free;
> +       }
> +
> +       sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
> +       if (IS_ERR_OR_NULL(sgt)) {
> +               ret = sgt ? PTR_ERR(sgt) : -ENOMEM;
> +               goto error_buf_detach;
> +       }
> +
> +       if (sgt->nents != 1) {
> +               ret = -EINVAL;
> +               goto error_buf_unmap;
> +       }
> +
> +       cma_obj->base.import_attach = attach;
> +       cma_obj->paddr = sg_dma_address(sgt->sgl);
> +       cma_obj->sgt = sgt;
> +
> +       DRM_DEBUG_PRIME("dma_addr = 0x%x, size = %zu\n", cma_obj->paddr,
> +                       dma_buf->size);
> +
> +       return &cma_obj->base;
> +
> +error_buf_unmap:
> +       dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
> +error_buf_detach:
> +       dma_buf_detach(dma_buf, attach);
> +error_gem_free:
> +       drm_gem_cma_free_object(&cma_obj->base);
> +       return ERR_PTR(ret);
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_cma_dmabuf_import);
> +#endif
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index 63397ce..6e17251 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -4,6 +4,9 @@
>  struct drm_gem_cma_object {
>         struct drm_gem_object base;
>         dma_addr_t paddr;
> +       struct sg_table *sgt;
> +
> +       /* For objects with DMA memory allocated by GEM CMA */
>         void *vaddr;
>  };
>
> @@ -45,4 +48,10 @@ extern const struct vm_operations_struct drm_gem_cma_vm_ops;
>  void drm_gem_cma_describe(struct drm_gem_cma_object *obj, struct seq_file *m);
>  #endif
>
> +struct dma_buf *drm_gem_cma_dmabuf_export(struct drm_device *drm_dev,
> +                                         struct drm_gem_object *obj,
> +                                         int flags);
> +struct drm_gem_object *drm_gem_cma_dmabuf_import(struct drm_device *drm_dev,
> +                                                struct dma_buf *dma_buf);
> +
>  #endif /* __DRM_GEM_CMA_HELPER_H__ */
> --
> 1.8.1.5
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support
  2013-06-04 21:56   ` Rob Clark
@ 2013-06-05  1:22     ` Laurent Pinchart
  2013-06-05  3:05       ` Rob Clark
  0 siblings, 1 reply; 17+ messages in thread
From: Laurent Pinchart @ 2013-06-05  1:22 UTC (permalink / raw)
  To: Rob Clark; +Cc: Laurent Pinchart, dri-devel

Hi Rob,

On Tuesday 04 June 2013 17:56:36 Rob Clark wrote:
> couple small comments, other than those it looks ok

Thanks for the review.

> On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart wrote:
> > Signed-off-by: Laurent Pinchart
> > <laurent.pinchart+renesas@ideasonboard.com>
> > ---
> > 
> >  drivers/gpu/drm/drm_gem_cma_helper.c | 321 +++++++++++++++++++++++++++++-
> >  include/drm/drm_gem_cma_helper.h     |   9 +
> >  2 files changed, 327 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> > index 7a4db4e..1dc2157 100644
> > --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> > @@ -21,6 +21,9 @@
> >  #include <linux/slab.h>
> >  #include <linux/mutex.h>
> >  #include <linux/export.h>
> > +#if CONFIG_DMA_SHARED_BUFFER
> > +#include <linux/dma-buf.h>
> > +#endif
> 
> I don't think we need the #if, since drm selects DMA_SHARED_BUFFER
> 
> and same for other spot below

Indeed. Will be fixed in the next version.
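
(For reference, the DRM core Kconfig entry already pulls the dependency
in unconditionally, roughly:

config DRM
        tristate "Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)"
        select DMA_SHARED_BUFFER

so the dma-buf header can simply be included without a guard.)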

> >  #include <linux/dma-mapping.h>
> >  
> >  #include <drm/drmP.h>

[snip]

> > @@ -291,3 +321,288 @@ void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m
> >  }
> >  EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
> >  #endif
> > 
> > +
> > +/* -----------------------------------------------------------------------------
> > + * DMA-BUF
> > + */
> > +
> > +#if CONFIG_DMA_SHARED_BUFFER
> > +struct drm_gem_cma_dmabuf_attachment {
> > +       struct sg_table sgt;
> > +       enum dma_data_direction dir;
> > +};
> > +
> > +static int drm_gem_cma_dmabuf_attach(struct dma_buf *dmabuf, struct device *dev,
> > +                                    struct dma_buf_attachment *attach)
> > +{
> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach;
> > +
> > +       cma_attach = kzalloc(sizeof(*cma_attach), GFP_KERNEL);
> > +       if (!cma_attach)
> > +               return -ENOMEM;
> > +
> > +       cma_attach->dir = DMA_NONE;
> > +       attach->priv = cma_attach;
> > +
> > +       return 0;
> > +}
> > +
> > +static void drm_gem_cma_dmabuf_detach(struct dma_buf *dmabuf,
> > +                                     struct dma_buf_attachment *attach)
> > +{
> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
> > +       struct sg_table *sgt;
> > +
> > +       if (cma_attach == NULL)
> > +               return;
> > +
> > +       sgt = &cma_attach->sgt;
> > +
> > +       if (cma_attach->dir != DMA_NONE)
> > +               dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
> > +                               cma_attach->dir);
> > +
> > +       sg_free_table(sgt);
> > +       kfree(cma_attach);
> > +       attach->priv = NULL;
> > +}
> > +
> > +static struct sg_table *
> > +drm_gem_cma_dmabuf_map(struct dma_buf_attachment *attach,
> > +                      enum dma_data_direction dir)
> > +{
> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
> > +       struct drm_gem_cma_object *cma_obj = attach->dmabuf->priv;
> > +       struct drm_device *drm = cma_obj->base.dev;
> > +       struct scatterlist *rd, *wr;
> > +       struct sg_table *sgt;
> > +       unsigned int i;
> > +       int nents, ret;
> > +
> > +       DRM_DEBUG_PRIME("\n");
> > +
> > +       if (WARN_ON(dir == DMA_NONE))
> > +               return ERR_PTR(-EINVAL);
> > +
> > +       /* Return the cached mapping when possible. */
> > +       if (cma_attach->dir == dir)
> > +               return &cma_attach->sgt;
> > +
> > +       /* Two mappings with different directions for the same attachment are
> > +        * not allowed.
> > +        */
> > +       if (WARN_ON(cma_attach->dir != DMA_NONE))
> > +               return ERR_PTR(-EBUSY);
> > +
> > +       sgt = &cma_attach->sgt;
> > +
> > +       ret = sg_alloc_table(sgt, cma_obj->sgt->orig_nents, GFP_KERNEL);
> > +       if (ret) {
> > +               DRM_ERROR("failed to alloc sgt.\n");
> > +               return ERR_PTR(-ENOMEM);
> > +       }
> > +
> > +       mutex_lock(&drm->struct_mutex);
> > +
> > +       rd = cma_obj->sgt->sgl;
> > +       wr = sgt->sgl;
> > +       for (i = 0; i < sgt->orig_nents; ++i) {
> > +               sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
> > +               rd = sg_next(rd);
> > +               wr = sg_next(wr);
> > +       }
> > +
> > +       nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
> > +       if (!nents) {
> > +               DRM_ERROR("failed to map sgl with iommu.\n");
> > +               sg_free_table(sgt);
> > +               sgt = ERR_PTR(-EIO);
> > +               goto done;
> > +       }
> > +
> > +       cma_attach->dir = dir;
> > +       attach->priv = cma_attach;
> > +
> > +       DRM_DEBUG_PRIME("buffer size = %zu\n", cma_obj->base.size);
> > +
> > +done:
> > +       mutex_unlock(&drm->struct_mutex);
> > +       return sgt;
> > +}
> > +
> > +static void drm_gem_cma_dmabuf_unmap(struct dma_buf_attachment *attach,
> > +                                    struct sg_table *sgt,
> > +                                    enum dma_data_direction dir)
> > +{
> > +       /* Nothing to do. */
> 
> I kinda think that if you don't support multiple mappings with
> different directions, you should at least support unmap and then
> let someone else map with another direction

That would make sense, but in that case I wouldn't be able to cache the 
mapping. It would probably be better to add proper support for multiple 
mappings with different directions.

Given that the mapping is cached in the attachment, this will only be an issue 
if a driver tries to map the same attachment twice with different directions. 
Isn't that an uncommon use case that could be fixed later ? :-) I'd like to 
get this set in v3.11 if possible.

> > +}

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support
  2013-06-05  1:22     ` Laurent Pinchart
@ 2013-06-05  3:05       ` Rob Clark
  2013-06-05  8:44         ` Daniel Vetter
  0 siblings, 1 reply; 17+ messages in thread
From: Rob Clark @ 2013-06-05  3:05 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: Laurent Pinchart, dri-devel

On Tue, Jun 4, 2013 at 9:22 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hi Rob,
>
> On Tuesday 04 June 2013 17:56:36 Rob Clark wrote:
>> couple small comments, other than those it looks ok
>
> Thanks for the review.
>
>> On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart wrote:
>> > Signed-off-by: Laurent Pinchart
>> > <laurent.pinchart+renesas@ideasonboard.com>
>> > ---
>> >
>> >  drivers/gpu/drm/drm_gem_cma_helper.c | 321 +++++++++++++++++++++++++++++-
>> >  include/drm/drm_gem_cma_helper.h     |   9 +
>> >  2 files changed, 327 insertions(+), 3 deletions(-)
>> >
>> > diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
>> > index 7a4db4e..1dc2157 100644
>> > --- a/drivers/gpu/drm/drm_gem_cma_helper.c
>> > +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
>> > @@ -21,6 +21,9 @@
>> >  #include <linux/slab.h>
>> >  #include <linux/mutex.h>
>> >  #include <linux/export.h>
>> > +#if CONFIG_DMA_SHARED_BUFFER
>> > +#include <linux/dma-buf.h>
>> > +#endif
>>
>> I don't think we need the #if, since drm selects DMA_SHARED_BUFFER
>>
>> and same for other spot below
>
> Indeed. Will be fixed in the next version.
>
>> >  #include <linux/dma-mapping.h>
>> >
>> >  #include <drm/drmP.h>
>
> [snip]
>
>> > @@ -291,3 +321,288 @@ void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m
>> >  }
>> >  EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
>> >  #endif
>> >
>> > +
>> > +/* -----------------------------------------------------------------------------
>> > + * DMA-BUF
>> > + */
>> > +
>> > +#if CONFIG_DMA_SHARED_BUFFER
>> > +struct drm_gem_cma_dmabuf_attachment {
>> > +       struct sg_table sgt;
>> > +       enum dma_data_direction dir;
>> > +};
>> > +
>> > +static int drm_gem_cma_dmabuf_attach(struct dma_buf *dmabuf, struct device *dev,
>> > +                                    struct dma_buf_attachment *attach)
>> > +{
>> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach;
>> > +
>> > +       cma_attach = kzalloc(sizeof(*cma_attach), GFP_KERNEL);
>> > +       if (!cma_attach)
>> > +               return -ENOMEM;
>> > +
>> > +       cma_attach->dir = DMA_NONE;
>> > +       attach->priv = cma_attach;
>> > +
>> > +       return 0;
>> > +}
>> > +
>> > +static void drm_gem_cma_dmabuf_detach(struct dma_buf *dmabuf,
>> > +                                     struct dma_buf_attachment *attach)
>> > +{
>> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
>> > +       struct sg_table *sgt;
>> > +
>> > +       if (cma_attach == NULL)
>> > +               return;
>> > +
>> > +       sgt = &cma_attach->sgt;
>> > +
>> > +       if (cma_attach->dir != DMA_NONE)
>> > +               dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
>> > +                               cma_attach->dir);
>> > +
>> > +       sg_free_table(sgt);
>> > +       kfree(cma_attach);
>> > +       attach->priv = NULL;
>> > +}
>> > +
>> > +static struct sg_table *
>> > +drm_gem_cma_dmabuf_map(struct dma_buf_attachment *attach,
>> > +                      enum dma_data_direction dir)
>> > +{
>> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
>> > +       struct drm_gem_cma_object *cma_obj = attach->dmabuf->priv;
>> > +       struct drm_device *drm = cma_obj->base.dev;
>> > +       struct scatterlist *rd, *wr;
>> > +       struct sg_table *sgt;
>> > +       unsigned int i;
>> > +       int nents, ret;
>> > +
>> > +       DRM_DEBUG_PRIME("\n");
>> > +
>> > +       if (WARN_ON(dir == DMA_NONE))
>> > +               return ERR_PTR(-EINVAL);
>> > +
>> > +       /* Return the cached mapping when possible. */
>> > +       if (cma_attach->dir == dir)
>> > +               return &cma_attach->sgt;
>> > +
>> > +       /* Two mappings with different directions for the same attachment are
>> > +        * not allowed.
>> > +        */
>> > +       if (WARN_ON(cma_attach->dir != DMA_NONE))
>> > +               return ERR_PTR(-EBUSY);
>> > +
>> > +       sgt = &cma_attach->sgt;
>> > +
>> > +       ret = sg_alloc_table(sgt, cma_obj->sgt->orig_nents, GFP_KERNEL);
>> > +       if (ret) {
>> > +               DRM_ERROR("failed to alloc sgt.\n");
>> > +               return ERR_PTR(-ENOMEM);
>> > +       }
>> > +
>> > +       mutex_lock(&drm->struct_mutex);
>> > +
>> > +       rd = cma_obj->sgt->sgl;
>> > +       wr = sgt->sgl;
>> > +       for (i = 0; i < sgt->orig_nents; ++i) {
>> > +               sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
>> > +               rd = sg_next(rd);
>> > +               wr = sg_next(wr);
>> > +       }
>> > +
>> > +       nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
>> > +       if (!nents) {
>> > +               DRM_ERROR("failed to map sgl with iommu.\n");
>> > +               sg_free_table(sgt);
>> > +               sgt = ERR_PTR(-EIO);
>> > +               goto done;
>> > +       }
>> > +
>> > +       cma_attach->dir = dir;
>> > +       attach->priv = cma_attach;
>> > +
>> > +       DRM_DEBUG_PRIME("buffer size = %zu\n", cma_obj->base.size);
>> > +
>> > +done:
>> > +       mutex_unlock(&drm->struct_mutex);
>> > +       return sgt;
>> > +}
>> > +
>> > +static void drm_gem_cma_dmabuf_unmap(struct dma_buf_attachment *attach,
>> > +                                    struct sg_table *sgt,
>> > +                                    enum dma_data_direction dir)
>> > +{
>> > +       /* Nothing to do. */
>>
>> I kinda think that if you don't support multiple mappings with
>> different directions, you should at least support unmap and then
>> let someone else map with another direction
>
> That would make sense, but in that case I wouldn't be able to cache the
> mapping. It would probably be better to add proper support for multiple
> mappings with different directions.

well, you could still cache it, you'd just have to invalidate that
cache on transition in direction
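
i.e. something roughly like this at the top of drm_gem_cma_dmabuf_map()
(untested, variable names as in the patch):

        struct sg_table *sgt = &cma_attach->sgt;

        /* Return the cached mapping when possible. */
        if (cma_attach->dir == dir)
                return sgt;

        /* Direction changed: invalidate the cached mapping and fall
         * through to rebuild it with the new direction, instead of
         * returning -EBUSY.
         */
        if (cma_attach->dir != DMA_NONE) {
                dma_unmap_sg(attach->dev, sgt->sgl, sgt->orig_nents,
                             cma_attach->dir);
                sg_free_table(sgt);
                cma_attach->dir = DMA_NONE;
        }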

> Given that the mapping is cached in the attachment, this will only be an issue
> if a driver tries to map the same attachment twice with different directions.
> Isn't that an uncommon use case that could be fixed later ? :-) I'd like to
> get this set in v3.11 if possible.

I don't feel strongly that this should block merging, vs fixing at a
(not too much later) date

BR,
-R

>> > +}
>
> --
> Regards,
>
> Laurent Pinchart
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support
  2013-06-05  3:05       ` Rob Clark
@ 2013-06-05  8:44         ` Daniel Vetter
  2013-06-05 11:00           ` Rob Clark
  0 siblings, 1 reply; 17+ messages in thread
From: Daniel Vetter @ 2013-06-05  8:44 UTC (permalink / raw)
  To: Rob Clark; +Cc: Laurent Pinchart, Laurent Pinchart, dri-devel

On Tue, Jun 04, 2013 at 11:05:36PM -0400, Rob Clark wrote:
> On Tue, Jun 4, 2013 at 9:22 PM, Laurent Pinchart
> <laurent.pinchart@ideasonboard.com> wrote:
> > Hi Rob,
> >
> > On Tuesday 04 June 2013 17:56:36 Rob Clark wrote:
> >> couple small comments, other than those it looks ok
> >
> > Thanks for the review.
> >
> >> On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart wrote:
> >> > Signed-off-by: Laurent Pinchart
> >> > <laurent.pinchart+renesas@ideasonboard.com>
> >> > ---
> >> >
> >> >  drivers/gpu/drm/drm_gem_cma_helper.c | 321 +++++++++++++++++++++++++++++-
> >> >  include/drm/drm_gem_cma_helper.h     |   9 +
> >> >  2 files changed, 327 insertions(+), 3 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> >> > index 7a4db4e..1dc2157 100644
> >> > --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> >> > +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> >> > @@ -21,6 +21,9 @@
> >> >  #include <linux/slab.h>
> >> >  #include <linux/mutex.h>
> >> >  #include <linux/export.h>
> >> > +#if CONFIG_DMA_SHARED_BUFFER
> >> > +#include <linux/dma-buf.h>
> >> > +#endif
> >>
> >> I don't think we need the #if, since drm selects DMA_SHARED_BUFFER
> >>
> >> and same for other spot below
> >
> > Indeed. Will be fixed in the next version.
> >
> >> >  #include <linux/dma-mapping.h>
> >> >
> >> >  #include <drm/drmP.h>
> >
> > [snip]
> >
> >> > @@ -291,3 +321,288 @@ void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m
> >> >  }
> >> >  EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
> >> >  #endif
> >> >
> >> > +
> >> > +/* -----------------------------------------------------------------------------
> >> > + * DMA-BUF
> >> > + */
> >> > +
> >> > +#if CONFIG_DMA_SHARED_BUFFER
> >> > +struct drm_gem_cma_dmabuf_attachment {
> >> > +       struct sg_table sgt;
> >> > +       enum dma_data_direction dir;
> >> > +};
> >> > +
> >> > +static int drm_gem_cma_dmabuf_attach(struct dma_buf *dmabuf, struct device *dev,
> >> > +                                    struct dma_buf_attachment *attach)
> >> > +{
> >> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach;
> >> > +
> >> > +       cma_attach = kzalloc(sizeof(*cma_attach), GFP_KERNEL);
> >> > +       if (!cma_attach)
> >> > +               return -ENOMEM;
> >> > +
> >> > +       cma_attach->dir = DMA_NONE;
> >> > +       attach->priv = cma_attach;
> >> > +
> >> > +       return 0;
> >> > +}
> >> > +
> >> > +static void drm_gem_cma_dmabuf_detach(struct dma_buf *dmabuf,
> >> > +                                     struct dma_buf_attachment *attach)
> >> > +{
> >> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
> >> > +       struct sg_table *sgt;
> >> > +
> >> > +       if (cma_attach == NULL)
> >> > +               return;
> >> > +
> >> > +       sgt = &cma_attach->sgt;
> >> > +
> >> > +       if (cma_attach->dir != DMA_NONE)
> >> > +               dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
> >> > +                               cma_attach->dir);
> >> > +
> >> > +       sg_free_table(sgt);
> >> > +       kfree(cma_attach);
> >> > +       attach->priv = NULL;
> >> > +}
> >> > +
> >> > +static struct sg_table *
> >> > +drm_gem_cma_dmabuf_map(struct dma_buf_attachment *attach,
> >> > +                      enum dma_data_direction dir)
> >> > +{
> >> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
> >> > +       struct drm_gem_cma_object *cma_obj = attach->dmabuf->priv;
> >> > +       struct drm_device *drm = cma_obj->base.dev;
> >> > +       struct scatterlist *rd, *wr;
> >> > +       struct sg_table *sgt;
> >> > +       unsigned int i;
> >> > +       int nents, ret;
> >> > +
> >> > +       DRM_DEBUG_PRIME("\n");
> >> > +
> >> > +       if (WARN_ON(dir == DMA_NONE))
> >> > +               return ERR_PTR(-EINVAL);
> >> > +
> >> > +       /* Return the cached mapping when possible. */
> >> > +       if (cma_attach->dir == dir)
> >> > +               return &cma_attach->sgt;
> >> > +
> >> > +       /* Two mappings with different directions for the same attachment are
> >> > +        * not allowed.
> >> > +        */
> >> > +       if (WARN_ON(cma_attach->dir != DMA_NONE))
> >> > +               return ERR_PTR(-EBUSY);
> >> > +
> >> > +       sgt = &cma_attach->sgt;
> >> > +
> >> > +       ret = sg_alloc_table(sgt, cma_obj->sgt->orig_nents, GFP_KERNEL);
> >> > +       if (ret) {
> >> > +               DRM_ERROR("failed to alloc sgt.\n");
> >> > +               return ERR_PTR(-ENOMEM);
> >> > +       }
> >> > +
> >> > +       mutex_lock(&drm->struct_mutex);
> >> > +
> >> > +       rd = cma_obj->sgt->sgl;
> >> > +       wr = sgt->sgl;
> >> > +       for (i = 0; i < sgt->orig_nents; ++i) {
> >> > +               sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
> >> > +               rd = sg_next(rd);
> >> > +               wr = sg_next(wr);
> >> > +       }
> >> > +
> >> > +       nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
> >> > +       if (!nents) {
> >> > +               DRM_ERROR("failed to map sgl with iommu.\n");
> >> > +               sg_free_table(sgt);
> >> > +               sgt = ERR_PTR(-EIO);
> >> > +               goto done;
> >> > +       }
> >> > +
> >> > +       cma_attach->dir = dir;
> >> > +       attach->priv = cma_attach;
> >> > +
> >> > +       DRM_DEBUG_PRIME("buffer size = %zu\n", cma_obj->base.size);
> >> > +
> >> > +done:
> >> > +       mutex_unlock(&drm->struct_mutex);
> >> > +       return sgt;
> >> > +}
> >> > +
> >> > +static void drm_gem_cma_dmabuf_unmap(struct dma_buf_attachment *attach,
> >> > +                                    struct sg_table *sgt,
> >> > +                                    enum dma_data_direction dir)
> >> > +{
> >> > +       /* Nothing to do. */
> >>
> >> I kinda think that if you don't support multiple mappings with
> >> different directions, you should at least support unmap and then
> >> let someone else map with another direction
> >
> > That would make sense, but in that case I wouldn't be able to cache the
> > mapping. It would probably be better to add proper support for multiple
> > mappings with different directions.
> 
> well, you could still cache it, you'd just have to invalidate that
> cache on transition in direction
> 
> > Given that the mapping is cached in the attachment, this will only be an issue
> > if a driver tries to map the same attachment twice with different directions.
> > Isn't that an uncommon use case that could be fixed later ? :-) I'd like to
> > get this set in v3.11 if possible.
> 
> I don't feel strongly that this should block merging, vs fixing at a
> (not too much later) date

I'm not sure whether we should even allow this at all. If an importer
wants to switch dma directions he should probably map it bidirectionally and
we should finally bite the bullet and add an attachment_sync callback to
flush caches without dropping the mapping on the floor ...

Tbh I'm not even sure whether multiple mappings make any sense at all for
one attachment. Imo that sounds like the importer seriously lost track of
things ;-)
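
In dma_buf_ops that could look something like this (completely
hypothetical, no such callback exists today):

        /* Flush or invalidate CPU caches for an existing mapping
         * without tearing the mapping down. Importers would call this
         * on ownership transitions instead of doing unmap + map.
         */
        int (*attachment_sync)(struct dma_buf_attachment *attach,
                               enum dma_data_direction dir);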
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support
  2013-06-05  8:44         ` Daniel Vetter
@ 2013-06-05 11:00           ` Rob Clark
  2013-06-07 19:25             ` Laurent Pinchart
  0 siblings, 1 reply; 17+ messages in thread
From: Rob Clark @ 2013-06-05 11:00 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: Laurent Pinchart, Laurent Pinchart, dri-devel

On Wed, Jun 5, 2013 at 4:44 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
> On Tue, Jun 04, 2013 at 11:05:36PM -0400, Rob Clark wrote:
>> On Tue, Jun 4, 2013 at 9:22 PM, Laurent Pinchart
>> <laurent.pinchart@ideasonboard.com> wrote:
>> > Hi Rob,
>> >
>> > On Tuesday 04 June 2013 17:56:36 Rob Clark wrote:
>> >> couple small comments, other than those it looks ok
>> >
>> > Thanks for the review.
>> >
>> >> On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart wrote:
>> >> > Signed-off-by: Laurent Pinchart
>> >> > <laurent.pinchart+renesas@ideasonboard.com>
>> >> > ---
>> >> >
>> >> >  drivers/gpu/drm/drm_gem_cma_helper.c | 321 +++++++++++++++++++++++++++++-
>> >> >  include/drm/drm_gem_cma_helper.h     |   9 +
>> >> >  2 files changed, 327 insertions(+), 3 deletions(-)
>> >> >
>> >> > diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
>> >> > index 7a4db4e..1dc2157 100644
>> >> > --- a/drivers/gpu/drm/drm_gem_cma_helper.c
>> >> > +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
>> >> > @@ -21,6 +21,9 @@
>> >> >  #include <linux/slab.h>
>> >> >  #include <linux/mutex.h>
>> >> >  #include <linux/export.h>
>> >> > +#if CONFIG_DMA_SHARED_BUFFER
>> >> > +#include <linux/dma-buf.h>
>> >> > +#endif
>> >>
>> >> I don't think we need the #if, since drm selects DMA_SHARED_BUFFER
>> >>
>> >> and same for other spot below
>> >
>> > Indeed. Will be fixed in the next version.
>> >
>> >> >  #include <linux/dma-mapping.h>
>> >> >
>> >> >  #include <drm/drmP.h>
>> >
>> > [snip]
>> >
>> >> > @@ -291,3 +321,288 @@ void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m
>> >> >  }
>> >> >  EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
>> >> >  #endif
>> >> >
>> >> > +
>> >> > +/* -----------------------------------------------------------------------------
>> >> > + * DMA-BUF
>> >> > + */
>> >> > +
>> >> > +#if CONFIG_DMA_SHARED_BUFFER
>> >> > +struct drm_gem_cma_dmabuf_attachment {
>> >> > +       struct sg_table sgt;
>> >> > +       enum dma_data_direction dir;
>> >> > +};
>> >> > +
>> >> > +static int drm_gem_cma_dmabuf_attach(struct dma_buf *dmabuf, struct device *dev,
>> >> > +                                    struct dma_buf_attachment *attach)
>> >> > +{
>> >> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach;
>> >> > +
>> >> > +       cma_attach = kzalloc(sizeof(*cma_attach), GFP_KERNEL);
>> >> > +       if (!cma_attach)
>> >> > +               return -ENOMEM;
>> >> > +
>> >> > +       cma_attach->dir = DMA_NONE;
>> >> > +       attach->priv = cma_attach;
>> >> > +
>> >> > +       return 0;
>> >> > +}
>> >> > +
>> >> > +static void drm_gem_cma_dmabuf_detach(struct dma_buf *dmabuf,
>> >> > +                                     struct dma_buf_attachment *attach)
>> >> > +{
>> >> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
>> >> > +       struct sg_table *sgt;
>> >> > +
>> >> > +       if (cma_attach == NULL)
>> >> > +               return;
>> >> > +
>> >> > +       sgt = &cma_attach->sgt;
>> >> > +
>> >> > +       if (cma_attach->dir != DMA_NONE)
>> >> > +               dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
>> >> > +                               cma_attach->dir);
>> >> > +
>> >> > +       sg_free_table(sgt);
>> >> > +       kfree(cma_attach);
>> >> > +       attach->priv = NULL;
>> >> > +}
>> >> > +
>> >> > +static struct sg_table *
>> >> > +drm_gem_cma_dmabuf_map(struct dma_buf_attachment *attach,
>> >> > +                      enum dma_data_direction dir)
>> >> > +{
>> >> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
>> >> > +       struct drm_gem_cma_object *cma_obj = attach->dmabuf->priv;
>> >> > +       struct drm_device *drm = cma_obj->base.dev;
>> >> > +       struct scatterlist *rd, *wr;
>> >> > +       struct sg_table *sgt;
>> >> > +       unsigned int i;
>> >> > +       int nents, ret;
>> >> > +
>> >> > +       DRM_DEBUG_PRIME("\n");
>> >> > +
>> >> > +       if (WARN_ON(dir == DMA_NONE))
>> >> > +               return ERR_PTR(-EINVAL);
>> >> > +
>> >> > +       /* Return the cached mapping when possible. */
>> >> > +       if (cma_attach->dir == dir)
>> >> > +               return &cma_attach->sgt;
>> >> > +
>> >> > +       /* Two mappings with different directions for the same attachment are
>> >> > +        * not allowed.
>> >> > +        */
>> >> > +       if (WARN_ON(cma_attach->dir != DMA_NONE))
>> >> > +               return ERR_PTR(-EBUSY);
>> >> > +
>> >> > +       sgt = &cma_attach->sgt;
>> >> > +
>> >> > +       ret = sg_alloc_table(sgt, cma_obj->sgt->orig_nents, GFP_KERNEL);
>> >> > +       if (ret) {
>> >> > +               DRM_ERROR("failed to alloc sgt.\n");
>> >> > +               return ERR_PTR(-ENOMEM);
>> >> > +       }
>> >> > +
>> >> > +       mutex_lock(&drm->struct_mutex);
>> >> > +
>> >> > +       rd = cma_obj->sgt->sgl;
>> >> > +       wr = sgt->sgl;
>> >> > +       for (i = 0; i < sgt->orig_nents; ++i) {
>> >> > +               sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
>> >> > +               rd = sg_next(rd);
>> >> > +               wr = sg_next(wr);
>> >> > +       }
>> >> > +
>> >> > +       nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
>> >> > +       if (!nents) {
>> >> > +               DRM_ERROR("failed to map sgl with iommu.\n");
>> >> > +               sg_free_table(sgt);
>> >> > +               sgt = ERR_PTR(-EIO);
>> >> > +               goto done;
>> >> > +       }
>> >> > +
>> >> > +       cma_attach->dir = dir;
>> >> > +       attach->priv = cma_attach;
>> >> > +
>> >> > +       DRM_DEBUG_PRIME("buffer size = %zu\n", cma_obj->base.size);
>> >> > +
>> >> > +done:
>> >> > +       mutex_unlock(&drm->struct_mutex);
>> >> > +       return sgt;
>> >> > +}
>> >> > +
>> >> > +static void drm_gem_cma_dmabuf_unmap(struct dma_buf_attachment *attach,
>> >> > +                                    struct sg_table *sgt,
>> >> > +                                    enum dma_data_direction dir)
>> >> > +{
>> >> > +       /* Nothing to do. */
>> >>
>> >> I kinda think that if you don't support multiple mappings with
>> >> different directions, you should at least support unmap and then
>> >> let someone else map with another direction
>> >
>> > That would make sense, but in that case I wouldn't be able to cache the
>> > mapping. It would probably be better to add proper support for multiple
>> > mappings with different directions.
>>
>> well, you could still cache it, you'd just have to invalidate that
>> cache on transition in direction
>>
>> > Given that the mapping is cached in the attachment, this will only be an issue
>> > if a driver tries to map the same attachment twice with different directions.
>> > Isn't that an uncommon use case that could be fixed later ? :-) I'd like to
>> > get this set in v3.11 if possible.
>>
>> I don't feel strongly that this should block merging, vs fixing at a
>> (not too much later) date
>
> I'm not sure whether we should even allow this at all. If an importer
> wants to switch dma directions he should probably map it bidirectionally and
> we should finally bite the bullet and add an attachment_sync callback to
> flush caches without dropping the mapping on the floor ...
>
> Tbh I'm not even sure whether multiple mappings make any sense at all for
> one attachment. Imo that sounds like the importer seriously lost track of
> things ;-)

oh, good point.. I was thinking more of the case of multiple importers,
but you are right, there would be multiple attachments in that case, so
this wouldn't be a problem

BR,
-R

> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support
  2013-06-05 11:00           ` Rob Clark
@ 2013-06-07 19:25             ` Laurent Pinchart
  0 siblings, 0 replies; 17+ messages in thread
From: Laurent Pinchart @ 2013-06-07 19:25 UTC (permalink / raw)
  To: Rob Clark; +Cc: Laurent Pinchart, dri-devel

On Wednesday 05 June 2013 07:00:45 Rob Clark wrote:
> On Wed, Jun 5, 2013 at 4:44 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
> > On Tue, Jun 04, 2013 at 11:05:36PM -0400, Rob Clark wrote:
> >> On Tue, Jun 4, 2013 at 9:22 PM, Laurent Pinchart wrote:
> >> > On Tuesday 04 June 2013 17:56:36 Rob Clark wrote:
> >> >> couple small comments, other than those it looks ok
> >> > 
> >> > Thanks for the review.
> >> > 
> >> >> On Mon, Jun 3, 2013 at 10:20 PM, Laurent Pinchart wrote:
> >> >> > Signed-off-by: Laurent Pinchart
> >> >> > <laurent.pinchart+renesas@ideasonboard.com>
> >> >> > ---
> >> >> > 
> >> >> >  drivers/gpu/drm/drm_gem_cma_helper.c | 321 +++++++++++++++++++++++++++++-
> >> >> >  include/drm/drm_gem_cma_helper.h     |   9 +
> >> >> >  2 files changed, 327 insertions(+), 3 deletions(-)
> >> >> > 
> >> >> > diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> >> >> > index 7a4db4e..1dc2157 100644
> >> >> > --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> >> >> > +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> >> >> > @@ -21,6 +21,9 @@
> >> >> > 
> >> >> >  #include <linux/slab.h>
> >> >> >  #include <linux/mutex.h>
> >> >> >  #include <linux/export.h>
> >> >> > 
> >> >> > +#if CONFIG_DMA_SHARED_BUFFER
> >> >> > +#include <linux/dma-buf.h>
> >> >> > +#endif
> >> >> 
> >> >> I don't think we need the #if, since drm selects DMA_SHARED_BUFFER
> >> >> 
> >> >> and same for other spot below
> >> > 
> >> > Indeed. Will be fixed in the next version.
> >> > 
> >> >> >  #include <linux/dma-mapping.h>
> >> >> >  
> >> >> >  #include <drm/drmP.h>
> >> > 
> >> > [snip]
> >> > 
> >> >> > @@ -291,3 +321,288 @@ void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m
> >> >> > 
> >> >> >  }
> >> >> >  EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
> >> >> >  #endif
> >> >> > 
> >> >> > +
> >> >> > +/* -----------------------------------------------------------------------------
> >> >> > + * DMA-BUF
> >> >> > + */
> >> >> > +
> >> >> > +#if CONFIG_DMA_SHARED_BUFFER
> >> >> > +struct drm_gem_cma_dmabuf_attachment {
> >> >> > +       struct sg_table sgt;
> >> >> > +       enum dma_data_direction dir;
> >> >> > +};
> >> >> > +
> >> >> > +static int drm_gem_cma_dmabuf_attach(struct dma_buf *dmabuf, struct device *dev,
> >> >> > +                                    struct dma_buf_attachment *attach)
> >> >> > +{
> >> >> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach;
> >> >> > +
> >> >> > +       cma_attach = kzalloc(sizeof(*cma_attach), GFP_KERNEL);
> >> >> > +       if (!cma_attach)
> >> >> > +               return -ENOMEM;
> >> >> > +
> >> >> > +       cma_attach->dir = DMA_NONE;
> >> >> > +       attach->priv = cma_attach;
> >> >> > +
> >> >> > +       return 0;
> >> >> > +}
> >> >> > +
> >> >> > +static void drm_gem_cma_dmabuf_detach(struct dma_buf *dmabuf,
> >> >> > +                                     struct dma_buf_attachment *attach)
> >> >> > +{
> >> >> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
> >> >> > +       struct sg_table *sgt;
> >> >> > +
> >> >> > +       if (cma_attach == NULL)
> >> >> > +               return;
> >> >> > +
> >> >> > +       sgt = &cma_attach->sgt;
> >> >> > +
> >> >> > +       if (cma_attach->dir != DMA_NONE)
> >> >> > +               dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
> >> >> > +                               cma_attach->dir);
> >> >> > +
> >> >> > +       sg_free_table(sgt);
> >> >> > +       kfree(cma_attach);
> >> >> > +       attach->priv = NULL;
> >> >> > +}
> >> >> > +
> >> >> > +static struct sg_table *
> >> >> > +drm_gem_cma_dmabuf_map(struct dma_buf_attachment *attach,
> >> >> > +                      enum dma_data_direction dir)
> >> >> > +{
> >> >> > +       struct drm_gem_cma_dmabuf_attachment *cma_attach = attach->priv;
> >> >> > +       struct drm_gem_cma_object *cma_obj = attach->dmabuf->priv;
> >> >> > +       struct drm_device *drm = cma_obj->base.dev;
> >> >> > +       struct scatterlist *rd, *wr;
> >> >> > +       struct sg_table *sgt;
> >> >> > +       unsigned int i;
> >> >> > +       int nents, ret;
> >> >> > +
> >> >> > +       DRM_DEBUG_PRIME("\n");
> >> >> > +
> >> >> > +       if (WARN_ON(dir == DMA_NONE))
> >> >> > +               return ERR_PTR(-EINVAL);
> >> >> > +
> >> >> > +       /* Return the cached mapping when possible. */
> >> >> > +       if (cma_attach->dir == dir)
> >> >> > +               return &cma_attach->sgt;
> >> >> > +
> >> >> > +       /* Two mappings with different directions for the same attachment are
> >> >> > +        * not allowed.
> >> >> > +        */
> >> >> > +       if (WARN_ON(cma_attach->dir != DMA_NONE))
> >> >> > +               return ERR_PTR(-EBUSY);
> >> >> > +
> >> >> > +       sgt = &cma_attach->sgt;
> >> >> > +
> >> >> > +       ret = sg_alloc_table(sgt, cma_obj->sgt->orig_nents, GFP_KERNEL);
> >> >> > +       if (ret) {
> >> >> > +               DRM_ERROR("failed to alloc sgt.\n");
> >> >> > +               return ERR_PTR(-ENOMEM);
> >> >> > +       }
> >> >> > +
> >> >> > +       mutex_lock(&drm->struct_mutex);
> >> >> > +
> >> >> > +       rd = cma_obj->sgt->sgl;
> >> >> > +       wr = sgt->sgl;
> >> >> > +       for (i = 0; i < sgt->orig_nents; ++i) {
> >> >> > +               sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
> >> >> > +               rd = sg_next(rd);
> >> >> > +               wr = sg_next(wr);
> >> >> > +       }
> >> >> > +
> >> >> > +       nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
> >> >> > +       if (!nents) {
> >> >> > +               DRM_ERROR("failed to map sgl with iommu.\n");
> >> >> > +               sg_free_table(sgt);
> >> >> > +               sgt = ERR_PTR(-EIO);
> >> >> > +               goto done;
> >> >> > +       }
> >> >> > +
> >> >> > +       cma_attach->dir = dir;
> >> >> > +       attach->priv = cma_attach;
> >> >> > +
> >> >> > +       DRM_DEBUG_PRIME("buffer size = %zu\n", cma_obj->base.size);
> >> >> > +
> >> >> > +done:
> >> >> > +       mutex_unlock(&drm->struct_mutex);
> >> >> > +       return sgt;
> >> >> > +}
> >> >> > +
> >> >> > +static void drm_gem_cma_dmabuf_unmap(struct dma_buf_attachment *attach,
> >> >> > +                                    struct sg_table *sgt,
> >> >> > +                                    enum dma_data_direction dir)
> >> >> > +{
> >> >> > +       /* Nothing to do. */
> >> >> 
> >> >> I kinda think that if you don't support multiple mappings with
> >> >> different directions, you should at least support unmap and then
> >> >> let someone else map with another direction
> >> > 
> >> > That would make sense, but in that case I wouldn't be able to cache the
> >> > mapping. It would probably be better to add proper support for multiple
> >> > mappings with different directions.
> >> 
> >> well, you could still cache it, you'd just have to invalidate that
> >> cache on transition in direction
> >> 
> >> > Given that the mapping is cached in the attachment, this will only be
> >> > an issue if a driver tries to map the same attachment twice with
> >> > different directions. Isn't that an uncommon use case that could be
> >> > fixed later ? :-) I'd like to get this set in v3.11 if possible.
> >> 
> >> I don't feel strongly that this should block merging, vs fixing at a
> >> (not too much later) date
> > 
> > I'm not sure whether we should even allow this at all. If an importer
> > wants to switch dma directions he should probably map it bidirectionally and
> > we should finally bite the bullet and add an attachment_sync callback to
> > flush caches without dropping the mapping on the floor ...
> > 
> > Tbh I'm not even sure whether multiple mappings make any sense at all for
> > one attachment. Imo that sounds like the importer seriously lost track of
> > things ;-)
> 
> oh, good point.. I was thinking more the case of multiple importers,
> but you are right, there would be multiple attachments in that case so
> this wouldn't be the problem

Problem solved, great :-) I've posted v3.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2013-06-07 19:25 UTC | newest]

Thread overview: 17+ messages
2013-06-04  2:20 [PATCH v2 0/5] GEM CMA DMA-BUF support Laurent Pinchart
2013-06-04  2:20 ` [PATCH v2 1/5] drm/gem: Split drm_gem_mmap() into object search and object mapping Laurent Pinchart
2013-06-04 11:33   ` Rob Clark
2013-06-04  2:20 ` [PATCH v2 2/5] drm/omap: Use drm_gem_mmap_obj() to implement dma-buf mmap Laurent Pinchart
2013-06-04 11:33   ` Rob Clark
2013-06-04 18:03     ` Laurent Pinchart
2013-06-04  2:20 ` [PATCH v2 3/5] drm: GEM CMA: Split object creation into object alloc and DMA memory alloc Laurent Pinchart
2013-06-04 20:07   ` Rob Clark
2013-06-04  2:20 ` [PATCH v2 4/5] drm: GEM CMA: Split object mapping into GEM mapping and CMA mapping Laurent Pinchart
2013-06-04 20:19   ` Rob Clark
2013-06-04  2:20 ` [PATCH v2 5/5] drm: GEM CMA: Add DRM PRIME support Laurent Pinchart
2013-06-04 21:56   ` Rob Clark
2013-06-05  1:22     ` Laurent Pinchart
2013-06-05  3:05       ` Rob Clark
2013-06-05  8:44         ` Daniel Vetter
2013-06-05 11:00           ` Rob Clark
2013-06-07 19:25             ` Laurent Pinchart
