dri-devel.lists.freedesktop.org archive mirror
* [PATCH 0/6] drm/lima: simplify driver by using more drm helpers
@ 2019-09-26 14:10 Qiang Yu
  2019-09-26 14:10 ` [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup Qiang Yu
                   ` (6 more replies)
  0 siblings, 7 replies; 13+ messages in thread
From: Qiang Yu @ 2019-09-26 14:10 UTC (permalink / raw)
  To: dri-devel; +Cc: lima, Maxime Ripard, David Airlie, Qiang Yu, Sean Paul

By using shared drm helpers:
1. drm_gem_objects_lookup
2. drm_gem_(un)lock_reservations
3. drm_gem_shmem_helpers
we can simplify the lima driver considerably and benefit from
future updates to these helpers.

drm_gem_objects_lookup needs a refinement in order to be usable by lima.

Note:
1. changes to panfrost and v3d are just compile tested.
2. patch series is based on drm-misc-next branch
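
For illustration, here is a userspace-runnable toy model of the refined
drm_gem_objects_lookup contract this series builds on: the caller passes a
kernel-side u32 handle array, and the helper only resolves handles to
objects while taking a reference on each. The handle table, names, and
error codes are illustrative stand-ins, not the real DRM implementation.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy GEM object; the real struct drm_gem_object is far richer. */
struct gem_object { int refcount; };

#define MAX_HANDLES 8
static struct gem_object *handle_table[MAX_HANDLES];

static int gem_objects_lookup(const unsigned int *handles, int count,
			      struct gem_object ***objs_out)
{
	struct gem_object **objs = calloc(count, sizeof(*objs));
	int i;

	if (!objs)
		return -12; /* -ENOMEM */

	for (i = 0; i < count; i++) {
		struct gem_object *obj = handles[i] < MAX_HANDLES ?
					 handle_table[handles[i]] : NULL;
		if (!obj) {
			while (--i >= 0)
				objs[i]->refcount--; /* drop refs taken so far */
			free(objs);
			return -2; /* -ENOENT */
		}
		obj->refcount++; /* reference now held by the returned array */
		objs[i] = obj;
	}
	*objs_out = objs;
	return 0;
}
```

Because no copy_from_user happens inside the helper, each driver is free to
gather handles from whatever ioctl layout it uses before calling it.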

Qiang Yu (6):
  drm/gem: refine drm_gem_objects_lookup
  drm/v3d: use drm_gem_objects_lookup
  drm/lima: use drm_gem_objects_lookup
  drm/lima: use drm_gem_shmem_helpers
  drm/lima: use drm_gem_(un)lock_reservations
  drm/lima: add __GFP_NOWARN flag to all dma_alloc_wc

 drivers/gpu/drm/drm_gem.c               |  28 +--
 drivers/gpu/drm/lima/Kconfig            |   1 +
 drivers/gpu/drm/lima/Makefile           |   4 +-
 drivers/gpu/drm/lima/lima_device.c      |   2 +-
 drivers/gpu/drm/lima/lima_drv.c         |  27 +--
 drivers/gpu/drm/lima/lima_gem.c         | 254 ++++++++++--------------
 drivers/gpu/drm/lima/lima_gem.h         |  32 ++-
 drivers/gpu/drm/lima/lima_gem_prime.c   |  46 -----
 drivers/gpu/drm/lima/lima_gem_prime.h   |  13 --
 drivers/gpu/drm/lima/lima_mmu.c         |   1 -
 drivers/gpu/drm/lima/lima_object.c      | 119 -----------
 drivers/gpu/drm/lima/lima_object.h      |  35 ----
 drivers/gpu/drm/lima/lima_sched.c       |   6 +-
 drivers/gpu/drm/lima/lima_vm.c          |  87 ++++----
 drivers/gpu/drm/panfrost/panfrost_drv.c |  23 ++-
 drivers/gpu/drm/v3d/v3d_gem.c           |  35 +---
 include/drm/drm_gem.h                   |   2 +-
 17 files changed, 222 insertions(+), 493 deletions(-)
 delete mode 100644 drivers/gpu/drm/lima/lima_gem_prime.c
 delete mode 100644 drivers/gpu/drm/lima/lima_gem_prime.h
 delete mode 100644 drivers/gpu/drm/lima/lima_object.c
 delete mode 100644 drivers/gpu/drm/lima/lima_object.h

-- 
2.17.1

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup
  2019-09-26 14:10 [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Qiang Yu
@ 2019-09-26 14:10 ` Qiang Yu
  2019-09-26 15:01   ` Rob Herring
  2019-09-26 14:10 ` [PATCH 2/6] drm/v3d: use drm_gem_objects_lookup Qiang Yu
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 13+ messages in thread
From: Qiang Yu @ 2019-09-26 14:10 UTC (permalink / raw)
  To: dri-devel
  Cc: Tomeu Vizoso, lima, Maxime Ripard, Steven Price, David Airlie,
	Qiang Yu, Sean Paul, Alyssa Rosenzweig

Do not use userspace bo handles directly; leave the user-to-kernel
copy work to the drivers calling this function.

This is for drivers like lima, which does not pass GEM bo
handles contiguously in an array in the ioctl argument.
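
A userspace-runnable sketch of the pattern this enables for a lima-style
submit, where each bo entry carries a handle next to per-bo flags (the
struct and function names are hypothetical, for illustration only):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical lima-style submit entry: the handle sits next to per-bo
 * flags, so userspace memory holds no contiguous u32 handle array that
 * a copy_from_user-based helper could consume directly. */
struct submit_bo {
	unsigned int handle;
	unsigned int flags;
};

/* Gather the handles into the flat array the refined helper expects;
 * caller frees the result. */
static unsigned int *gather_handles(const struct submit_bo *bos, int count)
{
	unsigned int *handles = malloc(count * sizeof(*handles));
	int i;

	if (!handles)
		return NULL;
	for (i = 0; i < count; i++)
		handles[i] = bos[i].handle;
	return handles;
}
```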

Cc: Rob Herring <robh@kernel.org>
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/drm_gem.c               | 28 +++++++------------------
 drivers/gpu/drm/panfrost/panfrost_drv.c | 23 +++++++++++++++++---
 include/drm/drm_gem.h                   |  2 +-
 3 files changed, 29 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 6854f5867d51..9f73e5f3b53f 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -679,11 +679,11 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
 /**
  * drm_gem_objects_lookup - look up GEM objects from an array of handles
  * @filp: DRM file private date
- * @bo_handles: user pointer to array of userspace handle
+ * @bo_handles: array of GEM object handles
  * @count: size of handle array
  * @objs_out: returned pointer to array of drm_gem_object pointers
  *
- * Takes an array of userspace handles and returns a newly allocated array of
+ * Takes an array of GEM object handles and returns a newly allocated array of
  * GEM objects.
  *
  * For a single handle lookup, use drm_gem_object_lookup().
@@ -695,11 +695,10 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
  * failure. 0 is returned on success.
  *
  */
-int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
+int drm_gem_objects_lookup(struct drm_file *filp, u32 *bo_handles,
 			   int count, struct drm_gem_object ***objs_out)
 {
 	int ret;
-	u32 *handles;
 	struct drm_gem_object **objs;
 
 	if (!count)
@@ -710,23 +709,12 @@ int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
 	if (!objs)
 		return -ENOMEM;
 
-	handles = kvmalloc_array(count, sizeof(u32), GFP_KERNEL);
-	if (!handles) {
-		ret = -ENOMEM;
-		goto out;
-	}
-
-	if (copy_from_user(handles, bo_handles, count * sizeof(u32))) {
-		ret = -EFAULT;
-		DRM_DEBUG("Failed to copy in GEM handles\n");
-		goto out;
-	}
-
-	ret = objects_lookup(filp, handles, count, objs);
-	*objs_out = objs;
+	ret = objects_lookup(filp, bo_handles, count, objs);
+	if (ret)
+		kvfree(objs);
+	else
+		*objs_out = objs;
 
-out:
-	kvfree(handles);
 	return ret;
 
 }
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index d74442d71048..616f8fcc4652 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -119,6 +119,9 @@ panfrost_lookup_bos(struct drm_device *dev,
 		  struct drm_panfrost_submit *args,
 		  struct panfrost_job *job)
 {
+	u32 *handles;
+	int ret;
+
 	job->bo_count = args->bo_handle_count;
 
 	if (!job->bo_count)
@@ -130,9 +133,23 @@ panfrost_lookup_bos(struct drm_device *dev,
 	if (!job->implicit_fences)
 		return -ENOMEM;
 
-	return drm_gem_objects_lookup(file_priv,
-				      (void __user *)(uintptr_t)args->bo_handles,
-				      job->bo_count, &job->bos);
+	handles = kvmalloc_array(job->bo_count, sizeof(u32), GFP_KERNEL);
+	if (!handles)
+		return -ENOMEM;
+
+	if (copy_from_user(handles, (void __user *)(uintptr_t)args->bo_handles,
+			   job->bo_count * sizeof(u32))) {
+		DRM_DEBUG("Failed to copy in GEM handles\n");
+		ret = -EFAULT;
+		goto out;
+	}
+
+	ret = drm_gem_objects_lookup(file_priv, handles,
+				     job->bo_count, &job->bos);
+
+out:
+	kvfree(handles);
+	return ret;
 }
 
 /**
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 6aaba14f5972..c23ac36231ba 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -387,7 +387,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj);
 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 		bool dirty, bool accessed);
 
-int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
+int drm_gem_objects_lookup(struct drm_file *filp, u32 *bo_handles,
 			   int count, struct drm_gem_object ***objs_out);
 struct drm_gem_object *drm_gem_object_lookup(struct drm_file *filp, u32 handle);
 long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
-- 
2.17.1



* [PATCH 2/6] drm/v3d: use drm_gem_objects_lookup
  2019-09-26 14:10 [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Qiang Yu
  2019-09-26 14:10 ` [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup Qiang Yu
@ 2019-09-26 14:10 ` Qiang Yu
  2019-09-26 14:10 ` [PATCH 3/6] drm/lima: " Qiang Yu
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Qiang Yu @ 2019-09-26 14:10 UTC (permalink / raw)
  To: dri-devel; +Cc: lima, Maxime Ripard, David Airlie, Qiang Yu, Sean Paul

Cc: Eric Anholt <eric@anholt.net>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/v3d/v3d_gem.c | 35 ++++++-----------------------------
 1 file changed, 6 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
index 4c4b59ae2c81..b7c4e56dafd2 100644
--- a/drivers/gpu/drm/v3d/v3d_gem.c
+++ b/drivers/gpu/drm/v3d/v3d_gem.c
@@ -291,8 +291,7 @@ v3d_lookup_bos(struct drm_device *dev,
 	       u32 bo_count)
 {
 	u32 *handles;
-	int ret = 0;
-	int i;
+	int ret;
 
 	job->bo_count = bo_count;
 
@@ -304,19 +303,10 @@ v3d_lookup_bos(struct drm_device *dev,
 		return -EINVAL;
 	}
 
-	job->bo = kvmalloc_array(job->bo_count,
-				 sizeof(struct drm_gem_cma_object *),
-				 GFP_KERNEL | __GFP_ZERO);
-	if (!job->bo) {
-		DRM_DEBUG("Failed to allocate validated BO pointers\n");
-		return -ENOMEM;
-	}
-
 	handles = kvmalloc_array(job->bo_count, sizeof(u32), GFP_KERNEL);
 	if (!handles) {
-		ret = -ENOMEM;
 		DRM_DEBUG("Failed to allocate incoming GEM handles\n");
-		goto fail;
+		return -ENOMEM;
 	}
 
 	if (copy_from_user(handles,
@@ -324,26 +314,13 @@ v3d_lookup_bos(struct drm_device *dev,
 			   job->bo_count * sizeof(u32))) {
 		ret = -EFAULT;
 		DRM_DEBUG("Failed to copy in GEM handles\n");
-		goto fail;
+		goto out;
 	}
 
-	spin_lock(&file_priv->table_lock);
-	for (i = 0; i < job->bo_count; i++) {
-		struct drm_gem_object *bo = idr_find(&file_priv->object_idr,
-						     handles[i]);
-		if (!bo) {
-			DRM_DEBUG("Failed to look up GEM BO %d: %d\n",
-				  i, handles[i]);
-			ret = -ENOENT;
-			spin_unlock(&file_priv->table_lock);
-			goto fail;
-		}
-		drm_gem_object_get(bo);
-		job->bo[i] = bo;
-	}
-	spin_unlock(&file_priv->table_lock);
+	ret = drm_gem_objects_lookup(file_priv, handles,
+				     job->bo_count, &job->bo);
 
-fail:
+out:
 	kvfree(handles);
 	return ret;
 }
-- 
2.17.1



* [PATCH 3/6] drm/lima: use drm_gem_objects_lookup
  2019-09-26 14:10 [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Qiang Yu
  2019-09-26 14:10 ` [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup Qiang Yu
  2019-09-26 14:10 ` [PATCH 2/6] drm/v3d: use drm_gem_objects_lookup Qiang Yu
@ 2019-09-26 14:10 ` Qiang Yu
  2019-09-26 14:10 ` [PATCH 4/6] drm/lima: use drm_gem_shmem_helpers Qiang Yu
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Qiang Yu @ 2019-09-26 14:10 UTC (permalink / raw)
  To: dri-devel; +Cc: lima, Maxime Ripard, David Airlie, Qiang Yu, Sean Paul

Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_drv.c |  5 ++-
 drivers/gpu/drm/lima/lima_gem.c | 73 +++++++++++++++++++--------------
 2 files changed, 46 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c
index 75ec703d22e0..cc850a522fac 100644
--- a/drivers/gpu/drm/lima/lima_drv.c
+++ b/drivers/gpu/drm/lima/lima_drv.c
@@ -108,7 +108,7 @@ static int lima_ioctl_gem_submit(struct drm_device *dev, void *data, struct drm_
 	if (args->frame_size != pipe->frame_size)
 		return -EINVAL;
 
-	bos = kvcalloc(args->nr_bos, sizeof(*submit.bos) + sizeof(*submit.lbos), GFP_KERNEL);
+	bos = kvcalloc(args->nr_bos, sizeof(*submit.bos), GFP_KERNEL);
 	if (!bos)
 		return -ENOMEM;
 
@@ -142,7 +142,6 @@ static int lima_ioctl_gem_submit(struct drm_device *dev, void *data, struct drm_
 
 	submit.pipe = args->pipe;
 	submit.bos = bos;
-	submit.lbos = (void *)bos + size;
 	submit.nr_bos = args->nr_bos;
 	submit.task = task;
 	submit.ctx = ctx;
@@ -159,6 +158,8 @@ static int lima_ioctl_gem_submit(struct drm_device *dev, void *data, struct drm_
 		kmem_cache_free(pipe->task_slab, task);
 out0:
 	kvfree(bos);
+	if (submit.lbos)
+		kvfree(submit.lbos);
 	return err;
 }
 
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index ff3d9acc24fc..f5aeaa1f61c8 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -130,6 +130,25 @@ int lima_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return 0;
 }
 
+static int lima_gem_lookup_bos(struct drm_file *file, struct lima_submit *submit)
+{
+	int i, ret;
+	u32 *handles;
+
+	handles = kvmalloc_array(submit->nr_bos, sizeof(u32), GFP_KERNEL);
+	if (!handles)
+		return -ENOMEM;
+
+	for (i = 0; i < submit->nr_bos; i++)
+		handles[i] = submit->bos[i].handle;
+
+	ret = drm_gem_objects_lookup(file, handles, submit->nr_bos,
+				     (struct drm_gem_object ***)&submit->lbos);
+
+	kvfree(handles);
+	return ret;
+}
+
 static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo,
 			    bool write, bool explicit)
 {
@@ -236,7 +255,7 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 	struct lima_vm *vm = priv->vm;
 	struct drm_syncobj *out_sync = NULL;
 	struct dma_fence *fence;
-	struct lima_bo **bos = submit->lbos;
+	struct lima_bo **bos;
 
 	if (submit->out_sync) {
 		out_sync = drm_syncobj_find(file, submit->out_sync);
@@ -244,43 +263,37 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 			return -ENOENT;
 	}
 
-	for (i = 0; i < submit->nr_bos; i++) {
-		struct drm_gem_object *obj;
-		struct lima_bo *bo;
-
-		obj = drm_gem_object_lookup(file, submit->bos[i].handle);
-		if (!obj) {
-			err = -ENOENT;
-			goto err_out0;
-		}
+	err = lima_gem_lookup_bos(file, submit);
+	if (err)
+		goto err_out0;
 
-		bo = to_lima_bo(obj);
+	bos = submit->lbos;
 
-		/* increase refcnt of gpu va map to prevent unmapped when executing,
-		 * will be decreased when task done
-		 */
-		err = lima_vm_bo_add(vm, bo, false);
+	/* increase refcnt of gpu va map to prevent unmapped when executing,
+	 * will be decreased when task done
+	 */
+	for (i = 0; i < submit->nr_bos; i++) {
+		err = lima_vm_bo_add(vm, bos[i], false);
 		if (err) {
-			drm_gem_object_put_unlocked(obj);
-			goto err_out0;
+			for (i--; i >= 0; i--)
+				lima_vm_bo_del(vm, bos[i]);
+			goto err_out1;
 		}
-
-		bos[i] = bo;
 	}
 
 	err = lima_gem_lock_bos(bos, submit->nr_bos, &ctx);
 	if (err)
-		goto err_out0;
+		goto err_out2;
 
 	err = lima_sched_task_init(
 		submit->task, submit->ctx->context + submit->pipe,
 		bos, submit->nr_bos, vm);
 	if (err)
-		goto err_out1;
+		goto err_out3;
 
 	err = lima_gem_add_deps(file, submit);
 	if (err)
-		goto err_out2;
+		goto err_out4;
 
 	for (i = 0; i < submit->nr_bos; i++) {
 		err = lima_gem_sync_bo(
@@ -288,7 +301,7 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 			submit->bos[i].flags & LIMA_SUBMIT_BO_WRITE,
 			submit->flags & LIMA_SUBMIT_FLAG_EXPLICIT_FENCE);
 		if (err)
-			goto err_out2;
+			goto err_out4;
 	}
 
 	fence = lima_sched_context_queue_task(
@@ -315,17 +328,17 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 
 	return 0;
 
-err_out2:
+err_out4:
 	lima_sched_task_fini(submit->task);
-err_out1:
+err_out3:
 	lima_gem_unlock_bos(bos, submit->nr_bos, &ctx);
-err_out0:
-	for (i = 0; i < submit->nr_bos; i++) {
-		if (!bos[i])
-			break;
+err_out2:
+	for (i = 0; i < submit->nr_bos; i++)
 		lima_vm_bo_del(vm, bos[i]);
+err_out1:
+	for (i = 0; i < submit->nr_bos; i++)
 		drm_gem_object_put_unlocked(&bos[i]->gem);
-	}
+err_out0:
 	if (out_sync)
 		drm_syncobj_put(out_sync);
 	return err;
-- 
2.17.1



* [PATCH 4/6] drm/lima: use drm_gem_shmem_helpers
  2019-09-26 14:10 [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Qiang Yu
                   ` (2 preceding siblings ...)
  2019-09-26 14:10 ` [PATCH 3/6] drm/lima: " Qiang Yu
@ 2019-09-26 14:10 ` Qiang Yu
  2019-09-26 14:10 ` [PATCH 5/6] drm/lima: use drm_gem_(un)lock_reservations Qiang Yu
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Qiang Yu @ 2019-09-26 14:10 UTC (permalink / raw)
  To: dri-devel; +Cc: lima, Maxime Ripard, David Airlie, Qiang Yu, Sean Paul

There is no need to maintain our own shmem memory management
code, as drm_gem_shmem_helpers already provides it. We can
also benefit from others' work on this shared code.

This is also a preparation for implementing buffer madv.
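
The conversion keeps lima's constraint that backing pages be 32-bit
addressable (Mali Utgard has a 32-bit address space) by adjusting the shmem
mapping's GFP mask. A minimal sketch of that bit manipulation, using
illustrative flag values rather than the kernel's real ones:

```c
#include <assert.h>

typedef unsigned int gfp_t;

/* Illustrative values only; the real kernel GFP flags differ. */
#define __GFP_HIGHMEM 0x01u
#define __GFP_DMA32   0x02u

/* Mirror the patch: drop highmem pages, force 32-bit-capable ones. */
static gfp_t lima_adjust_gfp_mask(gfp_t mask)
{
	mask &= ~__GFP_HIGHMEM;
	mask |= __GFP_DMA32;
	return mask;
}
```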

Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/Kconfig          |   1 +
 drivers/gpu/drm/lima/Makefile         |   4 +-
 drivers/gpu/drm/lima/lima_drv.c       |  22 +---
 drivers/gpu/drm/lima/lima_gem.c       | 141 +++++++++++++-------------
 drivers/gpu/drm/lima/lima_gem.h       |  32 ++++--
 drivers/gpu/drm/lima/lima_gem_prime.c |  46 ---------
 drivers/gpu/drm/lima/lima_gem_prime.h |  13 ---
 drivers/gpu/drm/lima/lima_mmu.c       |   1 -
 drivers/gpu/drm/lima/lima_object.c    | 119 ----------------------
 drivers/gpu/drm/lima/lima_object.h    |  35 -------
 drivers/gpu/drm/lima/lima_sched.c     |   6 +-
 drivers/gpu/drm/lima/lima_vm.c        |  85 ++++++++--------
 12 files changed, 145 insertions(+), 360 deletions(-)
 delete mode 100644 drivers/gpu/drm/lima/lima_gem_prime.c
 delete mode 100644 drivers/gpu/drm/lima/lima_gem_prime.h
 delete mode 100644 drivers/gpu/drm/lima/lima_object.c
 delete mode 100644 drivers/gpu/drm/lima/lima_object.h

diff --git a/drivers/gpu/drm/lima/Kconfig b/drivers/gpu/drm/lima/Kconfig
index bb4ddc6bb0a6..571dc369a7e9 100644
--- a/drivers/gpu/drm/lima/Kconfig
+++ b/drivers/gpu/drm/lima/Kconfig
@@ -9,5 +9,6 @@ config DRM_LIMA
        depends on COMMON_CLK
        depends on OF
        select DRM_SCHED
+       select DRM_GEM_SHMEM_HELPER
        help
          DRM driver for ARM Mali 400/450 GPUs.
diff --git a/drivers/gpu/drm/lima/Makefile b/drivers/gpu/drm/lima/Makefile
index 38cc70281ba5..a85444b0a1d4 100644
--- a/drivers/gpu/drm/lima/Makefile
+++ b/drivers/gpu/drm/lima/Makefile
@@ -13,9 +13,7 @@ lima-y := \
 	lima_vm.o \
 	lima_sched.o \
 	lima_ctx.o \
-	lima_gem_prime.o \
 	lima_dlbu.o \
-	lima_bcast.o \
-	lima_object.o
+	lima_bcast.o
 
 obj-$(CONFIG_DRM_LIMA) += lima.o
diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c
index cc850a522fac..5d551bfdb60b 100644
--- a/drivers/gpu/drm/lima/lima_drv.c
+++ b/drivers/gpu/drm/lima/lima_drv.c
@@ -12,7 +12,6 @@
 
 #include "lima_drv.h"
 #include "lima_gem.h"
-#include "lima_gem_prime.h"
 #include "lima_vm.h"
 
 int lima_sched_timeout_ms;
@@ -241,16 +240,7 @@ static const struct drm_ioctl_desc lima_drm_driver_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(LIMA_CTX_FREE, lima_ioctl_ctx_free, DRM_RENDER_ALLOW),
 };
 
-static const struct file_operations lima_drm_driver_fops = {
-	.owner              = THIS_MODULE,
-	.open               = drm_open,
-	.release            = drm_release,
-	.unlocked_ioctl     = drm_ioctl,
-#ifdef CONFIG_COMPAT
-	.compat_ioctl       = drm_compat_ioctl,
-#endif
-	.mmap               = lima_gem_mmap,
-};
+DEFINE_DRM_GEM_SHMEM_FOPS(lima_drm_driver_fops);
 
 static struct drm_driver lima_drm_driver = {
 	.driver_features    = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ,
@@ -259,10 +249,6 @@ static struct drm_driver lima_drm_driver = {
 	.ioctls             = lima_drm_driver_ioctls,
 	.num_ioctls         = ARRAY_SIZE(lima_drm_driver_ioctls),
 	.fops               = &lima_drm_driver_fops,
-	.gem_free_object_unlocked = lima_gem_free_object,
-	.gem_open_object    = lima_gem_object_open,
-	.gem_close_object   = lima_gem_object_close,
-	.gem_vm_ops         = &lima_gem_vm_ops,
 	.name               = "lima",
 	.desc               = "lima DRM",
 	.date               = "20190217",
@@ -270,11 +256,11 @@ static struct drm_driver lima_drm_driver = {
 	.minor              = 0,
 	.patchlevel         = 0,
 
+	.gem_create_object  = lima_gem_create_object,
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
-	.gem_prime_import_sg_table = lima_gem_prime_import_sg_table,
+	.gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table,
 	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
-	.gem_prime_get_sg_table = lima_gem_prime_get_sg_table,
-	.gem_prime_mmap = lima_gem_prime_mmap,
+	.gem_prime_mmap = drm_gem_prime_mmap,
 };
 
 static int lima_pdev_probe(struct platform_device *pdev)
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index f5aeaa1f61c8..eb70e4429588 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -3,7 +3,7 @@
 
 #include <linux/mm.h>
 #include <linux/sync_file.h>
-#include <linux/pfn_t.h>
+#include <linux/pagemap.h>
 
 #include <drm/drm_file.h>
 #include <drm/drm_syncobj.h>
@@ -13,40 +13,55 @@
 
 #include "lima_drv.h"
 #include "lima_gem.h"
-#include "lima_gem_prime.h"
 #include "lima_vm.h"
-#include "lima_object.h"
 
 int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
 			   u32 size, u32 flags, u32 *handle)
 {
 	int err;
-	struct lima_bo *bo;
-	struct lima_device *ldev = to_lima_dev(dev);
+	gfp_t mask;
+	struct drm_gem_shmem_object *shmem;
+	struct drm_gem_object *obj;
+	struct sg_table *sgt;
+
+	shmem = drm_gem_shmem_create(dev, size);
+	if (IS_ERR(shmem))
+		return PTR_ERR(shmem);
 
-	bo = lima_bo_create(ldev, size, flags, NULL);
-	if (IS_ERR(bo))
-		return PTR_ERR(bo);
+	obj = &shmem->base;
 
-	err = drm_gem_handle_create(file, &bo->gem, handle);
+	/* Mali Utgard GPU can only support 32bit address space */
+	mask = mapping_gfp_mask(obj->filp->f_mapping);
+	mask &= ~__GFP_HIGHMEM;
+	mask |= __GFP_DMA32;
+	mapping_set_gfp_mask(obj->filp->f_mapping, mask);
+
+	sgt = drm_gem_shmem_get_pages_sgt(obj);
+	if (IS_ERR(sgt)) {
+		err = PTR_ERR(sgt);
+		goto out;
+	}
 
+	err = drm_gem_handle_create(file, obj, handle);
+
+out:
 	/* drop reference from allocate - handle holds it now */
-	drm_gem_object_put_unlocked(&bo->gem);
+	drm_gem_object_put_unlocked(obj);
 
 	return err;
 }
 
-void lima_gem_free_object(struct drm_gem_object *obj)
+static void lima_gem_free_object(struct drm_gem_object *obj)
 {
 	struct lima_bo *bo = to_lima_bo(obj);
 
 	if (!list_empty(&bo->va))
 		dev_err(obj->dev->dev, "lima gem free bo still has va\n");
 
-	lima_bo_destroy(bo);
+	drm_gem_shmem_free_object(obj);
 }
 
-int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file)
+static int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file)
 {
 	struct lima_bo *bo = to_lima_bo(obj);
 	struct lima_drm_priv *priv = to_lima_drm_priv(file);
@@ -55,7 +70,7 @@ int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file)
 	return lima_vm_bo_add(vm, bo, true);
 }
 
-void lima_gem_object_close(struct drm_gem_object *obj, struct drm_file *file)
+static void lima_gem_object_close(struct drm_gem_object *obj, struct drm_file *file)
 {
 	struct lima_bo *bo = to_lima_bo(obj);
 	struct lima_drm_priv *priv = to_lima_drm_priv(file);
@@ -64,13 +79,41 @@ void lima_gem_object_close(struct drm_gem_object *obj, struct drm_file *file)
 	lima_vm_bo_del(vm, bo);
 }
 
+static const struct drm_gem_object_funcs lima_gem_funcs = {
+	.free = lima_gem_free_object,
+	.open = lima_gem_object_open,
+	.close = lima_gem_object_close,
+	.print_info = drm_gem_shmem_print_info,
+	.pin = drm_gem_shmem_pin,
+	.unpin = drm_gem_shmem_unpin,
+	.get_sg_table = drm_gem_shmem_get_sg_table,
+	.vmap = drm_gem_shmem_vmap,
+	.vunmap = drm_gem_shmem_vunmap,
+	.vm_ops = &drm_gem_shmem_vm_ops,
+};
+
+struct drm_gem_object *lima_gem_create_object(struct drm_device *dev, size_t size)
+{
+	struct lima_bo *bo;
+
+	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+	if (!bo)
+		return NULL;
+
+	mutex_init(&bo->lock);
+	INIT_LIST_HEAD(&bo->va);
+
+	bo->base.base.funcs = &lima_gem_funcs;
+
+	return &bo->base.base;
+}
+
 int lima_gem_get_info(struct drm_file *file, u32 handle, u32 *va, u64 *offset)
 {
 	struct drm_gem_object *obj;
 	struct lima_bo *bo;
 	struct lima_drm_priv *priv = to_lima_drm_priv(file);
 	struct lima_vm *vm = priv->vm;
-	int err;
 
 	obj = drm_gem_object_lookup(file, handle);
 	if (!obj)
@@ -80,53 +123,9 @@ int lima_gem_get_info(struct drm_file *file, u32 handle, u32 *va, u64 *offset)
 
 	*va = lima_vm_get_va(vm, bo);
 
-	err = drm_gem_create_mmap_offset(obj);
-	if (!err)
-		*offset = drm_vma_node_offset_addr(&obj->vma_node);
+	*offset = drm_vma_node_offset_addr(&obj->vma_node);
 
 	drm_gem_object_put_unlocked(obj);
-	return err;
-}
-
-static vm_fault_t lima_gem_fault(struct vm_fault *vmf)
-{
-	struct vm_area_struct *vma = vmf->vma;
-	struct drm_gem_object *obj = vma->vm_private_data;
-	struct lima_bo *bo = to_lima_bo(obj);
-	pfn_t pfn;
-	pgoff_t pgoff;
-
-	/* We don't use vmf->pgoff since that has the fake offset: */
-	pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
-	pfn = __pfn_to_pfn_t(page_to_pfn(bo->pages[pgoff]), PFN_DEV);
-
-	return vmf_insert_mixed(vma, vmf->address, pfn);
-}
-
-const struct vm_operations_struct lima_gem_vm_ops = {
-	.fault = lima_gem_fault,
-	.open = drm_gem_vm_open,
-	.close = drm_gem_vm_close,
-};
-
-void lima_set_vma_flags(struct vm_area_struct *vma)
-{
-	pgprot_t prot = vm_get_page_prot(vma->vm_flags);
-
-	vma->vm_flags |= VM_MIXEDMAP;
-	vma->vm_flags &= ~VM_PFNMAP;
-	vma->vm_page_prot = pgprot_writecombine(prot);
-}
-
-int lima_gem_mmap(struct file *filp, struct vm_area_struct *vma)
-{
-	int ret;
-
-	ret = drm_gem_mmap(filp, vma);
-	if (ret)
-		return ret;
-
-	lima_set_vma_flags(vma);
 	return 0;
 }
 
@@ -155,7 +154,7 @@ static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo,
 	int err = 0;
 
 	if (!write) {
-		err = dma_resv_reserve_shared(bo->gem.resv, 1);
+		err = dma_resv_reserve_shared(lima_bo_resv(bo), 1);
 		if (err)
 			return err;
 	}
@@ -164,7 +163,7 @@ static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo,
 	if (explicit)
 		return 0;
 
-	return drm_gem_fence_array_add_implicit(&task->deps, &bo->gem, write);
+	return drm_gem_fence_array_add_implicit(&task->deps, &bo->base.base, write);
 }
 
 static int lima_gem_lock_bos(struct lima_bo **bos, u32 nr_bos,
@@ -181,7 +180,7 @@ static int lima_gem_lock_bos(struct lima_bo **bos, u32 nr_bos,
 			continue;
 		}
 
-		ret = ww_mutex_lock_interruptible(&bos[i]->gem.resv->lock, ctx);
+		ret = ww_mutex_lock_interruptible(&lima_bo_resv(bos[i])->lock, ctx);
 		if (ret < 0) {
 			contended = i;
 			goto err;
@@ -193,15 +192,15 @@ static int lima_gem_lock_bos(struct lima_bo **bos, u32 nr_bos,
 
 err:
 	for (i--; i >= 0; i--)
-		ww_mutex_unlock(&bos[i]->gem.resv->lock);
+		ww_mutex_unlock(&lima_bo_resv(bos[i])->lock);
 
 	if (slow_locked >= 0)
-		ww_mutex_unlock(&bos[slow_locked]->gem.resv->lock);
+		ww_mutex_unlock(&lima_bo_resv(bos[slow_locked])->lock);
 
 	if (ret == -EDEADLK) {
 		/* we lost out in a seqno race, lock and retry.. */
 		ret = ww_mutex_lock_slow_interruptible(
-			&bos[contended]->gem.resv->lock, ctx);
+			&lima_bo_resv(bos[contended])->lock, ctx);
 		if (!ret) {
 			slow_locked = contended;
 			goto retry;
@@ -218,7 +217,7 @@ static void lima_gem_unlock_bos(struct lima_bo **bos, u32 nr_bos,
 	int i;
 
 	for (i = 0; i < nr_bos; i++)
-		ww_mutex_unlock(&bos[i]->gem.resv->lock);
+		ww_mutex_unlock(&lima_bo_resv(bos[i])->lock);
 	ww_acquire_fini(ctx);
 }
 
@@ -309,15 +308,15 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 
 	for (i = 0; i < submit->nr_bos; i++) {
 		if (submit->bos[i].flags & LIMA_SUBMIT_BO_WRITE)
-			dma_resv_add_excl_fence(bos[i]->gem.resv, fence);
+			dma_resv_add_excl_fence(lima_bo_resv(bos[i]), fence);
 		else
-			dma_resv_add_shared_fence(bos[i]->gem.resv, fence);
+			dma_resv_add_shared_fence(lima_bo_resv(bos[i]), fence);
 	}
 
 	lima_gem_unlock_bos(bos, submit->nr_bos, &ctx);
 
 	for (i = 0; i < submit->nr_bos; i++)
-		drm_gem_object_put_unlocked(&bos[i]->gem);
+		drm_gem_object_put_unlocked(&bos[i]->base.base);
 
 	if (out_sync) {
 		drm_syncobj_replace_fence(out_sync, fence);
@@ -337,7 +336,7 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 		lima_vm_bo_del(vm, bos[i]);
 err_out1:
 	for (i = 0; i < submit->nr_bos; i++)
-		drm_gem_object_put_unlocked(&bos[i]->gem);
+		drm_gem_object_put_unlocked(&bos[i]->base.base);
 err_out0:
 	if (out_sync)
 		drm_syncobj_put(out_sync);
diff --git a/drivers/gpu/drm/lima/lima_gem.h b/drivers/gpu/drm/lima/lima_gem.h
index 556111a01135..1800feb3e47f 100644
--- a/drivers/gpu/drm/lima/lima_gem.h
+++ b/drivers/gpu/drm/lima/lima_gem.h
@@ -4,19 +4,37 @@
 #ifndef __LIMA_GEM_H__
 #define __LIMA_GEM_H__
 
-struct lima_bo;
+#include <drm/drm_gem_shmem_helper.h>
+
 struct lima_submit;
 
-extern const struct vm_operations_struct lima_gem_vm_ops;
+struct lima_bo {
+	struct drm_gem_shmem_object base;
+
+	struct mutex lock;
+	struct list_head va;
+};
+
+static inline struct lima_bo *
+to_lima_bo(struct drm_gem_object *obj)
+{
+	return container_of(to_drm_gem_shmem_obj(obj), struct lima_bo, base);
+}
+
+static inline size_t lima_bo_size(struct lima_bo *bo)
+{
+	return bo->base.base.size;
+}
+
+static inline struct dma_resv *lima_bo_resv(struct lima_bo *bo)
+{
+	return bo->base.base.resv;
+}
 
-struct lima_bo *lima_gem_create_bo(struct drm_device *dev, u32 size, u32 flags);
+struct drm_gem_object *lima_gem_create_object(struct drm_device *dev, size_t size);
 int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
 			   u32 size, u32 flags, u32 *handle);
-void lima_gem_free_object(struct drm_gem_object *obj);
-int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file);
-void lima_gem_object_close(struct drm_gem_object *obj, struct drm_file *file);
 int lima_gem_get_info(struct drm_file *file, u32 handle, u32 *va, u64 *offset);
-int lima_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int lima_gem_submit(struct drm_file *file, struct lima_submit *submit);
 int lima_gem_wait(struct drm_file *file, u32 handle, u32 op, s64 timeout_ns);
 
diff --git a/drivers/gpu/drm/lima/lima_gem_prime.c b/drivers/gpu/drm/lima/lima_gem_prime.c
deleted file mode 100644
index e3eb251e0a12..000000000000
--- a/drivers/gpu/drm/lima/lima_gem_prime.c
+++ /dev/null
@@ -1,46 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0 OR MIT
-/* Copyright 2018-2019 Qiang Yu <yuq825@gmail.com> */
-
-#include <linux/dma-buf.h>
-#include <drm/drm_prime.h>
-#include <drm/drm_drv.h>
-#include <drm/drm_file.h>
-
-#include "lima_device.h"
-#include "lima_object.h"
-#include "lima_gem.h"
-#include "lima_gem_prime.h"
-
-struct drm_gem_object *lima_gem_prime_import_sg_table(
-	struct drm_device *dev, struct dma_buf_attachment *attach,
-	struct sg_table *sgt)
-{
-	struct lima_device *ldev = to_lima_dev(dev);
-	struct lima_bo *bo;
-
-	bo = lima_bo_create(ldev, attach->dmabuf->size, 0, sgt);
-	if (IS_ERR(bo))
-		return ERR_CAST(bo);
-
-	return &bo->gem;
-}
-
-struct sg_table *lima_gem_prime_get_sg_table(struct drm_gem_object *obj)
-{
-	struct lima_bo *bo = to_lima_bo(obj);
-	int npages = obj->size >> PAGE_SHIFT;
-
-	return drm_prime_pages_to_sg(bo->pages, npages);
-}
-
-int lima_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
-{
-	int ret;
-
-	ret = drm_gem_mmap_obj(obj, obj->size, vma);
-	if (ret)
-		return ret;
-
-	lima_set_vma_flags(vma);
-	return 0;
-}
diff --git a/drivers/gpu/drm/lima/lima_gem_prime.h b/drivers/gpu/drm/lima/lima_gem_prime.h
deleted file mode 100644
index 34b4d35c21e3..000000000000
--- a/drivers/gpu/drm/lima/lima_gem_prime.h
+++ /dev/null
@@ -1,13 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 OR MIT */
-/* Copyright 2018-2019 Qiang Yu <yuq825@gmail.com> */
-
-#ifndef __LIMA_GEM_PRIME_H__
-#define __LIMA_GEM_PRIME_H__
-
-struct drm_gem_object *lima_gem_prime_import_sg_table(
-	struct drm_device *dev, struct dma_buf_attachment *attach,
-	struct sg_table *sgt);
-struct sg_table *lima_gem_prime_get_sg_table(struct drm_gem_object *obj);
-int lima_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
-
-#endif
diff --git a/drivers/gpu/drm/lima/lima_mmu.c b/drivers/gpu/drm/lima/lima_mmu.c
index 8e1651d6a61f..97ec09dee572 100644
--- a/drivers/gpu/drm/lima/lima_mmu.c
+++ b/drivers/gpu/drm/lima/lima_mmu.c
@@ -8,7 +8,6 @@
 #include "lima_device.h"
 #include "lima_mmu.h"
 #include "lima_vm.h"
-#include "lima_object.h"
 #include "lima_regs.h"
 
 #define mmu_write(reg, data) writel(data, ip->iomem + reg)
diff --git a/drivers/gpu/drm/lima/lima_object.c b/drivers/gpu/drm/lima/lima_object.c
deleted file mode 100644
index 87123b1d083c..000000000000
--- a/drivers/gpu/drm/lima/lima_object.c
+++ /dev/null
@@ -1,119 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0 OR MIT
-/* Copyright 2018-2019 Qiang Yu <yuq825@gmail.com> */
-
-#include <drm/drm_prime.h>
-#include <linux/pagemap.h>
-#include <linux/dma-mapping.h>
-
-#include "lima_object.h"
-
-void lima_bo_destroy(struct lima_bo *bo)
-{
-	if (bo->sgt) {
-		kfree(bo->pages);
-		drm_prime_gem_destroy(&bo->gem, bo->sgt);
-	} else {
-		if (bo->pages_dma_addr) {
-			int i, npages = bo->gem.size >> PAGE_SHIFT;
-
-			for (i = 0; i < npages; i++) {
-				if (bo->pages_dma_addr[i])
-					dma_unmap_page(bo->gem.dev->dev,
-						       bo->pages_dma_addr[i],
-						       PAGE_SIZE, DMA_BIDIRECTIONAL);
-			}
-		}
-
-		if (bo->pages)
-			drm_gem_put_pages(&bo->gem, bo->pages, true, true);
-	}
-
-	kfree(bo->pages_dma_addr);
-	drm_gem_object_release(&bo->gem);
-	kfree(bo);
-}
-
-static struct lima_bo *lima_bo_create_struct(struct lima_device *dev, u32 size, u32 flags)
-{
-	struct lima_bo *bo;
-	int err;
-
-	size = PAGE_ALIGN(size);
-
-	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
-	if (!bo)
-		return ERR_PTR(-ENOMEM);
-
-	mutex_init(&bo->lock);
-	INIT_LIST_HEAD(&bo->va);
-
-	err = drm_gem_object_init(dev->ddev, &bo->gem, size);
-	if (err) {
-		kfree(bo);
-		return ERR_PTR(err);
-	}
-
-	return bo;
-}
-
-struct lima_bo *lima_bo_create(struct lima_device *dev, u32 size,
-			       u32 flags, struct sg_table *sgt)
-{
-	int i, err;
-	size_t npages;
-	struct lima_bo *bo, *ret;
-
-	bo = lima_bo_create_struct(dev, size, flags);
-	if (IS_ERR(bo))
-		return bo;
-
-	npages = bo->gem.size >> PAGE_SHIFT;
-
-	bo->pages_dma_addr = kcalloc(npages, sizeof(dma_addr_t), GFP_KERNEL);
-	if (!bo->pages_dma_addr) {
-		ret = ERR_PTR(-ENOMEM);
-		goto err_out;
-	}
-
-	if (sgt) {
-		bo->sgt = sgt;
-
-		bo->pages = kcalloc(npages, sizeof(*bo->pages), GFP_KERNEL);
-		if (!bo->pages) {
-			ret = ERR_PTR(-ENOMEM);
-			goto err_out;
-		}
-
-		err = drm_prime_sg_to_page_addr_arrays(
-			sgt, bo->pages, bo->pages_dma_addr, npages);
-		if (err) {
-			ret = ERR_PTR(err);
-			goto err_out;
-		}
-	} else {
-		mapping_set_gfp_mask(bo->gem.filp->f_mapping, GFP_DMA32);
-		bo->pages = drm_gem_get_pages(&bo->gem);
-		if (IS_ERR(bo->pages)) {
-			ret = ERR_CAST(bo->pages);
-			bo->pages = NULL;
-			goto err_out;
-		}
-
-		for (i = 0; i < npages; i++) {
-			dma_addr_t addr = dma_map_page(dev->dev, bo->pages[i], 0,
-						       PAGE_SIZE, DMA_BIDIRECTIONAL);
-			if (dma_mapping_error(dev->dev, addr)) {
-				ret = ERR_PTR(-EFAULT);
-				goto err_out;
-			}
-			bo->pages_dma_addr[i] = addr;
-		}
-
-	}
-
-	return bo;
-
-err_out:
-	lima_bo_destroy(bo);
-	return ret;
-}
diff --git a/drivers/gpu/drm/lima/lima_object.h b/drivers/gpu/drm/lima/lima_object.h
deleted file mode 100644
index 31ca2d8dc0a1..000000000000
--- a/drivers/gpu/drm/lima/lima_object.h
+++ /dev/null
@@ -1,35 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 OR MIT */
-/* Copyright 2018-2019 Qiang Yu <yuq825@gmail.com> */
-
-#ifndef __LIMA_OBJECT_H__
-#define __LIMA_OBJECT_H__
-
-#include <drm/drm_gem.h>
-
-#include "lima_device.h"
-
-struct lima_bo {
-	struct drm_gem_object gem;
-
-	struct page **pages;
-	dma_addr_t *pages_dma_addr;
-	struct sg_table *sgt;
-	void *vaddr;
-
-	struct mutex lock;
-	struct list_head va;
-};
-
-static inline struct lima_bo *
-to_lima_bo(struct drm_gem_object *obj)
-{
-	return container_of(obj, struct lima_bo, gem);
-}
-
-struct lima_bo *lima_bo_create(struct lima_device *dev, u32 size,
-			       u32 flags, struct sg_table *sgt);
-void lima_bo_destroy(struct lima_bo *bo);
-void *lima_bo_vmap(struct lima_bo *bo);
-void lima_bo_vunmap(struct lima_bo *bo);
-
-#endif
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index 4127cacac454..f522c5f99729 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -10,7 +10,7 @@
 #include "lima_vm.h"
 #include "lima_mmu.h"
 #include "lima_l2_cache.h"
-#include "lima_object.h"
+#include "lima_gem.h"
 
 struct lima_fence {
 	struct dma_fence base;
@@ -117,7 +117,7 @@ int lima_sched_task_init(struct lima_sched_task *task,
 		return -ENOMEM;
 
 	for (i = 0; i < num_bos; i++)
-		drm_gem_object_get(&bos[i]->gem);
+		drm_gem_object_get(&bos[i]->base.base);
 
 	err = drm_sched_job_init(&task->base, &context->base, vm);
 	if (err) {
@@ -148,7 +148,7 @@ void lima_sched_task_fini(struct lima_sched_task *task)
 
 	if (task->bos) {
 		for (i = 0; i < task->num_bos; i++)
-			drm_gem_object_put_unlocked(&task->bos[i]->gem);
+			drm_gem_object_put_unlocked(&task->bos[i]->base.base);
 		kfree(task->bos);
 	}
 
diff --git a/drivers/gpu/drm/lima/lima_vm.c b/drivers/gpu/drm/lima/lima_vm.c
index 19e88ca16527..8e69c4540e8b 100644
--- a/drivers/gpu/drm/lima/lima_vm.c
+++ b/drivers/gpu/drm/lima/lima_vm.c
@@ -6,7 +6,7 @@
 
 #include "lima_device.h"
 #include "lima_vm.h"
-#include "lima_object.h"
+#include "lima_gem.h"
 #include "lima_regs.h"
 
 struct lima_bo_va {
@@ -32,7 +32,7 @@ struct lima_bo_va {
 #define LIMA_BTE(va) ((va & LIMA_VM_BT_MASK) >> LIMA_VM_BT_SHIFT)
 
 
-static void lima_vm_unmap_page_table(struct lima_vm *vm, u32 start, u32 end)
+static void lima_vm_unmap_range(struct lima_vm *vm, u32 start, u32 end)
 {
 	u32 addr;
 
@@ -44,41 +44,32 @@ static void lima_vm_unmap_page_table(struct lima_vm *vm, u32 start, u32 end)
 	}
 }
 
-static int lima_vm_map_page_table(struct lima_vm *vm, dma_addr_t *dma,
-				  u32 start, u32 end)
+static int lima_vm_map_page(struct lima_vm *vm, dma_addr_t pa, u32 va)
 {
-	u64 addr;
-	int i = 0;
-
-	for (addr = start; addr <= end; addr += LIMA_PAGE_SIZE) {
-		u32 pbe = LIMA_PBE(addr);
-		u32 bte = LIMA_BTE(addr);
-
-		if (!vm->bts[pbe].cpu) {
-			dma_addr_t pts;
-			u32 *pd;
-			int j;
-
-			vm->bts[pbe].cpu = dma_alloc_wc(
-				vm->dev->dev, LIMA_PAGE_SIZE << LIMA_VM_NUM_PT_PER_BT_SHIFT,
-				&vm->bts[pbe].dma, GFP_KERNEL | __GFP_ZERO);
-			if (!vm->bts[pbe].cpu) {
-				if (addr != start)
-					lima_vm_unmap_page_table(vm, start, addr - 1);
-				return -ENOMEM;
-			}
-
-			pts = vm->bts[pbe].dma;
-			pd = vm->pd.cpu + (pbe << LIMA_VM_NUM_PT_PER_BT_SHIFT);
-			for (j = 0; j < LIMA_VM_NUM_PT_PER_BT; j++) {
-				pd[j] = pts | LIMA_VM_FLAG_PRESENT;
-				pts += LIMA_PAGE_SIZE;
-			}
+	u32 pbe = LIMA_PBE(va);
+	u32 bte = LIMA_BTE(va);
+
+	if (!vm->bts[pbe].cpu) {
+		dma_addr_t pts;
+		u32 *pd;
+		int j;
+
+		vm->bts[pbe].cpu = dma_alloc_wc(
+			vm->dev->dev, LIMA_PAGE_SIZE << LIMA_VM_NUM_PT_PER_BT_SHIFT,
+			&vm->bts[pbe].dma, GFP_KERNEL | __GFP_ZERO);
+		if (!vm->bts[pbe].cpu)
+			return -ENOMEM;
+
+		pts = vm->bts[pbe].dma;
+		pd = vm->pd.cpu + (pbe << LIMA_VM_NUM_PT_PER_BT_SHIFT);
+		for (j = 0; j < LIMA_VM_NUM_PT_PER_BT; j++) {
+			pd[j] = pts | LIMA_VM_FLAG_PRESENT;
+			pts += LIMA_PAGE_SIZE;
 		}
-
-		vm->bts[pbe].cpu[bte] = dma[i++] | LIMA_VM_FLAGS_CACHE;
 	}
 
+	vm->bts[pbe].cpu[bte] = pa | LIMA_VM_FLAGS_CACHE;
+
 	return 0;
 }
 
@@ -100,7 +91,8 @@ lima_vm_bo_find(struct lima_vm *vm, struct lima_bo *bo)
 int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo, bool create)
 {
 	struct lima_bo_va *bo_va;
-	int err;
+	struct sg_dma_page_iter sg_iter;
+	int offset = 0, err;
 
 	mutex_lock(&bo->lock);
 
@@ -128,14 +120,18 @@ int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo, bool create)
 
 	mutex_lock(&vm->lock);
 
-	err = drm_mm_insert_node(&vm->mm, &bo_va->node, bo->gem.size);
+	err = drm_mm_insert_node(&vm->mm, &bo_va->node, lima_bo_size(bo));
 	if (err)
 		goto err_out1;
 
-	err = lima_vm_map_page_table(vm, bo->pages_dma_addr, bo_va->node.start,
-				     bo_va->node.start + bo_va->node.size - 1);
-	if (err)
-		goto err_out2;
+	for_each_sg_dma_page(bo->base.sgt->sgl, &sg_iter, bo->base.sgt->nents, 0) {
+		err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter),
+				       bo_va->node.start + offset);
+		if (err)
+			goto err_out2;
+
+		offset += PAGE_SIZE;
+	}
 
 	mutex_unlock(&vm->lock);
 
@@ -145,6 +141,8 @@ int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo, bool create)
 	return 0;
 
 err_out2:
+	if (offset)
+		lima_vm_unmap_range(vm, bo_va->node.start, bo_va->node.start + offset - 1);
 	drm_mm_remove_node(&bo_va->node);
 err_out1:
 	mutex_unlock(&vm->lock);
@@ -168,8 +166,8 @@ void lima_vm_bo_del(struct lima_vm *vm, struct lima_bo *bo)
 
 	mutex_lock(&vm->lock);
 
-	lima_vm_unmap_page_table(vm, bo_va->node.start,
-				 bo_va->node.start + bo_va->node.size - 1);
+	lima_vm_unmap_range(vm, bo_va->node.start,
+			    bo_va->node.start + bo_va->node.size - 1);
 
 	drm_mm_remove_node(&bo_va->node);
 
@@ -215,9 +213,8 @@ struct lima_vm *lima_vm_create(struct lima_device *dev)
 		goto err_out0;
 
 	if (dev->dlbu_cpu) {
-		int err = lima_vm_map_page_table(
-			vm, &dev->dlbu_dma, LIMA_VA_RESERVE_DLBU,
-			LIMA_VA_RESERVE_DLBU + LIMA_PAGE_SIZE - 1);
+		int err = lima_vm_map_page(
+			vm, dev->dlbu_dma, LIMA_VA_RESERVE_DLBU);
 		if (err)
 			goto err_out1;
 	}
-- 
2.17.1
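The reworked lima_vm_bo_add() in the patch above maps a BO into the VM page by page and, on error, unwinds only the range mapped so far. Below is a rough, self-contained userspace sketch of that map-then-unwind shape, using a toy flat page table; the names and types are illustrative only, not the lima or DRM API:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NPAGES 16

/* Toy single-level "page table": slot i holds the physical address
 * mapped at virtual page i, with 0 meaning unmapped. */
static uint64_t pt[NPAGES];

static int map_page(uint64_t pa, unsigned va_page)
{
	if (va_page >= NPAGES || pa == 0)	/* treat pa == 0 as a mapping failure */
		return -1;
	pt[va_page] = pa;
	return 0;
}

static void unmap_range(unsigned start, unsigned count)
{
	memset(&pt[start], 0, count * sizeof(pt[0]));
}

/* Map count pages starting at va_page; on failure, unwind only the
 * pages mapped so far. This mirrors the for_each_sg_dma_page() loop
 * plus the "if (offset) lima_vm_unmap_range(...)" error path above. */
static int map_range(const uint64_t *pas, unsigned count, unsigned va_page)
{
	unsigned done = 0;

	for (unsigned i = 0; i < count; i++) {
		if (map_page(pas[i], va_page + i) != 0) {
			if (done)	/* nothing to unwind if the first page failed */
				unmap_range(va_page, done);
			return -1;
		}
		done++;
	}
	return 0;
}
```

The detail mirrored from the patch is that the error path unmaps only when at least one page was actually mapped (the `if (offset)` check), so a failure on the very first page does no unwinding work.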

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* [PATCH 5/6] drm/lima: use drm_gem_(un)lock_reservations
  2019-09-26 14:10 [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Qiang Yu
                   ` (3 preceding siblings ...)
  2019-09-26 14:10 ` [PATCH 4/6] drm/lima: use drm_gem_shmem_helpers Qiang Yu
@ 2019-09-26 14:10 ` Qiang Yu
  2019-09-26 14:10 ` [PATCH 6/6] drm/lima: add __GFP_NOWARN flag to all dma_alloc_wc Qiang Yu
  2019-09-26 15:00 ` [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Steven Price
  6 siblings, 0 replies; 13+ messages in thread
From: Qiang Yu @ 2019-09-26 14:10 UTC (permalink / raw)
  To: dri-devel; +Cc: lima, Maxime Ripard, David Airlie, Qiang Yu, Sean Paul

Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_gem.c | 64 ++++-----------------------------
 1 file changed, 6 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index eb70e4429588..3cc1870862c3 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -166,61 +166,6 @@ static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo,
 	return drm_gem_fence_array_add_implicit(&task->deps, &bo->base.base, write);
 }
 
-static int lima_gem_lock_bos(struct lima_bo **bos, u32 nr_bos,
-			     struct ww_acquire_ctx *ctx)
-{
-	int i, ret = 0, contended, slow_locked = -1;
-
-	ww_acquire_init(ctx, &reservation_ww_class);
-
-retry:
-	for (i = 0; i < nr_bos; i++) {
-		if (i == slow_locked) {
-			slow_locked = -1;
-			continue;
-		}
-
-		ret = ww_mutex_lock_interruptible(&lima_bo_resv(bos[i])->lock, ctx);
-		if (ret < 0) {
-			contended = i;
-			goto err;
-		}
-	}
-
-	ww_acquire_done(ctx);
-	return 0;
-
-err:
-	for (i--; i >= 0; i--)
-		ww_mutex_unlock(&lima_bo_resv(bos[i])->lock);
-
-	if (slow_locked >= 0)
-		ww_mutex_unlock(&lima_bo_resv(bos[slow_locked])->lock);
-
-	if (ret == -EDEADLK) {
-		/* we lost out in a seqno race, lock and retry.. */
-		ret = ww_mutex_lock_slow_interruptible(
-			&lima_bo_resv(bos[contended])->lock, ctx);
-		if (!ret) {
-			slow_locked = contended;
-			goto retry;
-		}
-	}
-	ww_acquire_fini(ctx);
-
-	return ret;
-}
-
-static void lima_gem_unlock_bos(struct lima_bo **bos, u32 nr_bos,
-				struct ww_acquire_ctx *ctx)
-{
-	int i;
-
-	for (i = 0; i < nr_bos; i++)
-		ww_mutex_unlock(&lima_bo_resv(bos[i])->lock);
-	ww_acquire_fini(ctx);
-}
-
 static int lima_gem_add_deps(struct drm_file *file, struct lima_submit *submit)
 {
 	int i, err;
@@ -280,7 +225,8 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 		}
 	}
 
-	err = lima_gem_lock_bos(bos, submit->nr_bos, &ctx);
+	err = drm_gem_lock_reservations((struct drm_gem_object **)bos,
+					submit->nr_bos, &ctx);
 	if (err)
 		goto err_out2;
 
@@ -313,7 +259,8 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 			dma_resv_add_shared_fence(lima_bo_resv(bos[i]), fence);
 	}
 
-	lima_gem_unlock_bos(bos, submit->nr_bos, &ctx);
+	drm_gem_unlock_reservations((struct drm_gem_object **)bos,
+				    submit->nr_bos, &ctx);
 
 	for (i = 0; i < submit->nr_bos; i++)
 		drm_gem_object_put_unlocked(&bos[i]->base.base);
@@ -330,7 +277,8 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)
 err_out4:
 	lima_sched_task_fini(submit->task);
 err_out3:
-	lima_gem_unlock_bos(bos, submit->nr_bos, &ctx);
+	drm_gem_unlock_reservations((struct drm_gem_object **)bos,
+				    submit->nr_bos, &ctx);
 err_out2:
 	for (i = 0; i < submit->nr_bos; i++)
 		lima_vm_bo_del(vm, bos[i]);
-- 
2.17.1
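The open-coded lima_gem_lock_bos() deleted above implements the classic acquire-all-or-back-off loop that drm_gem_lock_reservations() now provides as a helper. Below is a very rough userspace analogue of that control flow, with plain pthread mutexes standing in for ww_mutex. It has no acquire context or wound/wait ordering, so unlike the real helper it can livelock; it is only meant to show the shape of the loop:

```c
#include <assert.h>
#include <pthread.h>

/* Acquire every mutex in m[0..n-1], or back off and retry.  On
 * contention, drop everything held, take a *blocking* lock on the
 * contended mutex so we sleep until it is free, then retry the whole
 * set -- the same shape as the removed lima_gem_lock_bos(). */
static int lock_all(pthread_mutex_t **m, int n)
{
	int slow = -1;	/* index already held via the slow (blocking) path */

	for (;;) {
		int i, contended = -1;

		for (i = 0; i < n; i++) {
			if (i == slow)
				continue;
			if (pthread_mutex_trylock(m[i]) != 0) {
				contended = i;
				break;
			}
		}
		if (contended < 0)
			return 0;	/* got them all */

		/* Back off: release everything we hold... */
		while (--i >= 0)
			if (i != slow)
				pthread_mutex_unlock(m[i]);
		if (slow >= 0)
			pthread_mutex_unlock(m[slow]);

		/* ...sleep on the contended lock, keep it, and retry. */
		pthread_mutex_lock(m[contended]);
		slow = contended;
	}
}

static void unlock_all(pthread_mutex_t **m, int n)
{
	for (int i = 0; i < n; i++)
		pthread_mutex_unlock(m[i]);
}
```

The real helper additionally records a ticket in the ww_acquire_ctx so that, of two tasks racing for the same set of reservations, exactly one backs off; that is what the -EDEADLK / lock-slow path in the removed code handled.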


* [PATCH 6/6] drm/lima: add __GFP_NOWARN flag to all dma_alloc_wc
  2019-09-26 14:10 [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Qiang Yu
                   ` (4 preceding siblings ...)
  2019-09-26 14:10 ` [PATCH 5/6] drm/lima: use drm_gem_(un)lock_reservations Qiang Yu
@ 2019-09-26 14:10 ` Qiang Yu
  2019-09-26 15:00 ` [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Steven Price
  6 siblings, 0 replies; 13+ messages in thread
From: Qiang Yu @ 2019-09-26 14:10 UTC (permalink / raw)
  To: dri-devel; +Cc: lima, Maxime Ripard, David Airlie, Qiang Yu, Sean Paul

This prevents CMA from printing spurious "PFNs busy" messages, which
are triggered by the retry path after an allocation failure.

Signed-off-by: Qiang Yu <yuq825@gmail.com>
---
 drivers/gpu/drm/lima/lima_device.c | 2 +-
 drivers/gpu/drm/lima/lima_vm.c     | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/lima/lima_device.c b/drivers/gpu/drm/lima/lima_device.c
index d86b8d81a483..3d1a2b4d1005 100644
--- a/drivers/gpu/drm/lima/lima_device.c
+++ b/drivers/gpu/drm/lima/lima_device.c
@@ -313,7 +313,7 @@ int lima_device_init(struct lima_device *ldev)
 		ldev->va_end = LIMA_VA_RESERVE_START;
 		ldev->dlbu_cpu = dma_alloc_wc(
 			ldev->dev, LIMA_PAGE_SIZE,
-			&ldev->dlbu_dma, GFP_KERNEL);
+			&ldev->dlbu_dma, GFP_KERNEL | __GFP_NOWARN);
 		if (!ldev->dlbu_cpu) {
 			err = -ENOMEM;
 			goto err_out2;
diff --git a/drivers/gpu/drm/lima/lima_vm.c b/drivers/gpu/drm/lima/lima_vm.c
index 8e69c4540e8b..840e2350d872 100644
--- a/drivers/gpu/drm/lima/lima_vm.c
+++ b/drivers/gpu/drm/lima/lima_vm.c
@@ -56,7 +56,7 @@ static int lima_vm_map_page(struct lima_vm *vm, dma_addr_t pa, u32 va)
 
 		vm->bts[pbe].cpu = dma_alloc_wc(
 			vm->dev->dev, LIMA_PAGE_SIZE << LIMA_VM_NUM_PT_PER_BT_SHIFT,
-			&vm->bts[pbe].dma, GFP_KERNEL | __GFP_ZERO);
+			&vm->bts[pbe].dma, GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);
 		if (!vm->bts[pbe].cpu)
 			return -ENOMEM;
 
@@ -208,7 +208,7 @@ struct lima_vm *lima_vm_create(struct lima_device *dev)
 	kref_init(&vm->refcount);
 
 	vm->pd.cpu = dma_alloc_wc(dev->dev, LIMA_PAGE_SIZE, &vm->pd.dma,
-				  GFP_KERNEL | __GFP_ZERO);
+				  GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);
 	if (!vm->pd.cpu)
 		goto err_out0;
 
-- 
2.17.1


* Re: [PATCH 0/6] drm/lima: simplify driver by using more drm helpers
  2019-09-26 14:10 [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Qiang Yu
                   ` (5 preceding siblings ...)
  2019-09-26 14:10 ` [PATCH 6/6] drm/lima: add __GFP_NOWARN flag to all dma_alloc_wc Qiang Yu
@ 2019-09-26 15:00 ` Steven Price
  6 siblings, 0 replies; 13+ messages in thread
From: Steven Price @ 2019-09-26 15:00 UTC (permalink / raw)
  To: Qiang Yu, dri-devel; +Cc: Maxime Ripard, David Airlie, Sean Paul, lima

On 26/09/2019 15:10, Qiang Yu wrote:
> By using shared drm helpers:
> 1. drm_gem_objects_lookup
> 2. drm_gem_(un)lock_reservations
> 3. drm_gem_shmem_helpers
> we can simplify lima driver a lot and benifit from updates to
> these functions.
> 
> drm_gem_objects_lookup need a refine in order to be used by lima.

I'm not convinced this is actually a great idea. v3d looks like it could
already use the existing drm_gem_objects_lookup() mechanism: see the
(untested) patch below. So we're actually adding more code to Panfrost
(and v3d) just for the quirk in lima that the supplied array of BOs is
interleaved with flags. And the diffstat for the lima change adds lines
too! So it doesn't look like a net win to me.

I've not looked at patches 4-6, but the diffstat on those looks better!

Steve

---8<----
diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
index 4c4b59ae2c81..dbe366d35195 100644
--- a/drivers/gpu/drm/v3d/v3d_gem.c
+++ b/drivers/gpu/drm/v3d/v3d_gem.c
@@ -304,48 +304,9 @@ v3d_lookup_bos(struct drm_device *dev,
 		return -EINVAL;
 	}
 
-	job->bo = kvmalloc_array(job->bo_count,
-				 sizeof(struct drm_gem_cma_object *),
-				 GFP_KERNEL | __GFP_ZERO);
-	if (!job->bo) {
-		DRM_DEBUG("Failed to allocate validated BO pointers\n");
-		return -ENOMEM;
-	}
-
-	handles = kvmalloc_array(job->bo_count, sizeof(u32), GFP_KERNEL);
-	if (!handles) {
-		ret = -ENOMEM;
-		DRM_DEBUG("Failed to allocate incoming GEM handles\n");
-		goto fail;
-	}
-
-	if (copy_from_user(handles,
-			   (void __user *)(uintptr_t)bo_handles,
-			   job->bo_count * sizeof(u32))) {
-		ret = -EFAULT;
-		DRM_DEBUG("Failed to copy in GEM handles\n");
-		goto fail;
-	}
-
-	spin_lock(&file_priv->table_lock);
-	for (i = 0; i < job->bo_count; i++) {
-		struct drm_gem_object *bo = idr_find(&file_priv->object_idr,
-						     handles[i]);
-		if (!bo) {
-			DRM_DEBUG("Failed to look up GEM BO %d: %d\n",
-				  i, handles[i]);
-			ret = -ENOENT;
-			spin_unlock(&file_priv->table_lock);
-			goto fail;
-		}
-		drm_gem_object_get(bo);
-		job->bo[i] = bo;
-	}
-	spin_unlock(&file_priv->table_lock);
-
-fail:
-	kvfree(handles);
-	return ret;
+	return drm_gem_objects_lookup(file_priv,
+				      (void __user *)(uintptr_t)bo_handles,
+				      job->bo_count, &job->bo);
 }
 
 static void

* Re: [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup
  2019-09-26 14:10 ` [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup Qiang Yu
@ 2019-09-26 15:01   ` Rob Herring
  2019-09-27  0:09     ` Qiang Yu
  0 siblings, 1 reply; 13+ messages in thread
From: Rob Herring @ 2019-09-26 15:01 UTC (permalink / raw)
  To: Qiang Yu
  Cc: Tomeu Vizoso, lima, Maxime Ripard, dri-devel, Steven Price,
	David Airlie, Alyssa Rosenzweig, Sean Paul

On Thu, Sep 26, 2019 at 9:12 AM Qiang Yu <yuq825@gmail.com> wrote:
>
> Do not use user space bo handles directly and left the user
> to kernel copy work to drivers calling this function.
>
> This is for driver like lima which does not pass gem bo
> handles continously in an array in ioctl argument.
>
> Cc: Rob Herring <robh@kernel.org>
> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> Signed-off-by: Qiang Yu <yuq825@gmail.com>
> ---
>  drivers/gpu/drm/drm_gem.c               | 28 +++++++------------------
>  drivers/gpu/drm/panfrost/panfrost_drv.c | 23 +++++++++++++++++---

Please keep the current variant. While only panfrost is converted ATM,
vc4 and v3d will use this too.

>  include/drm/drm_gem.h                   |  2 +-
>  3 files changed, 29 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 6854f5867d51..9f73e5f3b53f 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -679,11 +679,11 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
>  /**
>   * drm_gem_objects_lookup - look up GEM objects from an array of handles
>   * @filp: DRM file private date
> - * @bo_handles: user pointer to array of userspace handle
> + * @bo_handles: array of GEM object handles
>   * @count: size of handle array
>   * @objs_out: returned pointer to array of drm_gem_object pointers
>   *
> - * Takes an array of userspace handles and returns a newly allocated array of
> + * Takes an array of GEM object handles and returns a newly allocated array of
>   * GEM objects.
>   *
>   * For a single handle lookup, use drm_gem_object_lookup().
> @@ -695,11 +695,10 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
>   * failure. 0 is returned on success.
>   *
>   */
> -int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
> +int drm_gem_objects_lookup(struct drm_file *filp, u32 *bo_handles,
>                            int count, struct drm_gem_object ***objs_out)

Either split into drm_gem_objects_lookup() and a
drm_gem_objects_lookup_user() with the latter calling the former or
just make the helper take both a user and u32* ptr with the
expectation that only one of them is non-NULL.
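Rob's first option, a core drm_gem_objects_lookup() taking an in-kernel handle array plus a drm_gem_objects_lookup_user() wrapper that copies the array in and then calls the core, could be sketched roughly with a userspace analogue. A plain array stands in for the object idr and memcpy for copy_from_user(); apart from the two function names under discussion, none of this is the actual DRM API:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-ins for kernel types: a handle table maps u32 handles to
 * object pointers, the way drm_file's object idr does. */
struct obj { uint32_t handle; };

struct table {
	struct obj **objs;
	uint32_t count;
};

/* Core lookup: takes an already-copied in-kernel handle array,
 * mirroring the proposed drm_gem_objects_lookup(). */
static int objects_lookup(struct table *t, const uint32_t *handles,
			  int count, struct obj ***out)
{
	struct obj **objs = calloc(count, sizeof(*objs));

	if (!objs)
		return -1;
	for (int i = 0; i < count; i++) {
		if (handles[i] >= t->count) {	/* unknown handle */
			free(objs);
			return -1;
		}
		objs[i] = t->objs[handles[i]];
	}
	*out = objs;
	return 0;
}

/* User-pointer variant: copy the handle array in, then call the core,
 * mirroring the proposed drm_gem_objects_lookup_user(). memcpy stands
 * in for copy_from_user(). */
static int objects_lookup_user(struct table *t, const void *user_ptr,
			       int count, struct obj ***out)
{
	uint32_t *handles = malloc(count * sizeof(*handles));
	int ret;

	if (!handles)
		return -1;
	memcpy(handles, user_ptr, count * sizeof(*handles));
	ret = objects_lookup(t, handles, count, out);
	free(handles);
	return ret;
}
```

Splitting this way keeps a single lookup core for callers like lima that already have the handles in kernel memory, while panfrost/v3d-style ioctls would use only the *_user wrapper.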

Rob

* Re: [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup
  2019-09-26 15:01   ` Rob Herring
@ 2019-09-27  0:09     ` Qiang Yu
  2019-10-09 14:57       ` Daniel Vetter
  0 siblings, 1 reply; 13+ messages in thread
From: Qiang Yu @ 2019-09-27  0:09 UTC (permalink / raw)
  To: Rob Herring
  Cc: Tomeu Vizoso, lima, Maxime Ripard, dri-devel, Steven Price,
	David Airlie, Alyssa Rosenzweig, Sean Paul

On Thu, Sep 26, 2019 at 11:01 PM Rob Herring <robh@kernel.org> wrote:
>
> On Thu, Sep 26, 2019 at 9:12 AM Qiang Yu <yuq825@gmail.com> wrote:
> >
> > Do not use user space bo handles directly and left the user
> > to kernel copy work to drivers calling this function.
> >
> > This is for driver like lima which does not pass gem bo
> > handles continously in an array in ioctl argument.
> >
> > Cc: Rob Herring <robh@kernel.org>
> > Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> > Cc: Steven Price <steven.price@arm.com>
> > Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> > Signed-off-by: Qiang Yu <yuq825@gmail.com>
> > ---
> >  drivers/gpu/drm/drm_gem.c               | 28 +++++++------------------
> >  drivers/gpu/drm/panfrost/panfrost_drv.c | 23 +++++++++++++++++---
>
> Please keep the current variant. While only panfrost is converted ATM,
> vc4 and v3d will use this too.
>
> >  include/drm/drm_gem.h                   |  2 +-
> >  3 files changed, 29 insertions(+), 24 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > index 6854f5867d51..9f73e5f3b53f 100644
> > --- a/drivers/gpu/drm/drm_gem.c
> > +++ b/drivers/gpu/drm/drm_gem.c
> > @@ -679,11 +679,11 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
> >  /**
> >   * drm_gem_objects_lookup - look up GEM objects from an array of handles
> >   * @filp: DRM file private date
> > - * @bo_handles: user pointer to array of userspace handle
> > + * @bo_handles: array of GEM object handles
> >   * @count: size of handle array
> >   * @objs_out: returned pointer to array of drm_gem_object pointers
> >   *
> > - * Takes an array of userspace handles and returns a newly allocated array of
> > + * Takes an array of GEM object handles and returns a newly allocated array of
> >   * GEM objects.
> >   *
> >   * For a single handle lookup, use drm_gem_object_lookup().
> > @@ -695,11 +695,10 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
> >   * failure. 0 is returned on success.
> >   *
> >   */
> > -int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
> > +int drm_gem_objects_lookup(struct drm_file *filp, u32 *bo_handles,
> >                            int count, struct drm_gem_object ***objs_out)
>
> Either split into drm_gem_objects_lookup() and a
> drm_gem_objects_lookup_user() with the latter calling the former or
> just make the helper take both a user and u32* ptr with the
> expectation that only one of them is non-NULL.
>
OK, I prefer the first way, will send v2 for it.

Thanks,
Qiang

> Rob

* Re: [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup
  2019-09-27  0:09     ` Qiang Yu
@ 2019-10-09 14:57       ` Daniel Vetter
  2019-10-10  2:02         ` Qiang Yu
  0 siblings, 1 reply; 13+ messages in thread
From: Daniel Vetter @ 2019-10-09 14:57 UTC (permalink / raw)
  To: Qiang Yu
  Cc: Tomeu Vizoso, lima, Maxime Ripard, dri-devel, Steven Price,
	David Airlie, Alyssa Rosenzweig, Sean Paul

On Fri, Sep 27, 2019 at 08:09:52AM +0800, Qiang Yu wrote:
> On Thu, Sep 26, 2019 at 11:01 PM Rob Herring <robh@kernel.org> wrote:
> >
> > On Thu, Sep 26, 2019 at 9:12 AM Qiang Yu <yuq825@gmail.com> wrote:
> > >
> > > Do not use user space bo handles directly and left the user
> > > to kernel copy work to drivers calling this function.
> > >
> > > This is for driver like lima which does not pass gem bo
> > > handles continously in an array in ioctl argument.
> > >
> > > Cc: Rob Herring <robh@kernel.org>
> > > Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> > > Cc: Steven Price <steven.price@arm.com>
> > > Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> > > Signed-off-by: Qiang Yu <yuq825@gmail.com>
> > > ---
> > >  drivers/gpu/drm/drm_gem.c               | 28 +++++++------------------
> > >  drivers/gpu/drm/panfrost/panfrost_drv.c | 23 +++++++++++++++++---
> >
> > Please keep the current variant. While only panfrost is converted ATM,
> > vc4 and v3d will use this too.
> >
> > >  include/drm/drm_gem.h                   |  2 +-
> > >  3 files changed, 29 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > > index 6854f5867d51..9f73e5f3b53f 100644
> > > --- a/drivers/gpu/drm/drm_gem.c
> > > +++ b/drivers/gpu/drm/drm_gem.c
> > > @@ -679,11 +679,11 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
> > >  /**
> > >   * drm_gem_objects_lookup - look up GEM objects from an array of handles
> > >   * @filp: DRM file private date
> > > - * @bo_handles: user pointer to array of userspace handle
> > > + * @bo_handles: array of GEM object handles
> > >   * @count: size of handle array
> > >   * @objs_out: returned pointer to array of drm_gem_object pointers
> > >   *
> > > - * Takes an array of userspace handles and returns a newly allocated array of
> > > + * Takes an array of GEM object handles and returns a newly allocated array of
> > >   * GEM objects.
> > >   *
> > >   * For a single handle lookup, use drm_gem_object_lookup().
> > > @@ -695,11 +695,10 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
> > >   * failure. 0 is returned on success.
> > >   *
> > >   */
> > > -int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
> > > +int drm_gem_objects_lookup(struct drm_file *filp, u32 *bo_handles,
> > >                            int count, struct drm_gem_object ***objs_out)
> >
> > Either split into drm_gem_objects_lookup() and a
> > drm_gem_objects_lookup_user() with the latter calling the former or
> > just make the helper take both a user and u32* ptr with the
> > expectation that only one of them is non-NULL.
> >
> OK, I prefer the first way, will send v2 for it.

Note that hopefully soon (Christian König is working on it) we'll have
ww_mutex locking helpers, which will introduce a drm_gem_operation_ctx.
Once we have that I think we can unify a lot of these helpers (Gerd
Hoffmann has looked into it), all while making them more flexible (i.e.
not only working with arrays). So it might be worth coordinating a bit
here, and maybe holding off on just this part for lima for a bit.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup
  2019-10-09 14:57       ` Daniel Vetter
@ 2019-10-10  2:02         ` Qiang Yu
  2019-10-10 13:42           ` Daniel Vetter
  0 siblings, 1 reply; 13+ messages in thread
From: Qiang Yu @ 2019-10-10  2:02 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Tomeu Vizoso, lima, Maxime Ripard, dri-devel, Steven Price,
	David Airlie, Alyssa Rosenzweig, Sean Paul

On Wed, Oct 9, 2019 at 10:57 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Fri, Sep 27, 2019 at 08:09:52AM +0800, Qiang Yu wrote:
> > On Thu, Sep 26, 2019 at 11:01 PM Rob Herring <robh@kernel.org> wrote:
> > >
> > > On Thu, Sep 26, 2019 at 9:12 AM Qiang Yu <yuq825@gmail.com> wrote:
> > > >
> > > > Do not use user space bo handles directly and left the user
> > > > to kernel copy work to drivers calling this function.
> > > >
> > > > This is for driver like lima which does not pass gem bo
> > > > handles continously in an array in ioctl argument.
> > > >
> > > > Cc: Rob Herring <robh@kernel.org>
> > > > Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> > > > Cc: Steven Price <steven.price@arm.com>
> > > > Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> > > > Signed-off-by: Qiang Yu <yuq825@gmail.com>
> > > > ---
> > > >  drivers/gpu/drm/drm_gem.c               | 28 +++++++------------------
> > > >  drivers/gpu/drm/panfrost/panfrost_drv.c | 23 +++++++++++++++++---
> > >
> > > Please keep the current variant. While only panfrost is converted ATM,
> > > vc4 and v3d will use this too.
> > >
> > > >  include/drm/drm_gem.h                   |  2 +-
> > > >  3 files changed, 29 insertions(+), 24 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > > > index 6854f5867d51..9f73e5f3b53f 100644
> > > > --- a/drivers/gpu/drm/drm_gem.c
> > > > +++ b/drivers/gpu/drm/drm_gem.c
> > > > @@ -679,11 +679,11 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
> > > >  /**
> > > >   * drm_gem_objects_lookup - look up GEM objects from an array of handles
> > > >   * @filp: DRM file private date
> > > > - * @bo_handles: user pointer to array of userspace handle
> > > > + * @bo_handles: array of GEM object handles
> > > >   * @count: size of handle array
> > > >   * @objs_out: returned pointer to array of drm_gem_object pointers
> > > >   *
> > > > - * Takes an array of userspace handles and returns a newly allocated array of
> > > > + * Takes an array of GEM object handles and returns a newly allocated array of
> > > >   * GEM objects.
> > > >   *
> > > >   * For a single handle lookup, use drm_gem_object_lookup().
> > > > @@ -695,11 +695,10 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
> > > >   * failure. 0 is returned on success.
> > > >   *
> > > >   */
> > > > -int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
> > > > +int drm_gem_objects_lookup(struct drm_file *filp, u32 *bo_handles,
> > > >                            int count, struct drm_gem_object ***objs_out)
> > >
> > > Either split into drm_gem_objects_lookup() and a
> > > drm_gem_objects_lookup_user() with the latter calling the former or
> > > just make the helper take both a user and u32* ptr with the
> > > expectation that only one of them is non-NULL.
> > >
> > OK, I prefer the first way, will send v2 for it.
>
> Note that hopefully soon (Christian König is working on it) we'll have
> ww_mutex locking helpers, which will introduce a drm_gem_operation_ctx.
> Once we have that I think we can unify a lot of these helpers (Gerd
> Hoffmann has looked into it) while also making them more flexible (i.e. not
> only working with arrays). So it might be worth coordinating a bit here, and
> maybe holding off on just this part for lima for a bit.

I don't know the context of that work, so I think I'd better wait a while
for the new interface and send the rest of the lima patches as v4.

Thanks,
Qiang

> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup
  2019-10-10  2:02         ` Qiang Yu
@ 2019-10-10 13:42           ` Daniel Vetter
  0 siblings, 0 replies; 13+ messages in thread
From: Daniel Vetter @ 2019-10-10 13:42 UTC (permalink / raw)
  To: Qiang Yu
  Cc: Tomeu Vizoso, lima, Maxime Ripard, dri-devel, Steven Price,
	David Airlie, Alyssa Rosenzweig, Sean Paul

On Thu, Oct 10, 2019 at 10:02:31AM +0800, Qiang Yu wrote:
> On Wed, Oct 9, 2019 at 10:57 PM Daniel Vetter <daniel@ffwll.ch> wrote:
> >
> > On Fri, Sep 27, 2019 at 08:09:52AM +0800, Qiang Yu wrote:
> > > On Thu, Sep 26, 2019 at 11:01 PM Rob Herring <robh@kernel.org> wrote:
> > > >
> > > > On Thu, Sep 26, 2019 at 9:12 AM Qiang Yu <yuq825@gmail.com> wrote:
> > > > >
> > > > > Do not use user space bo handles directly; leave the user-to-kernel
> > > > > copy work to drivers calling this function.
> > > > >
> > > > > This is for drivers like lima which do not pass gem bo handles
> > > > > contiguously in an array in the ioctl argument.
> > > > >
> > > > > Cc: Rob Herring <robh@kernel.org>
> > > > > Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> > > > > Cc: Steven Price <steven.price@arm.com>
> > > > > Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> > > > > Signed-off-by: Qiang Yu <yuq825@gmail.com>
> > > > > ---
> > > > >  drivers/gpu/drm/drm_gem.c               | 28 +++++++------------------
> > > > >  drivers/gpu/drm/panfrost/panfrost_drv.c | 23 +++++++++++++++++---
> > > >
> > > > Please keep the current variant. While only panfrost is converted ATM,
> > > > vc4 and v3d will use this too.
> > > >
> > > > >  include/drm/drm_gem.h                   |  2 +-
> > > > >  3 files changed, 29 insertions(+), 24 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > > > > index 6854f5867d51..9f73e5f3b53f 100644
> > > > > --- a/drivers/gpu/drm/drm_gem.c
> > > > > +++ b/drivers/gpu/drm/drm_gem.c
> > > > > @@ -679,11 +679,11 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
> > > > >  /**
> > > > >   * drm_gem_objects_lookup - look up GEM objects from an array of handles
> > > > >   * @filp: DRM file private date
> > > > > - * @bo_handles: user pointer to array of userspace handle
> > > > > + * @bo_handles: array of GEM object handles
> > > > >   * @count: size of handle array
> > > > >   * @objs_out: returned pointer to array of drm_gem_object pointers
> > > > >   *
> > > > > - * Takes an array of userspace handles and returns a newly allocated array of
> > > > > + * Takes an array of GEM object handles and returns a newly allocated array of
> > > > >   * GEM objects.
> > > > >   *
> > > > >   * For a single handle lookup, use drm_gem_object_lookup().
> > > > > @@ -695,11 +695,10 @@ static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
> > > > >   * failure. 0 is returned on success.
> > > > >   *
> > > > >   */
> > > > > -int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
> > > > > +int drm_gem_objects_lookup(struct drm_file *filp, u32 *bo_handles,
> > > > >                            int count, struct drm_gem_object ***objs_out)
> > > >
> > > > Either split into drm_gem_objects_lookup() and a
> > > > drm_gem_objects_lookup_user() with the latter calling the former or
> > > > just make the helper take both a user and u32* ptr with the
> > > > expectation that only one of them is non-NULL.
> > > >
> > > OK, I prefer the first way, will send v2 for it.
> >
> > Note that hopefully soon (Christian König is working on it) we'll have
> > ww_mutex locking helpers, which will introduce a drm_gem_operation_ctx.
> > Once we have that I think we can unify a lot of these helpers (Gerd
> > Hoffmann has looked into it) while also making them more flexible (i.e. not
> > only working with arrays). So it might be worth coordinating a bit here, and
> > maybe holding off on just this part for lima for a bit.
> 
> I don't know the context of that work, so I think I'd better wait a while
> for the new interface and send the rest of the lima patches as v4.

Yeah I think with the new stuff we might be able to avoid the
kvmalloc_array, that's not really a great idea to plug into a fastpath
like execbuf. The other patches can imo still go ahead, I don't want to
hold up everything :-)
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


end of thread, other threads:[~2019-10-10 13:43 UTC | newest]

Thread overview: 13+ messages
-- links below jump to the message on this page --
2019-09-26 14:10 [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Qiang Yu
2019-09-26 14:10 ` [PATCH 1/6] drm/gem: refine drm_gem_objects_lookup Qiang Yu
2019-09-26 15:01   ` Rob Herring
2019-09-27  0:09     ` Qiang Yu
2019-10-09 14:57       ` Daniel Vetter
2019-10-10  2:02         ` Qiang Yu
2019-10-10 13:42           ` Daniel Vetter
2019-09-26 14:10 ` [PATCH 2/6] drm/v3d: use drm_gem_objects_lookup Qiang Yu
2019-09-26 14:10 ` [PATCH 3/6] drm/lima: " Qiang Yu
2019-09-26 14:10 ` [PATCH 4/6] drm/lima: use drm_gem_shmem_helpers Qiang Yu
2019-09-26 14:10 ` [PATCH 5/6] drm/lima: use drm_gem_(un)lock_reservations Qiang Yu
2019-09-26 14:10 ` [PATCH 6/6] drm/lima: add __GFP_NOWARN flag to all dma_alloc_wc Qiang Yu
2019-09-26 15:00 ` [PATCH 0/6] drm/lima: simplify driver by using more drm helpers Steven Price
