* [PATCH v4 0/7] drm/panfrost: Add a new submit ioctl
@ 2021-07-05  8:29 Boris Brezillon
  2021-07-05  8:29 ` [PATCH v4 1/7] drm/panfrost: Pass a job to panfrost_{acquire, attach}_object_fences() Boris Brezillon
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Boris Brezillon @ 2021-07-05  8:29 UTC (permalink / raw)
  To: Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig, Steven Price, Robin Murphy
  Cc: Jason Ekstrand, dri-devel, Boris Brezillon

Hello,

This is an attempt at providing a new submit ioctl that's more
Vulkan-friendly than the existing one. This ioctl

1/ allows passing several out syncobjs, so we can easily update
   several fences/semaphores in a single ioctl() call
2/ allows passing several jobs, so we don't need one ioctl per
   job chain recorded in the command buffer
3/ supports disabling implicit dependencies as well as
   non-exclusive access to BOs, thus removing unnecessary
   synchronization (a rough usage sketch follows)
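
To make 2/ and 3/ more concrete, here's a rough userspace sketch against
the uAPI introduced in patch 5. It's only a sketch: the handles, job-chain
addresses and the header path are placeholders.

#include <stdint.h>
#include <xf86drm.h>
#include "panfrost_drm.h"	/* uAPI header from this series */

static int submit_two_jobs(int fd, uint64_t jc0, uint64_t jc1,
			   uint32_t bo, uint32_t sem0, uint32_t sem1)
{
	/* One read-only BO reference: no exclusive fence is attached, so
	 * readers don't serialize against each other.
	 */
	struct drm_panfrost_bo_ref bos[] = {
		{ .handle = bo, .flags = 0 },
	};
	/* Two binary syncobjs signaled by the second job. */
	struct drm_panfrost_syncobj_ref out_syncs[] = {
		{ .handle = sem0 }, { .handle = sem1 },
	};
	struct drm_panfrost_job jobs[] = {
		{ .head = jc0, .bos = (uintptr_t)bos, .bo_count = 1 },
		{ .head = jc1, .out_syncs = (uintptr_t)out_syncs,
		  .out_sync_count = 2 },
	};
	struct drm_panfrost_batch_submit args = {
		.version = PANFROST_SUBMIT_BATCH_VERSION,
		.job_count = 2,
		.jobs = (uintptr_t)jobs,
		.queue = 0,	/* default submit queue */
	};

	return drmIoctl(fd, DRM_IOCTL_PANFROST_BATCH_SUBMIT, &args);
}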

I've also been looking at adding {IN,OUT}_FENCE_FD support (allowing
one to pass at most one sync_file object as input and/or to create a
sync_file FD embedding the render out fence), but it's not entirely
clear to me when that's useful. We can already do the
sync_file <-> syncobj conversion using the
SYNCOBJ_{FD_TO_HANDLE,HANDLE_TO_FD} ioctls if we have to.
Note that, unlike Turnip, PanVk uses syncobjs to implement
vkQueueWaitIdle(), so the syncobj -> sync_file conversion doesn't
have to happen on every submission, but maybe there's a good reason
to use sync_files there too. Any feedback on that aspect would
be welcome.
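
For reference, that conversion is already a one-liner through the
generic syncobj ioctls; here is a sketch using their libdrm wrappers
(the helper names are only illustrative):

#include <stdint.h>
#include <xf86drm.h>

/* Export the render-out fence stored in a syncobj as a sync_file FD
 * (DRM_IOCTL_SYNCOBJ_HANDLE_TO_FD with the EXPORT_SYNC_FILE flag).
 */
static int export_out_fence(int fd, uint32_t syncobj, int *sync_file_fd)
{
	return drmSyncobjExportSyncFile(fd, syncobj, sync_file_fd);
}

/* Import an external sync_file FD into a syncobj that can then be used
 * as an in_sync (DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE with IMPORT_SYNC_FILE).
 */
static int import_in_fence(int fd, uint32_t syncobj, int sync_file_fd)
{
	return drmSyncobjImportSyncFile(fd, syncobj, sync_file_fd);
}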

Any feedback on this new ioctl is welcome. In particular, do you
think anything else is missing or would be nice to have for Vulkan?

Regards,

Boris

P.S.: basic igt tests for these new ioctls are available here [1]

[1] https://gitlab.freedesktop.org/bbrezillon/igt-gpu-tools/-/tree/panfrost-batch-submit

Changes in v4:
* Replace the desc strides by a version field
* Change the submitqueue_create() prototype to return the queue id
  directly
* Implement the old submit ioctl() as a simple wrapper around
  panfrost_submit_job()

Changes in v3:
* Fix a deadlock in the submitqueue logic
* Limit the number of submitqueue per context to 16

Boris Brezillon (7):
  drm/panfrost: Pass a job to panfrost_{acquire,attach}_object_fences()
  drm/panfrost: Move the mappings collection out of
    panfrost_lookup_bos()
  drm/panfrost: Add BO access flags to relax dependencies between jobs
  drm/panfrost: Add the ability to create submit queues
  drm/panfrost: Add a new ioctl to submit batches
  drm/panfrost: Advertise the SYNCOBJ_TIMELINE feature
  drm/panfrost: Bump minor version to reflect the feature additions

 drivers/gpu/drm/panfrost/Makefile             |   3 +-
 drivers/gpu/drm/panfrost/panfrost_device.h    |   2 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c       | 611 +++++++++++++-----
 drivers/gpu/drm/panfrost/panfrost_job.c       |  89 ++-
 drivers/gpu/drm/panfrost/panfrost_job.h       |  10 +-
 .../gpu/drm/panfrost/panfrost_submitqueue.c   | 132 ++++
 .../gpu/drm/panfrost/panfrost_submitqueue.h   |  26 +
 include/uapi/drm/panfrost_drm.h               | 112 ++++
 8 files changed, 766 insertions(+), 219 deletions(-)
 create mode 100644 drivers/gpu/drm/panfrost/panfrost_submitqueue.c
 create mode 100644 drivers/gpu/drm/panfrost/panfrost_submitqueue.h

-- 
2.31.1


* [PATCH v4 1/7] drm/panfrost: Pass a job to panfrost_{acquire, attach}_object_fences()
  2021-07-05  8:29 [PATCH v4 0/7] drm/panfrost: Add a new submit ioctl Boris Brezillon
@ 2021-07-05  8:29 ` Boris Brezillon
  2021-07-05  8:29 ` [PATCH v4 2/7] drm/panfrost: Move the mappings collection out of panfrost_lookup_bos() Boris Brezillon
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2021-07-05  8:29 UTC (permalink / raw)
  To: Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig, Steven Price, Robin Murphy
  Cc: Jason Ekstrand, dri-devel, Boris Brezillon

So we don't have to change the prototype if we extend the function.

v3:
* Fix subject

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
---
 drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 71a72fb50e6b..fdc1bd7ecf12 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -240,15 +240,13 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
 	spin_unlock(&pfdev->js->job_lock);
 }
 
-static int panfrost_acquire_object_fences(struct drm_gem_object **bos,
-					  int bo_count,
-					  struct xarray *deps)
+static int panfrost_acquire_object_fences(struct panfrost_job *job)
 {
 	int i, ret;
 
-	for (i = 0; i < bo_count; i++) {
+	for (i = 0; i < job->bo_count; i++) {
 		/* panfrost always uses write mode in its current uapi */
-		ret = drm_gem_fence_array_add_implicit(deps, bos[i], true);
+		ret = drm_gem_fence_array_add_implicit(&job->deps, job->bos[i], true);
 		if (ret)
 			return ret;
 	}
@@ -256,14 +254,12 @@ static int panfrost_acquire_object_fences(struct drm_gem_object **bos,
 	return 0;
 }
 
-static void panfrost_attach_object_fences(struct drm_gem_object **bos,
-					  int bo_count,
-					  struct dma_fence *fence)
+static void panfrost_attach_object_fences(struct panfrost_job *job)
 {
 	int i;
 
-	for (i = 0; i < bo_count; i++)
-		dma_resv_add_excl_fence(bos[i]->resv, fence);
+	for (i = 0; i < job->bo_count; i++)
+		dma_resv_add_excl_fence(job->bos[i]->resv, job->render_done_fence);
 }
 
 int panfrost_job_push(struct panfrost_job *job)
@@ -290,8 +286,7 @@ int panfrost_job_push(struct panfrost_job *job)
 
 	job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
 
-	ret = panfrost_acquire_object_fences(job->bos, job->bo_count,
-					     &job->deps);
+	ret = panfrost_acquire_object_fences(job);
 	if (ret) {
 		mutex_unlock(&pfdev->sched_lock);
 		goto unlock;
@@ -303,8 +298,7 @@ int panfrost_job_push(struct panfrost_job *job)
 
 	mutex_unlock(&pfdev->sched_lock);
 
-	panfrost_attach_object_fences(job->bos, job->bo_count,
-				      job->render_done_fence);
+	panfrost_attach_object_fences(job);
 
 unlock:
 	drm_gem_unlock_reservations(job->bos, job->bo_count, &acquire_ctx);
-- 
2.31.1


* [PATCH v4 2/7] drm/panfrost: Move the mappings collection out of panfrost_lookup_bos()
  2021-07-05  8:29 [PATCH v4 0/7] drm/panfrost: Add a new submit ioctl Boris Brezillon
  2021-07-05  8:29 ` [PATCH v4 1/7] drm/panfrost: Pass a job to panfrost_{acquire, attach}_object_fences() Boris Brezillon
@ 2021-07-05  8:29 ` Boris Brezillon
  2021-07-05  8:29 ` [PATCH v4 3/7] drm/panfrost: Add BO access flags to relax dependencies between jobs Boris Brezillon
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2021-07-05  8:29 UTC (permalink / raw)
  To: Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig, Steven Price, Robin Murphy
  Cc: Jason Ekstrand, dri-devel, Boris Brezillon

So we can re-use it from elsewhere.

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 52 ++++++++++++++-----------
 1 file changed, 29 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 1ffaef5ec5ff..9bbc9e78cc85 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -109,6 +109,34 @@ static int panfrost_ioctl_create_bo(struct drm_device *dev, void *data,
 	return 0;
 }
 
+static int
+panfrost_get_job_mappings(struct drm_file *file_priv, struct panfrost_job *job)
+{
+	struct panfrost_file_priv *priv = file_priv->driver_priv;
+	unsigned int i;
+
+	job->mappings = kvmalloc_array(job->bo_count,
+				       sizeof(*job->mappings),
+				       GFP_KERNEL | __GFP_ZERO);
+	if (!job->mappings)
+		return -ENOMEM;
+
+	for (i = 0; i < job->bo_count; i++) {
+		struct panfrost_gem_mapping *mapping;
+		struct panfrost_gem_object *bo;
+
+		bo = to_panfrost_bo(job->bos[i]);
+		mapping = panfrost_gem_mapping_get(bo, priv);
+		if (!mapping)
+			return -EINVAL;
+
+		atomic_inc(&bo->gpu_usecount);
+		job->mappings[i] = mapping;
+	}
+
+	return 0;
+}
+
 /**
  * panfrost_lookup_bos() - Sets up job->bo[] with the GEM objects
  * referenced by the job.
@@ -128,8 +156,6 @@ panfrost_lookup_bos(struct drm_device *dev,
 		  struct drm_panfrost_submit *args,
 		  struct panfrost_job *job)
 {
-	struct panfrost_file_priv *priv = file_priv->driver_priv;
-	struct panfrost_gem_object *bo;
 	unsigned int i;
 	int ret;
 
@@ -144,27 +170,7 @@ panfrost_lookup_bos(struct drm_device *dev,
 	if (ret)
 		return ret;
 
-	job->mappings = kvmalloc_array(job->bo_count,
-				       sizeof(struct panfrost_gem_mapping *),
-				       GFP_KERNEL | __GFP_ZERO);
-	if (!job->mappings)
-		return -ENOMEM;
-
-	for (i = 0; i < job->bo_count; i++) {
-		struct panfrost_gem_mapping *mapping;
-
-		bo = to_panfrost_bo(job->bos[i]);
-		mapping = panfrost_gem_mapping_get(bo, priv);
-		if (!mapping) {
-			ret = -EINVAL;
-			break;
-		}
-
-		atomic_inc(&bo->gpu_usecount);
-		job->mappings[i] = mapping;
-	}
-
-	return ret;
+	return panfrost_get_job_mappings(file_priv, job);
 }
 
 /**
-- 
2.31.1


* [PATCH v4 3/7] drm/panfrost: Add BO access flags to relax dependencies between jobs
  2021-07-05  8:29 [PATCH v4 0/7] drm/panfrost: Add a new submit ioctl Boris Brezillon
  2021-07-05  8:29 ` [PATCH v4 1/7] drm/panfrost: Pass a job to panfrost_{acquire, attach}_object_fences() Boris Brezillon
  2021-07-05  8:29 ` [PATCH v4 2/7] drm/panfrost: Move the mappings collection out of panfrost_lookup_bos() Boris Brezillon
@ 2021-07-05  8:29 ` Boris Brezillon
  2021-07-05  8:29 ` [PATCH v4 4/7] drm/panfrost: Add the ability to create submit queues Boris Brezillon
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2021-07-05  8:29 UTC (permalink / raw)
  To: Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig, Steven Price, Robin Murphy
  Cc: Jason Ekstrand, dri-devel, Boris Brezillon

Jobs reading from the same BO should not be serialized. Add access
flags so we can relax the implicit dependencies in that case. We force
exclusive access for now to keep the behavior unchanged, but a new
SUBMIT ioctl taking explicit access flags will be introduced.
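
For illustration, here is how a BO reference will look on that new
submit path. This is only a sketch to clarify the flag semantics: the
drm_panfrost_bo_ref struct comes later in the series, and the handles
are placeholders.

#include <stdint.h>
#include "panfrost_drm.h"	/* uAPI header from this series */

/* Fill BO references for a job that samples from 'texture_bo' and renders
 * to 'render_target_bo'. Readers only attach a shared fence and only wait
 * on the exclusive (write) fence, so two jobs sampling from the same
 * texture no longer serialize against each other.
 */
static void fill_bo_refs(struct drm_panfrost_bo_ref refs[2],
			 uint32_t texture_bo, uint32_t render_target_bo)
{
	refs[0].handle = texture_bo;
	refs[0].flags = 0;				/* shared (read) access */
	refs[1].handle = render_target_bo;
	refs[1].flags = PANFROST_BO_REF_EXCLUSIVE;	/* exclusive (write) access */
}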

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
---
 drivers/gpu/drm/panfrost/panfrost_drv.c |  9 +++++++++
 drivers/gpu/drm/panfrost/panfrost_job.c | 23 +++++++++++++++++++----
 drivers/gpu/drm/panfrost/panfrost_job.h |  1 +
 include/uapi/drm/panfrost_drm.h         |  3 +++
 4 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 9bbc9e78cc85..b6b5997c9366 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -164,6 +164,15 @@ panfrost_lookup_bos(struct drm_device *dev,
 	if (!job->bo_count)
 		return 0;
 
+	job->bo_flags = kvmalloc_array(job->bo_count,
+				       sizeof(*job->bo_flags),
+				       GFP_KERNEL | __GFP_ZERO);
+	if (!job->bo_flags)
+		return -ENOMEM;
+
+	for (i = 0; i < job->bo_count; i++)
+		job->bo_flags[i] = PANFROST_BO_REF_EXCLUSIVE;
+
 	ret = drm_gem_objects_lookup(file_priv,
 				     (void __user *)(uintptr_t)args->bo_handles,
 				     job->bo_count, &job->bos);
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index fdc1bd7ecf12..152245b122be 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -245,8 +245,16 @@ static int panfrost_acquire_object_fences(struct panfrost_job *job)
 	int i, ret;
 
 	for (i = 0; i < job->bo_count; i++) {
-		/* panfrost always uses write mode in its current uapi */
-		ret = drm_gem_fence_array_add_implicit(&job->deps, job->bos[i], true);
+		bool exclusive = job->bo_flags[i] & PANFROST_BO_REF_EXCLUSIVE;
+
+		if (!exclusive) {
+			ret = dma_resv_reserve_shared(job->bos[i]->resv, 1);
+			if (ret)
+				return ret;
+		}
+
+		ret = drm_gem_fence_array_add_implicit(&job->deps, job->bos[i],
+						       exclusive);
 		if (ret)
 			return ret;
 	}
@@ -258,8 +266,14 @@ static void panfrost_attach_object_fences(struct panfrost_job *job)
 {
 	int i;
 
-	for (i = 0; i < job->bo_count; i++)
-		dma_resv_add_excl_fence(job->bos[i]->resv, job->render_done_fence);
+	for (i = 0; i < job->bo_count; i++) {
+		struct dma_resv *robj = job->bos[i]->resv;
+
+		if (job->bo_flags[i] & PANFROST_BO_REF_EXCLUSIVE)
+			dma_resv_add_excl_fence(robj, job->render_done_fence);
+		else
+			dma_resv_add_shared_fence(robj, job->render_done_fence);
+	}
 }
 
 int panfrost_job_push(struct panfrost_job *job)
@@ -340,6 +354,7 @@ static void panfrost_job_cleanup(struct kref *ref)
 		kvfree(job->bos);
 	}
 
+	kvfree(job->bo_flags);
 	kfree(job);
 }
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.h b/drivers/gpu/drm/panfrost/panfrost_job.h
index 82306a03b57e..1cbc3621b663 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.h
+++ b/drivers/gpu/drm/panfrost/panfrost_job.h
@@ -32,6 +32,7 @@ struct panfrost_job {
 
 	struct panfrost_gem_mapping **mappings;
 	struct drm_gem_object **bos;
+	u32 *bo_flags;
 	u32 bo_count;
 
 	/* Fence to be signaled by drm-sched once its done with the job */
diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
index 061e700dd06c..3723c9d231b5 100644
--- a/include/uapi/drm/panfrost_drm.h
+++ b/include/uapi/drm/panfrost_drm.h
@@ -224,6 +224,9 @@ struct drm_panfrost_madvise {
 	__u32 retained;       /* out, whether backing store still exists */
 };
 
+/* Exclusive (AKA write) access to the BO */
+#define PANFROST_BO_REF_EXCLUSIVE	0x1
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.31.1


* [PATCH v4 4/7] drm/panfrost: Add the ability to create submit queues
  2021-07-05  8:29 [PATCH v4 0/7] drm/panfrost: Add a new submit ioctl Boris Brezillon
                   ` (2 preceding siblings ...)
  2021-07-05  8:29 ` [PATCH v4 3/7] drm/panfrost: Add BO access flags to relax dependencies between jobs Boris Brezillon
@ 2021-07-05  8:29 ` Boris Brezillon
  2021-07-05  8:56   ` Steven Price
  2021-07-05  8:29 ` [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches Boris Brezillon
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Boris Brezillon @ 2021-07-05  8:29 UTC (permalink / raw)
  To: Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig, Steven Price, Robin Murphy
  Cc: Jason Ekstrand, dri-devel, Boris Brezillon

Needed to keep VkQueues isolated from each other.
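
For context, the expected userspace usage looks roughly like this
(a sketch, not a final implementation; the header path is a
placeholder):

#include <stdint.h>
#include <xf86drm.h>
#include "panfrost_drm.h"	/* uAPI header from this series */

static int create_high_prio_queue(int fd, uint32_t *queue_id)
{
	struct drm_panfrost_create_submitqueue args = {
		.flags = 0,	/* no flags defined yet, must be 0 */
		.priority = PANFROST_SUBMITQUEUE_PRIORITY_HIGH,
	};
	int ret;

	ret = drmIoctl(fd, DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE, &args);
	if (ret)
		return ret;

	*queue_id = args.id;
	return 0;
}

static int destroy_queue(int fd, uint32_t queue_id)
{
	/* The ioctl takes the queue id directly as its data. */
	return drmIoctl(fd, DRM_IOCTL_PANFROST_DESTROY_SUBMITQUEUE, &queue_id);
}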

v4:
* Make panfrost_ioctl_create_submitqueue() return the queue ID
  instead of a queue object

v3:
* Limit the number of submitqueue per context to 16
* Fix a deadlock

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
---
 drivers/gpu/drm/panfrost/Makefile             |   3 +-
 drivers/gpu/drm/panfrost/panfrost_device.h    |   2 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  69 ++++++++-
 drivers/gpu/drm/panfrost/panfrost_job.c       |  47 ++-----
 drivers/gpu/drm/panfrost/panfrost_job.h       |   9 +-
 .../gpu/drm/panfrost/panfrost_submitqueue.c   | 132 ++++++++++++++++++
 .../gpu/drm/panfrost/panfrost_submitqueue.h   |  26 ++++
 include/uapi/drm/panfrost_drm.h               |  17 +++
 8 files changed, 260 insertions(+), 45 deletions(-)
 create mode 100644 drivers/gpu/drm/panfrost/panfrost_submitqueue.c
 create mode 100644 drivers/gpu/drm/panfrost/panfrost_submitqueue.h

diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
index b71935862417..e99192b66ec9 100644
--- a/drivers/gpu/drm/panfrost/Makefile
+++ b/drivers/gpu/drm/panfrost/Makefile
@@ -9,6 +9,7 @@ panfrost-y := \
 	panfrost_gpu.o \
 	panfrost_job.o \
 	panfrost_mmu.o \
-	panfrost_perfcnt.o
+	panfrost_perfcnt.o \
+	panfrost_submitqueue.o
 
 obj-$(CONFIG_DRM_PANFROST) += panfrost.o
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 8b25278f34c8..51c0ba4e50f5 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -137,7 +137,7 @@ struct panfrost_mmu {
 struct panfrost_file_priv {
 	struct panfrost_device *pfdev;
 
-	struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
+	struct idr queues;
 
 	struct panfrost_mmu *mmu;
 };
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index b6b5997c9366..8e28ef30310b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -19,6 +19,7 @@
 #include "panfrost_job.h"
 #include "panfrost_gpu.h"
 #include "panfrost_perfcnt.h"
+#include "panfrost_submitqueue.h"
 
 static bool unstable_ioctls;
 module_param_unsafe(unstable_ioctls, bool, 0600);
@@ -250,6 +251,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
 	struct panfrost_device *pfdev = dev->dev_private;
 	struct drm_panfrost_submit *args = data;
 	struct drm_syncobj *sync_out = NULL;
+	struct panfrost_submitqueue *queue;
 	struct panfrost_job *job;
 	int ret = 0;
 
@@ -259,10 +261,16 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
 	if (args->requirements && args->requirements != PANFROST_JD_REQ_FS)
 		return -EINVAL;
 
+	queue = panfrost_submitqueue_get(file->driver_priv, 0);
+	if (IS_ERR(queue))
+		return PTR_ERR(queue);
+
 	if (args->out_sync > 0) {
 		sync_out = drm_syncobj_find(file, args->out_sync);
-		if (!sync_out)
-			return -ENODEV;
+		if (!sync_out) {
+			ret = -ENODEV;
+			goto fail_put_queue;
+		}
 	}
 
 	job = kzalloc(sizeof(*job), GFP_KERNEL);
@@ -289,7 +297,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
 	if (ret)
 		goto fail_job;
 
-	ret = panfrost_job_push(job);
+	ret = panfrost_job_push(queue, job);
 	if (ret)
 		goto fail_job;
 
@@ -302,6 +310,8 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
 fail_out_sync:
 	if (sync_out)
 		drm_syncobj_put(sync_out);
+fail_put_queue:
+	panfrost_submitqueue_put(queue);
 
 	return ret;
 }
@@ -451,6 +461,36 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 	return ret;
 }
 
+static int
+panfrost_ioctl_create_submitqueue(struct drm_device *dev, void *data,
+				  struct drm_file *file_priv)
+{
+	struct panfrost_file_priv *priv = file_priv->driver_priv;
+	struct drm_panfrost_create_submitqueue *args = data;
+	int ret;
+
+	ret = panfrost_submitqueue_create(priv, args->priority, args->flags);
+	if (ret < 0)
+		return ret;
+
+	args->id = ret;
+	return 0;
+}
+
+static int
+panfrost_ioctl_destroy_submitqueue(struct drm_device *dev, void *data,
+				   struct drm_file *file_priv)
+{
+	struct panfrost_file_priv *priv = file_priv->driver_priv;
+	u32 id = *((u32 *)data);
+
+	/* Default queue can't be destroyed. */
+	if (!id)
+		return -ENOENT;
+
+	return panfrost_submitqueue_destroy(priv, id);
+}
+
 int panfrost_unstable_ioctl_check(void)
 {
 	if (!unstable_ioctls)
@@ -479,13 +519,22 @@ panfrost_open(struct drm_device *dev, struct drm_file *file)
 		goto err_free;
 	}
 
-	ret = panfrost_job_open(panfrost_priv);
+	idr_init(&panfrost_priv->queues);
+
+	ret = panfrost_submitqueue_create(panfrost_priv,
+					  PANFROST_SUBMITQUEUE_PRIORITY_MEDIUM,
+					  0);
+
+	/* We expect the default queue to get id 0; any non-zero return
+	 * (a positive queue id or an error code) is treated as a failure.
+	 */
 	if (ret)
-		goto err_job;
+		goto err_destroy_idr;
 
 	return 0;
 
-err_job:
+err_destroy_idr:
+	idr_destroy(&panfrost_priv->queues);
 	panfrost_mmu_ctx_put(panfrost_priv->mmu);
 err_free:
 	kfree(panfrost_priv);
@@ -496,11 +545,15 @@ static void
 panfrost_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct panfrost_file_priv *panfrost_priv = file->driver_priv;
+	u32 id;
 
 	panfrost_perfcnt_close(file);
-	panfrost_job_close(panfrost_priv);
+
+	for (id = 0; idr_get_next(&panfrost_priv->queues, &id); id++)
+		panfrost_submitqueue_destroy(panfrost_priv, id);
 
 	panfrost_mmu_ctx_put(panfrost_priv->mmu);
+	idr_destroy(&panfrost_priv->queues);
 	kfree(panfrost_priv);
 }
 
@@ -517,6 +570,8 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
 	PANFROST_IOCTL(PERFCNT_ENABLE,	perfcnt_enable,	DRM_RENDER_ALLOW),
 	PANFROST_IOCTL(PERFCNT_DUMP,	perfcnt_dump,	DRM_RENDER_ALLOW),
 	PANFROST_IOCTL(MADVISE,		madvise,	DRM_RENDER_ALLOW),
+	PANFROST_IOCTL(CREATE_SUBMITQUEUE, create_submitqueue, DRM_RENDER_ALLOW),
+	PANFROST_IOCTL(DESTROY_SUBMITQUEUE, destroy_submitqueue, DRM_RENDER_ALLOW),
 };
 
 DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 152245b122be..56ae89272e19 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -20,6 +20,7 @@
 #include "panfrost_regs.h"
 #include "panfrost_gpu.h"
 #include "panfrost_mmu.h"
+#include "panfrost_submitqueue.h"
 
 #define JOB_TIMEOUT_MS 500
 
@@ -276,15 +277,15 @@ static void panfrost_attach_object_fences(struct panfrost_job *job)
 	}
 }
 
-int panfrost_job_push(struct panfrost_job *job)
+int panfrost_job_push(struct panfrost_submitqueue *queue,
+		      struct panfrost_job *job)
 {
 	struct panfrost_device *pfdev = job->pfdev;
 	int slot = panfrost_job_get_slot(job);
-	struct drm_sched_entity *entity = &job->file_priv->sched_entity[slot];
+	struct drm_sched_entity *entity = &queue->sched_entity[slot];
 	struct ww_acquire_ctx acquire_ctx;
 	int ret = 0;
 
-
 	ret = drm_gem_lock_reservations(job->bos, job->bo_count,
 					    &acquire_ctx);
 	if (ret)
@@ -881,43 +882,18 @@ void panfrost_job_fini(struct panfrost_device *pfdev)
 	destroy_workqueue(pfdev->reset.wq);
 }
 
-int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
+void panfrost_job_kill_queue(struct panfrost_submitqueue *queue)
 {
-	struct panfrost_device *pfdev = panfrost_priv->pfdev;
-	struct panfrost_job_slot *js = pfdev->js;
-	struct drm_gpu_scheduler *sched;
-	int ret, i;
+	struct panfrost_device *pfdev = queue->pfdev;
+	int i, j;
 
-	for (i = 0; i < NUM_JOB_SLOTS; i++) {
-		sched = &js->queue[i].sched;
-		ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i],
-					    DRM_SCHED_PRIORITY_NORMAL, &sched,
-					    1, NULL);
-		if (WARN_ON(ret))
-			return ret;
-	}
-	return 0;
-}
-
-void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
-{
-	struct panfrost_device *pfdev = panfrost_priv->pfdev;
-	int i;
-
-	for (i = 0; i < NUM_JOB_SLOTS; i++)
-		drm_sched_entity_destroy(&panfrost_priv->sched_entity[i]);
-
-	/* Kill in-flight jobs */
 	spin_lock(&pfdev->js->job_lock);
 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
-		struct drm_sched_entity *entity = &panfrost_priv->sched_entity[i];
-		int j;
-
 		for (j = ARRAY_SIZE(pfdev->jobs[0]) - 1; j >= 0; j--) {
 			struct panfrost_job *job = pfdev->jobs[i][j];
 			u32 cmd;
 
-			if (!job || job->base.entity != entity)
+			if (!job || job->base.entity != &queue->sched_entity[i])
 				continue;
 
 			if (j == 1) {
@@ -936,7 +912,6 @@ void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
 			} else {
 				cmd = JS_COMMAND_HARD_STOP;
 			}
-
 			job_write(pfdev, JS_COMMAND(i), cmd);
 		}
 	}
@@ -956,3 +931,9 @@ int panfrost_job_is_idle(struct panfrost_device *pfdev)
 
 	return true;
 }
+
+struct drm_gpu_scheduler *
+panfrost_job_get_sched(struct panfrost_device *pfdev, unsigned int js)
+{
+	return &pfdev->js->queue[js].sched;
+}
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.h b/drivers/gpu/drm/panfrost/panfrost_job.h
index 1cbc3621b663..5c228bb431c0 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.h
+++ b/drivers/gpu/drm/panfrost/panfrost_job.h
@@ -10,6 +10,7 @@
 struct panfrost_device;
 struct panfrost_gem_object;
 struct panfrost_file_priv;
+struct panfrost_submitqueue;
 
 struct panfrost_job {
 	struct drm_sched_job base;
@@ -41,11 +42,13 @@ struct panfrost_job {
 
 int panfrost_job_init(struct panfrost_device *pfdev);
 void panfrost_job_fini(struct panfrost_device *pfdev);
-int panfrost_job_open(struct panfrost_file_priv *panfrost_priv);
-void panfrost_job_close(struct panfrost_file_priv *panfrost_priv);
-int panfrost_job_push(struct panfrost_job *job);
+int panfrost_job_push(struct panfrost_submitqueue *queue,
+		      struct panfrost_job *job);
 void panfrost_job_put(struct panfrost_job *job);
 void panfrost_job_enable_interrupts(struct panfrost_device *pfdev);
 int panfrost_job_is_idle(struct panfrost_device *pfdev);
+void panfrost_job_kill_queue(struct panfrost_submitqueue *queue);
+struct drm_gpu_scheduler *
+panfrost_job_get_sched(struct panfrost_device *pfdev, unsigned int js);
 
 #endif
diff --git a/drivers/gpu/drm/panfrost/panfrost_submitqueue.c b/drivers/gpu/drm/panfrost/panfrost_submitqueue.c
new file mode 100644
index 000000000000..8944b4410be3
--- /dev/null
+++ b/drivers/gpu/drm/panfrost/panfrost_submitqueue.c
@@ -0,0 +1,132 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright 2021 Collabora ltd. */
+
+#include <linux/idr.h>
+
+#include "panfrost_device.h"
+#include "panfrost_job.h"
+#include "panfrost_submitqueue.h"
+
+#define PAN_MAX_SUBMITQUEUES	16
+
+static enum drm_sched_priority
+to_sched_prio(enum panfrost_submitqueue_priority priority)
+{
+	switch (priority) {
+	case PANFROST_SUBMITQUEUE_PRIORITY_LOW:
+		return DRM_SCHED_PRIORITY_MIN;
+	case PANFROST_SUBMITQUEUE_PRIORITY_MEDIUM:
+		return DRM_SCHED_PRIORITY_NORMAL;
+	case PANFROST_SUBMITQUEUE_PRIORITY_HIGH:
+		return DRM_SCHED_PRIORITY_HIGH;
+	default:
+		break;
+	}
+
+	return DRM_SCHED_PRIORITY_UNSET;
+}
+
+static void
+panfrost_submitqueue_cleanup(struct kref *ref)
+{
+	struct panfrost_submitqueue *queue;
+	unsigned int i;
+
+	queue = container_of(ref, struct panfrost_submitqueue, refcount);
+
+	for (i = 0; i < NUM_JOB_SLOTS; i++)
+		drm_sched_entity_destroy(&queue->sched_entity[i]);
+
+	/* Kill in-flight jobs */
+	panfrost_job_kill_queue(queue);
+
+	kfree(queue);
+}
+
+void panfrost_submitqueue_put(struct panfrost_submitqueue *queue)
+{
+	if (!IS_ERR_OR_NULL(queue))
+		kref_put(&queue->refcount, panfrost_submitqueue_cleanup);
+}
+
+int
+panfrost_submitqueue_create(struct panfrost_file_priv *ctx,
+			    enum panfrost_submitqueue_priority priority,
+			    u32 flags)
+{
+	struct panfrost_submitqueue *queue;
+	enum drm_sched_priority sched_prio;
+	int ret, i;
+
+	if (flags || priority >= PANFROST_SUBMITQUEUE_PRIORITY_COUNT)
+		return -EINVAL;
+
+	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+	if (!queue)
+		return -ENOMEM;
+
+	queue->pfdev = ctx->pfdev;
+	sched_prio = to_sched_prio(priority);
+	for (i = 0; i < NUM_JOB_SLOTS; i++) {
+		struct drm_gpu_scheduler *sched;
+
+		sched = panfrost_job_get_sched(ctx->pfdev, i);
+		ret = drm_sched_entity_init(&queue->sched_entity[i],
+					    sched_prio, &sched, 1, NULL);
+		if (ret)
+			break;
+	}
+
+	if (ret) {
+		for (i--; i >= 0; i--)
+			drm_sched_entity_destroy(&queue->sched_entity[i]);
+
+		return ret;
+	}
+
+	kref_init(&queue->refcount);
+
+	idr_preload(GFP_KERNEL);
+	idr_lock(&ctx->queues);
+	ret = idr_alloc(&ctx->queues, queue, 0, PAN_MAX_SUBMITQUEUES,
+			GFP_NOWAIT);
+	idr_unlock(&ctx->queues);
+	idr_preload_end();
+
+	if (ret < 0)
+		panfrost_submitqueue_put(queue);
+
+	return ret;
+}
+
+int panfrost_submitqueue_destroy(struct panfrost_file_priv *ctx, u32 id)
+{
+	struct panfrost_submitqueue *queue;
+
+	idr_lock(&ctx->queues);
+	queue = idr_remove(&ctx->queues, id);
+	idr_unlock(&ctx->queues);
+
+	if (!queue)
+		return -ENOENT;
+
+	panfrost_submitqueue_put(queue);
+	return 0;
+}
+
+struct panfrost_submitqueue *
+panfrost_submitqueue_get(struct panfrost_file_priv *ctx, u32 id)
+{
+	struct panfrost_submitqueue *queue;
+
+	idr_lock(&ctx->queues);
+	queue = idr_find(&ctx->queues, id);
+	if (queue)
+		kref_get(&queue->refcount);
+	idr_unlock(&ctx->queues);
+
+	if (!queue)
+		return ERR_PTR(-ENOENT);
+
+	return queue;
+}
diff --git a/drivers/gpu/drm/panfrost/panfrost_submitqueue.h b/drivers/gpu/drm/panfrost/panfrost_submitqueue.h
new file mode 100644
index 000000000000..ade224725844
--- /dev/null
+++ b/drivers/gpu/drm/panfrost/panfrost_submitqueue.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright 2021 Collabora ltd. */
+
+#ifndef __PANFROST_SUBMITQUEUE_H__
+#define __PANFROST_SUBMITQUEUE_H__
+
+#include <drm/panfrost_drm.h>
+
+struct panfrost_submitqueue {
+	struct kref refcount;
+	struct panfrost_device *pfdev;
+	struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
+};
+
+struct panfrost_file_priv;
+
+int
+panfrost_submitqueue_create(struct panfrost_file_priv *ctx,
+			    enum panfrost_submitqueue_priority priority,
+			    u32 flags);
+int panfrost_submitqueue_destroy(struct panfrost_file_priv *ctx, u32 id);
+struct panfrost_submitqueue *
+panfrost_submitqueue_get(struct panfrost_file_priv *ctx, u32 id);
+void panfrost_submitqueue_put(struct panfrost_submitqueue *queue);
+
+#endif
diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
index 3723c9d231b5..e31a22c176d9 100644
--- a/include/uapi/drm/panfrost_drm.h
+++ b/include/uapi/drm/panfrost_drm.h
@@ -21,6 +21,8 @@ extern "C" {
 #define DRM_PANFROST_PERFCNT_ENABLE		0x06
 #define DRM_PANFROST_PERFCNT_DUMP		0x07
 #define DRM_PANFROST_MADVISE			0x08
+#define DRM_PANFROST_CREATE_SUBMITQUEUE		0x09
+#define DRM_PANFROST_DESTROY_SUBMITQUEUE	0x0a
 
 #define DRM_IOCTL_PANFROST_SUBMIT		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_SUBMIT, struct drm_panfrost_submit)
 #define DRM_IOCTL_PANFROST_WAIT_BO		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_WAIT_BO, struct drm_panfrost_wait_bo)
@@ -29,6 +31,8 @@ extern "C" {
 #define DRM_IOCTL_PANFROST_GET_PARAM		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_GET_PARAM, struct drm_panfrost_get_param)
 #define DRM_IOCTL_PANFROST_GET_BO_OFFSET	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_GET_BO_OFFSET, struct drm_panfrost_get_bo_offset)
 #define DRM_IOCTL_PANFROST_MADVISE		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_MADVISE, struct drm_panfrost_madvise)
+#define DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_CREATE_SUBMITQUEUE, struct drm_panfrost_create_submitqueue)
+#define DRM_IOCTL_PANFROST_DESTROY_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_DESTROY_SUBMITQUEUE, __u32)
 
 /*
  * Unstable ioctl(s): only exposed when the unsafe unstable_ioctls module
@@ -224,6 +228,19 @@ struct drm_panfrost_madvise {
 	__u32 retained;       /* out, whether backing store still exists */
 };
 
+enum panfrost_submitqueue_priority {
+	PANFROST_SUBMITQUEUE_PRIORITY_LOW = 0,
+	PANFROST_SUBMITQUEUE_PRIORITY_MEDIUM,
+	PANFROST_SUBMITQUEUE_PRIORITY_HIGH,
+	PANFROST_SUBMITQUEUE_PRIORITY_COUNT,
+};
+
+struct drm_panfrost_create_submitqueue {
+	__u32 flags;	/* in, PANFROST_SUBMITQUEUE_x */
+	__u32 priority;	/* in, enum panfrost_submitqueue_priority */
+	__u32 id;	/* out, identifier */
+};
+
 /* Exclusive (AKA write) access to the BO */
 #define PANFROST_BO_REF_EXCLUSIVE	0x1
 
-- 
2.31.1


* [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches
  2021-07-05  8:29 [PATCH v4 0/7] drm/panfrost: Add a new submit ioctl Boris Brezillon
                   ` (3 preceding siblings ...)
  2021-07-05  8:29 ` [PATCH v4 4/7] drm/panfrost: Add the ability to create submit queues Boris Brezillon
@ 2021-07-05  8:29 ` Boris Brezillon
  2021-07-05  9:32   ` Daniel Vetter
  2021-07-05  9:42   ` Steven Price
  2021-07-05  8:29 ` [PATCH v4 6/7] drm/panfrost: Advertise the SYNCOBJ_TIMELINE feature Boris Brezillon
  2021-07-05  8:29 ` [PATCH v4 7/7] drm/panfrost: Bump minor version to reflect the feature additions Boris Brezillon
  6 siblings, 2 replies; 14+ messages in thread
From: Boris Brezillon @ 2021-07-05  8:29 UTC (permalink / raw)
  To: Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig, Steven Price, Robin Murphy
  Cc: Jason Ekstrand, dri-devel, Boris Brezillon

This should help limit the number of ioctls when submitting multiple
jobs. The new ioctl also supports syncobj timelines and BO access flags.
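
A rough userspace sketch of the new path follows (the handles,
job-chain address, timeline point and header path are placeholders):

#include <stdint.h>
#include <xf86drm.h>
#include "panfrost_drm.h"	/* uAPI header from this series */

static int submit_one(int fd, uint32_t queue, uint64_t jc,
		      uint32_t bo, uint32_t timeline, uint64_t point)
{
	struct drm_panfrost_bo_ref bos[] = {
		/* Explicitly synchronized BO: skip the implicit dependency. */
		{ .handle = bo,
		  .flags = PANFROST_BO_REF_EXCLUSIVE |
			   PANFROST_BO_REF_NO_IMPLICIT_DEP },
	};
	struct drm_panfrost_syncobj_ref out_syncs[] = {
		/* Signal point 'point' on a timeline syncobj. */
		{ .handle = timeline, .point = point },
	};
	struct drm_panfrost_job job = {
		.head = jc,
		.bos = (uintptr_t)bos,
		.bo_count = 1,
		.out_syncs = (uintptr_t)out_syncs,
		.out_sync_count = 1,
	};
	struct drm_panfrost_batch_submit args = {
		.version = PANFROST_SUBMIT_BATCH_VERSION,
		.job_count = 1,
		.jobs = (uintptr_t)&job,
		.queue = queue,
	};

	return drmIoctl(fd, DRM_IOCTL_PANFROST_BATCH_SUBMIT, &args);
}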

v4:
* Implement panfrost_ioctl_submit() as a wrapper around
  panfrost_submit_job()
* Replace stride fields by a version field which is mapped to
  a <job_stride,bo_ref_stride,syncobj_ref_stride> tuple internally

v3:
* Re-use panfrost_get_job_bos() and panfrost_get_job_in_syncs() in the
  old submit path

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 562 ++++++++++++++++--------
 drivers/gpu/drm/panfrost/panfrost_job.c |   3 +
 include/uapi/drm/panfrost_drm.h         |  92 ++++
 3 files changed, 479 insertions(+), 178 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 8e28ef30310b..a624e4f86aff 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -138,184 +138,6 @@ panfrost_get_job_mappings(struct drm_file *file_priv, struct panfrost_job *job)
 	return 0;
 }
 
-/**
- * panfrost_lookup_bos() - Sets up job->bo[] with the GEM objects
- * referenced by the job.
- * @dev: DRM device
- * @file_priv: DRM file for this fd
- * @args: IOCTL args
- * @job: job being set up
- *
- * Resolve handles from userspace to BOs and attach them to job.
- *
- * Note that this function doesn't need to unreference the BOs on
- * failure, because that will happen at panfrost_job_cleanup() time.
- */
-static int
-panfrost_lookup_bos(struct drm_device *dev,
-		  struct drm_file *file_priv,
-		  struct drm_panfrost_submit *args,
-		  struct panfrost_job *job)
-{
-	unsigned int i;
-	int ret;
-
-	job->bo_count = args->bo_handle_count;
-
-	if (!job->bo_count)
-		return 0;
-
-	job->bo_flags = kvmalloc_array(job->bo_count,
-				       sizeof(*job->bo_flags),
-				       GFP_KERNEL | __GFP_ZERO);
-	if (!job->bo_flags)
-		return -ENOMEM;
-
-	for (i = 0; i < job->bo_count; i++)
-		job->bo_flags[i] = PANFROST_BO_REF_EXCLUSIVE;
-
-	ret = drm_gem_objects_lookup(file_priv,
-				     (void __user *)(uintptr_t)args->bo_handles,
-				     job->bo_count, &job->bos);
-	if (ret)
-		return ret;
-
-	return panfrost_get_job_mappings(file_priv, job);
-}
-
-/**
- * panfrost_copy_in_sync() - Sets up job->deps with the sync objects
- * referenced by the job.
- * @dev: DRM device
- * @file_priv: DRM file for this fd
- * @args: IOCTL args
- * @job: job being set up
- *
- * Resolve syncobjs from userspace to fences and attach them to job.
- *
- * Note that this function doesn't need to unreference the fences on
- * failure, because that will happen at panfrost_job_cleanup() time.
- */
-static int
-panfrost_copy_in_sync(struct drm_device *dev,
-		  struct drm_file *file_priv,
-		  struct drm_panfrost_submit *args,
-		  struct panfrost_job *job)
-{
-	u32 *handles;
-	int ret = 0;
-	int i, in_fence_count;
-
-	in_fence_count = args->in_sync_count;
-
-	if (!in_fence_count)
-		return 0;
-
-	handles = kvmalloc_array(in_fence_count, sizeof(u32), GFP_KERNEL);
-	if (!handles) {
-		ret = -ENOMEM;
-		DRM_DEBUG("Failed to allocate incoming syncobj handles\n");
-		goto fail;
-	}
-
-	if (copy_from_user(handles,
-			   (void __user *)(uintptr_t)args->in_syncs,
-			   in_fence_count * sizeof(u32))) {
-		ret = -EFAULT;
-		DRM_DEBUG("Failed to copy in syncobj handles\n");
-		goto fail;
-	}
-
-	for (i = 0; i < in_fence_count; i++) {
-		struct dma_fence *fence;
-
-		ret = drm_syncobj_find_fence(file_priv, handles[i], 0, 0,
-					     &fence);
-		if (ret)
-			goto fail;
-
-		ret = drm_gem_fence_array_add(&job->deps, fence);
-
-		if (ret)
-			goto fail;
-	}
-
-fail:
-	kvfree(handles);
-	return ret;
-}
-
-static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
-		struct drm_file *file)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-	struct drm_panfrost_submit *args = data;
-	struct drm_syncobj *sync_out = NULL;
-	struct panfrost_submitqueue *queue;
-	struct panfrost_job *job;
-	int ret = 0;
-
-	if (!args->jc)
-		return -EINVAL;
-
-	if (args->requirements && args->requirements != PANFROST_JD_REQ_FS)
-		return -EINVAL;
-
-	queue = panfrost_submitqueue_get(file->driver_priv, 0);
-	if (IS_ERR(queue))
-		return PTR_ERR(queue);
-
-	if (args->out_sync > 0) {
-		sync_out = drm_syncobj_find(file, args->out_sync);
-		if (!sync_out) {
-			ret = -ENODEV;
-			goto fail_put_queue;
-		}
-	}
-
-	job = kzalloc(sizeof(*job), GFP_KERNEL);
-	if (!job) {
-		ret = -ENOMEM;
-		goto fail_out_sync;
-	}
-
-	kref_init(&job->refcount);
-
-	xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
-
-	job->pfdev = pfdev;
-	job->jc = args->jc;
-	job->requirements = args->requirements;
-	job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
-	job->file_priv = file->driver_priv;
-
-	ret = panfrost_copy_in_sync(dev, file, args, job);
-	if (ret)
-		goto fail_job;
-
-	ret = panfrost_lookup_bos(dev, file, args, job);
-	if (ret)
-		goto fail_job;
-
-	ret = panfrost_job_push(queue, job);
-	if (ret)
-		goto fail_job;
-
-	/* Update the return sync object for the job */
-	if (sync_out)
-		drm_syncobj_replace_fence(sync_out, job->render_done_fence);
-
-fail_job:
-	panfrost_job_put(job);
-fail_out_sync:
-	if (sync_out)
-		drm_syncobj_put(sync_out);
-fail_put_queue:
-	panfrost_submitqueue_put(queue);
-
-	return ret;
-}
-
 static int
 panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
 		       struct drm_file *file_priv)
@@ -491,6 +313,389 @@ panfrost_ioctl_destroy_submitqueue(struct drm_device *dev, void *data,
 	return panfrost_submitqueue_destroy(priv, id);
 }
 
+#define PANFROST_BO_REF_ALLOWED_FLAGS \
+	(PANFROST_BO_REF_EXCLUSIVE | PANFROST_BO_REF_NO_IMPLICIT_DEP)
+
+static int
+panfrost_get_job_bos(struct drm_file *file_priv,
+		     u64 refs, u32 ref_stride, u32 count,
+		     struct panfrost_job *job)
+{
+	void __user *in = u64_to_user_ptr(refs);
+	unsigned int i;
+
+	job->bo_count = count;
+
+	if (!count)
+		return 0;
+
+	job->bos = kvmalloc_array(job->bo_count, sizeof(*job->bos),
+				  GFP_KERNEL | __GFP_ZERO);
+	job->bo_flags = kvmalloc_array(job->bo_count,
+				       sizeof(*job->bo_flags),
+				       GFP_KERNEL | __GFP_ZERO);
+	if (!job->bos || !job->bo_flags)
+		return -ENOMEM;
+
+	for (i = 0; i < count; i++) {
+		struct drm_panfrost_bo_ref ref = { };
+		int ret;
+
+		ret = copy_struct_from_user(&ref, sizeof(ref),
+					    in + (i * ref_stride),
+					    ref_stride);
+		if (ret)
+			return ret;
+
+		/* Prior to the BATCH_SUBMIT ioctl all accessed BOs were
+		 * treated as exclusive.
+		 */
+		if (ref_stride == sizeof(u32))
+			ref.flags = PANFROST_BO_REF_EXCLUSIVE;
+
+		if ((ref.flags & ~PANFROST_BO_REF_ALLOWED_FLAGS))
+			return -EINVAL;
+
+		job->bos[i] = drm_gem_object_lookup(file_priv, ref.handle);
+		if (!job->bos[i])
+			return -EINVAL;
+
+		job->bo_flags[i] = ref.flags;
+	}
+
+	return 0;
+}
+
+static int
+panfrost_get_job_in_syncs(struct drm_file *file_priv,
+			  u64 refs, u32 ref_stride,
+			  u32 count, struct panfrost_job *job)
+{
+	const void __user *in = u64_to_user_ptr(refs);
+	unsigned int i;
+	int ret;
+
+	if (!count)
+		return 0;
+
+	for (i = 0; i < count; i++) {
+		struct drm_panfrost_syncobj_ref ref = { };
+		struct dma_fence *fence;
+
+		ret = copy_struct_from_user(&ref, sizeof(ref),
+					    in + (i * ref_stride),
+					    ref_stride);
+		if (ret)
+			return ret;
+
+		if (ref.pad)
+			return -EINVAL;
+
+		ret = drm_syncobj_find_fence(file_priv, ref.handle, ref.point,
+					     0, &fence);
+		if (ret)
+			return ret;
+
+		ret = drm_gem_fence_array_add(&job->deps, fence);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+struct panfrost_job_out_sync {
+	struct drm_syncobj *syncobj;
+	struct dma_fence_chain *chain;
+	u64 point;
+};
+
+static void
+panfrost_put_job_out_syncs(struct panfrost_job_out_sync *out_syncs, u32 count)
+{
+	unsigned int i;
+
+	for (i = 0; i < count; i++) {
+		if (!out_syncs[i].syncobj)
+			break;
+
+		drm_syncobj_put(out_syncs[i].syncobj);
+		kvfree(out_syncs[i].chain);
+	}
+
+	kvfree(out_syncs);
+}
+
+static struct panfrost_job_out_sync *
+panfrost_get_job_out_syncs(struct drm_file *file_priv,
+			   u64 refs, u32 ref_stride,
+			   u32 count)
+{
+	void __user *in = u64_to_user_ptr(refs);
+	struct panfrost_job_out_sync *out_syncs;
+	unsigned int i;
+	int ret;
+
+	if (!count)
+		return NULL;
+
+	/* If the syncobj ref_stride == sizeof(u32) we are called from the
+	 * old submit ioctl() which only accepted one out syncobj. In that
+	 * case the syncobj handle is passed directly through the
+	 * ->out_syncs field, so let's make sure the refs value fits in a u32.
+	 */
+	if (ref_stride == sizeof(u32) &&
+	    (count != 1 || refs > UINT_MAX))
+		return ERR_PTR(-EINVAL);
+
+	out_syncs = kvmalloc_array(count, sizeof(*out_syncs),
+				   GFP_KERNEL | __GFP_ZERO);
+	if (!out_syncs)
+		return ERR_PTR(-ENOMEM);
+
+	for (i = 0; i < count; i++) {
+		struct drm_panfrost_syncobj_ref ref = { };
+
+		if (ref_stride == sizeof(u32)) {
+			/* Special case for the old submit wrapper: in that
+			 * case there's only one out_sync, and the syncobj
+			 * handle is passed directly in the out_syncs field.
+			 */
+			ref.handle = refs;
+		} else {
+			ret = copy_struct_from_user(&ref, sizeof(ref),
+						    in + (i * ref_stride),
+						    ref_stride);
+			if (ret)
+				goto err_free_out_syncs;
+		}
+
+		if (ref.pad) {
+			ret = -EINVAL;
+			goto err_free_out_syncs;
+		}
+
+		out_syncs[i].syncobj = drm_syncobj_find(file_priv, ref.handle);
+		if (!out_syncs[i].syncobj) {
+			ret = -ENODEV;
+			goto err_free_out_syncs;
+		}
+
+		out_syncs[i].point = ref.point;
+		if (!out_syncs[i].point)
+			continue;
+
+		out_syncs[i].chain = kmalloc(sizeof(*out_syncs[i].chain),
+					     GFP_KERNEL);
+		if (!out_syncs[i].chain) {
+			ret = -ENOMEM;
+			goto err_free_out_syncs;
+		}
+	}
+
+	return out_syncs;
+
+err_free_out_syncs:
+	panfrost_put_job_out_syncs(out_syncs, count);
+	return ERR_PTR(ret);
+}
+
+static void
+panfrost_set_job_out_fence(struct panfrost_job_out_sync *out_syncs,
+			   unsigned int count, struct dma_fence *fence)
+{
+	unsigned int i;
+
+	for (i = 0; i < count; i++) {
+		if (out_syncs[i].chain) {
+			drm_syncobj_add_point(out_syncs[i].syncobj,
+					      out_syncs[i].chain,
+					      fence, out_syncs[i].point);
+			out_syncs[i].chain = NULL;
+		} else {
+			drm_syncobj_replace_fence(out_syncs[i].syncobj,
+						  fence);
+		}
+	}
+}
+
+struct panfrost_submit_ioctl_version_info {
+	u32 job_stride;
+	u32 bo_ref_stride;
+	u32 syncobj_ref_stride;
+};
+
+static const struct panfrost_submit_ioctl_version_info submit_versions[] = {
+	/* SUBMIT */
+	[0] = { 0, 4, 4 },
+
+	/* BATCH_SUBMIT v1 */
+	[1] = { 48, 8, 16 },
+};
+
+#define PANFROST_JD_ALLOWED_REQS PANFROST_JD_REQ_FS
+
+static int
+panfrost_submit_job(struct drm_device *dev, struct drm_file *file_priv,
+		    struct panfrost_submitqueue *queue,
+		    const struct drm_panfrost_job *args,
+		    u32 version)
+{
+	struct panfrost_device *pfdev = dev->dev_private;
+	struct panfrost_job_out_sync *out_syncs;
+	u32 bo_stride, syncobj_stride;
+	struct panfrost_job *job;
+	int ret;
+
+	if (!args->head)
+		return -EINVAL;
+
+	if (args->requirements & ~PANFROST_JD_ALLOWED_REQS)
+		return -EINVAL;
+
+	bo_stride = submit_versions[version].bo_ref_stride;
+	syncobj_stride = submit_versions[version].syncobj_ref_stride;
+
+	job = kzalloc(sizeof(*job), GFP_KERNEL);
+	if (!job)
+		return -ENOMEM;
+
+	kref_init(&job->refcount);
+
+	job->pfdev = pfdev;
+	job->jc = args->head;
+	job->requirements = args->requirements;
+	job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
+	job->file_priv = file_priv->driver_priv;
+	xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
+
+	ret = panfrost_get_job_in_syncs(file_priv,
+					args->in_syncs,
+					syncobj_stride,
+					args->in_sync_count,
+					job);
+	if (ret)
+		goto err_put_job;
+
+	out_syncs = panfrost_get_job_out_syncs(file_priv,
+					       args->out_syncs,
+					       syncobj_stride,
+					       args->out_sync_count);
+	if (IS_ERR(out_syncs)) {
+		ret = PTR_ERR(out_syncs);
+		goto err_put_job;
+	}
+
+	ret = panfrost_get_job_bos(file_priv, args->bos, bo_stride,
+				   args->bo_count, job);
+	if (ret)
+		goto err_put_job;
+
+	ret = panfrost_get_job_mappings(file_priv, job);
+	if (ret)
+		goto err_put_job;
+
+	ret = panfrost_job_push(queue, job);
+	if (ret) {
+		panfrost_put_job_out_syncs(out_syncs, args->out_sync_count);
+		goto err_put_job;
+	}
+
+	panfrost_set_job_out_fence(out_syncs, args->out_sync_count,
+				   job->render_done_fence);
+	panfrost_put_job_out_syncs(out_syncs, args->out_sync_count);
+	return 0;
+
+err_put_job:
+	panfrost_job_put(job);
+	return ret;
+}
+
+static int
+panfrost_ioctl_submit(struct drm_device *dev, void *data,
+		      struct drm_file *file)
+{
+	struct drm_panfrost_submit *args = data;
+	struct drm_panfrost_job job_args = {
+		.head = args->jc,
+		.bos = args->bo_handles,
+		.in_syncs = args->in_syncs,
+
+		/* We are abusing .out_syncs and passing the handle directly
+		 * instead of a pointer to a user u32 array, but
+		 * panfrost_submit_job() knows about it, so it's fine.
+		 */
+		.out_syncs = args->out_sync,
+		.in_sync_count = args->in_sync_count,
+		.out_sync_count = args->out_sync > 0 ? 1 : 0,
+		.bo_count = args->bo_handle_count,
+		.requirements = args->requirements
+	};
+	struct panfrost_submitqueue *queue;
+	int ret;
+
+	queue = panfrost_submitqueue_get(file->driver_priv, 0);
+	if (IS_ERR(queue))
+		return PTR_ERR(queue);
+
+	ret = panfrost_submit_job(dev, file, queue, &job_args, 0);
+	panfrost_submitqueue_put(queue);
+
+	return ret;
+}
+
+static int
+panfrost_ioctl_batch_submit(struct drm_device *dev, void *data,
+			    struct drm_file *file_priv)
+{
+	struct drm_panfrost_batch_submit *args = data;
+	void __user *jobs_args = u64_to_user_ptr(args->jobs);
+	struct panfrost_submitqueue *queue;
+	u32 version = args->version;
+	u32 job_stride;
+	unsigned int i;
+	int ret = 0;
+
+	/* Version 0 doesn't exist (it's reserved for the SUBMIT ioctl) */
+	if (!version)
+		return -EINVAL;
+
+	/* If the version specified is bigger than what we currently support,
+	 * pick the last supported version and let copy_struct_from_user()
+	 * check that any extra job, bo_ref and syncobj_ref fields are zeroed.
+	 */
+	if (version >= ARRAY_SIZE(submit_versions))
+		version = ARRAY_SIZE(submit_versions) - 1;
+
+	queue = panfrost_submitqueue_get(file_priv->driver_priv, args->queue);
+	if (IS_ERR(queue))
+		return PTR_ERR(queue);
+
+	job_stride = submit_versions[version].job_stride;
+	for (i = 0; i < args->job_count; i++) {
+		struct drm_panfrost_job job_args = { };
+
+		ret = copy_struct_from_user(&job_args, sizeof(job_args),
+					    jobs_args + (i * job_stride),
+					    job_stride);
+		if (ret) {
+			args->fail_idx = i;
+			goto out_put_queue;
+		}
+
+		ret = panfrost_submit_job(dev, file_priv, queue, &job_args,
+					  version);
+		if (ret) {
+			args->fail_idx = i;
+			goto out_put_queue;
+		}
+	}
+
+out_put_queue:
+	panfrost_submitqueue_put(queue);
+	return ret;
+}
+
 int panfrost_unstable_ioctl_check(void)
 {
 	if (!unstable_ioctls)
@@ -572,6 +777,7 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
 	PANFROST_IOCTL(MADVISE,		madvise,	DRM_RENDER_ALLOW),
 	PANFROST_IOCTL(CREATE_SUBMITQUEUE, create_submitqueue, DRM_RENDER_ALLOW),
 	PANFROST_IOCTL(DESTROY_SUBMITQUEUE, destroy_submitqueue, DRM_RENDER_ALLOW),
+	PANFROST_IOCTL(BATCH_SUBMIT,	batch_submit,	DRM_RENDER_ALLOW),
 };
 
 DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 56ae89272e19..4e1540bce865 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -254,6 +254,9 @@ static int panfrost_acquire_object_fences(struct panfrost_job *job)
 				return ret;
 		}
 
+		if (job->bo_flags[i] & PANFROST_BO_REF_NO_IMPLICIT_DEP)
+			continue;
+
 		ret = drm_gem_fence_array_add_implicit(&job->deps, job->bos[i],
 						       exclusive);
 		if (ret)
diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
index e31a22c176d9..5d534e61c28e 100644
--- a/include/uapi/drm/panfrost_drm.h
+++ b/include/uapi/drm/panfrost_drm.h
@@ -23,6 +23,7 @@ extern "C" {
 #define DRM_PANFROST_MADVISE			0x08
 #define DRM_PANFROST_CREATE_SUBMITQUEUE		0x09
 #define DRM_PANFROST_DESTROY_SUBMITQUEUE	0x0a
+#define DRM_PANFROST_BATCH_SUBMIT		0x0b
 
 #define DRM_IOCTL_PANFROST_SUBMIT		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_SUBMIT, struct drm_panfrost_submit)
 #define DRM_IOCTL_PANFROST_WAIT_BO		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_WAIT_BO, struct drm_panfrost_wait_bo)
@@ -33,6 +34,7 @@ extern "C" {
 #define DRM_IOCTL_PANFROST_MADVISE		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_MADVISE, struct drm_panfrost_madvise)
 #define DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_CREATE_SUBMITQUEUE, struct drm_panfrost_create_submitqueue)
 #define DRM_IOCTL_PANFROST_DESTROY_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_DESTROY_SUBMITQUEUE, __u32)
+#define DRM_IOCTL_PANFROST_BATCH_SUBMIT		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_BATCH_SUBMIT, struct drm_panfrost_batch_submit)
 
 /*
  * Unstable ioctl(s): only exposed when the unsafe unstable_ioctls module
@@ -241,9 +243,99 @@ struct drm_panfrost_create_submitqueue {
 	__u32 id;	/* out, identifier */
 };
 
+/* Syncobj reference passed at job submission time to encode explicit
+ * input/output fences.
+ */
+struct drm_panfrost_syncobj_ref {
+	/** Syncobj handle */
+	__u32 handle;
+
+	/** Padding field, must be set to 0 */
+	__u32 pad;
+
+	/**
+	 * For timeline syncobjs, the point on the timeline the reference
+	 * points to. 0 for the last point.
+	 * Must be set to 0 for non-timeline syncobjs
+	 */
+	__u64 point;
+};
+
 /* Exclusive (AKA write) access to the BO */
 #define PANFROST_BO_REF_EXCLUSIVE	0x1
 
+/* Disable the implicit dependency on the BO fence */
+#define PANFROST_BO_REF_NO_IMPLICIT_DEP	0x2
+
+/* Describes a BO referenced by a job and the type of access. */
+struct drm_panfrost_bo_ref {
+	/** A GEM handle */
+	__u32 handle;
+
+	/** A combination of PANFROST_BO_REF_x flags */
+	__u32 flags;
+};
+
+/* Describes a GPU job and the resources attached to it. */
+struct drm_panfrost_job {
+	/** GPU pointer to the head of the job chain. */
+	__u64 head;
+
+	/**
+	 * Array of drm_panfrost_bo_ref objects describing the BOs referenced
+	 * by this job.
+	 */
+	__u64 bos;
+
+	/**
+	 * Arrays of drm_panfrost_syncobj_ref objects describing the input
+	 * and output fences.
+	 */
+	__u64 in_syncs;
+	__u64 out_syncs;
+
+	/** Syncobj reference array sizes. */
+	__u32 in_sync_count;
+	__u32 out_sync_count;
+
+	/** BO reference array size. */
+	__u32 bo_count;
+
+	/** Combination of PANFROST_JD_REQ_* flags. */
+	__u32 requirements;
+};
+
+#define PANFROST_SUBMIT_BATCH_VERSION	1
+
+/* Used to submit multiple jobs in one call */
+struct drm_panfrost_batch_submit {
+	/**
+	 * Always set to PANFROST_SUBMIT_BATCH_VERSION. This is used to let the
+	 * kernel know about the size of the various structs passed to the
+	 * BATCH_SUBMIT ioctl.
+	 */
+	__u32 version;
+
+	/** Number of jobs to submit. */
+	__u32 job_count;
+
+	/* Pointer to a job array. */
+	__u64 jobs;
+
+	/**
+	 * ID of the queue to submit those jobs to. 0 is the default
+	 * submit queue and always exists. If you need a dedicated
+	 * queue, create it with DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE.
+	 */
+	__u32 queue;
+
+	/**
+	 * If the submission fails, this encodes the index of the job
+	 * that failed.
+	 */
+	__u32 fail_idx;
+};
+
 #if defined(__cplusplus)
 }
 #endif
-- 
2.31.1


* [PATCH v4 6/7] drm/panfrost: Advertise the SYNCOBJ_TIMELINE feature
  2021-07-05  8:29 [PATCH v4 0/7] drm/panfrost: Add a new submit ioctl Boris Brezillon
                   ` (4 preceding siblings ...)
  2021-07-05  8:29 ` [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches Boris Brezillon
@ 2021-07-05  8:29 ` Boris Brezillon
  2021-07-05  8:29 ` [PATCH v4 7/7] drm/panfrost: Bump minor version to reflect the feature additions Boris Brezillon
  6 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2021-07-05  8:29 UTC (permalink / raw)
  To: Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig, Steven Price, Robin Murphy
  Cc: Jason Ekstrand, dri-devel, Boris Brezillon

Now that we have a new SUBMIT ioctl dealing with timeline syncobjs, we
can advertise the feature.
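
Userspace can probe this before relying on timeline points in the new
submit ioctl, e.g. (sketch using the generic DRM capability query):

#include <stdint.h>
#include <xf86drm.h>

static int has_timeline_syncobjs(int fd)
{
	uint64_t cap = 0;

	if (drmGetCap(fd, DRM_CAP_SYNCOBJ_TIMELINE, &cap))
		return 0;

	return cap != 0;
}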

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index a624e4f86aff..fae62142c878 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -789,7 +789,8 @@ DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
  * - 1.2 - adds AFBC_FEATURES query
  */
 static const struct drm_driver panfrost_drm_driver = {
-	.driver_features	= DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ,
+	.driver_features	= DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
+				  DRIVER_SYNCOBJ_TIMELINE,
 	.open			= panfrost_open,
 	.postclose		= panfrost_postclose,
 	.ioctls			= panfrost_drm_driver_ioctls,
-- 
2.31.1


* [PATCH v4 7/7] drm/panfrost: Bump minor version to reflect the feature additions
  2021-07-05  8:29 [PATCH v4 0/7] drm/panfrost: Add a new submit ioctl Boris Brezillon
                   ` (5 preceding siblings ...)
  2021-07-05  8:29 ` [PATCH v4 6/7] drm/panfrost: Advertise the SYNCOBJ_TIMELINE feature Boris Brezillon
@ 2021-07-05  8:29 ` Boris Brezillon
  6 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2021-07-05  8:29 UTC (permalink / raw)
  To: Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig, Steven Price, Robin Murphy
  Cc: Jason Ekstrand, dri-devel, Boris Brezillon

We now have a new ioctl that allows submitting multiple jobs at once
(among other things) and we support timeline syncobjs. Bump the
minor version number to reflect those changes.
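
Userspace can then key the new submit path off the bumped minor, e.g.
(sketch):

#include <xf86drm.h>

static int supports_batch_submit(int fd)
{
	drmVersionPtr v = drmGetVersion(fd);
	int ok;

	if (!v)
		return 0;

	ok = v->version_major == 1 && v->version_minor >= 3;
	drmFreeVersion(v);
	return ok;
}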

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index fae62142c878..27e2e8b36ec9 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -787,6 +787,8 @@ DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
  * - 1.0 - initial interface
  * - 1.1 - adds HEAP and NOEXEC flags for CREATE_BO
  * - 1.2 - adds AFBC_FEATURES query
+ * - 1.3 - adds the BATCH_SUBMIT, CREATE_SUBMITQUEUE, DESTROY_SUBMITQUEUE
+ *	   ioctls and advertises the SYNCOBJ_TIMELINE feature
  */
 static const struct drm_driver panfrost_drm_driver = {
 	.driver_features	= DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
@@ -800,7 +802,7 @@ static const struct drm_driver panfrost_drm_driver = {
 	.desc			= "panfrost DRM",
 	.date			= "20180908",
 	.major			= 1,
-	.minor			= 2,
+	.minor			= 3,
 
 	.gem_create_object	= panfrost_gem_create_object,
 	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd,
-- 
2.31.1


* Re: [PATCH v4 4/7] drm/panfrost: Add the ability to create submit queues
  2021-07-05  8:29 ` [PATCH v4 4/7] drm/panfrost: Add the ability to create submit queues Boris Brezillon
@ 2021-07-05  8:56   ` Steven Price
  0 siblings, 0 replies; 14+ messages in thread
From: Steven Price @ 2021-07-05  8:56 UTC (permalink / raw)
  To: Boris Brezillon, Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig,
	Robin Murphy
  Cc: Jason Ekstrand, dri-devel

On 05/07/2021 09:29, Boris Brezillon wrote:
> Needed to keep VkQueues isolated from each other.
> 
> v4:
> * Make panfrost_ioctl_create_submitqueue() return the queue ID
>   instead of a queue object
> 
> v3:
> * Limit the number of submitqueue per context to 16
> * Fix a deadlock
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>

Reviewed-by: Steven Price <steven.price@arm.com>

> ---
>  drivers/gpu/drm/panfrost/Makefile             |   3 +-
>  drivers/gpu/drm/panfrost/panfrost_device.h    |   2 +-
>  drivers/gpu/drm/panfrost/panfrost_drv.c       |  69 ++++++++-
>  drivers/gpu/drm/panfrost/panfrost_job.c       |  47 ++-----
>  drivers/gpu/drm/panfrost/panfrost_job.h       |   9 +-
>  .../gpu/drm/panfrost/panfrost_submitqueue.c   | 132 ++++++++++++++++++
>  .../gpu/drm/panfrost/panfrost_submitqueue.h   |  26 ++++
>  include/uapi/drm/panfrost_drm.h               |  17 +++
>  8 files changed, 260 insertions(+), 45 deletions(-)
>  create mode 100644 drivers/gpu/drm/panfrost/panfrost_submitqueue.c
>  create mode 100644 drivers/gpu/drm/panfrost/panfrost_submitqueue.h
> 
> diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
> index b71935862417..e99192b66ec9 100644
> --- a/drivers/gpu/drm/panfrost/Makefile
> +++ b/drivers/gpu/drm/panfrost/Makefile
> @@ -9,6 +9,7 @@ panfrost-y := \
>  	panfrost_gpu.o \
>  	panfrost_job.o \
>  	panfrost_mmu.o \
> -	panfrost_perfcnt.o
> +	panfrost_perfcnt.o \
> +	panfrost_submitqueue.o
>  
>  obj-$(CONFIG_DRM_PANFROST) += panfrost.o
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
> index 8b25278f34c8..51c0ba4e50f5 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
> @@ -137,7 +137,7 @@ struct panfrost_mmu {
>  struct panfrost_file_priv {
>  	struct panfrost_device *pfdev;
>  
> -	struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
> +	struct idr queues;
>  
>  	struct panfrost_mmu *mmu;
>  };
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index b6b5997c9366..8e28ef30310b 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -19,6 +19,7 @@
>  #include "panfrost_job.h"
>  #include "panfrost_gpu.h"
>  #include "panfrost_perfcnt.h"
> +#include "panfrost_submitqueue.h"
>  
>  static bool unstable_ioctls;
>  module_param_unsafe(unstable_ioctls, bool, 0600);
> @@ -250,6 +251,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>  	struct panfrost_device *pfdev = dev->dev_private;
>  	struct drm_panfrost_submit *args = data;
>  	struct drm_syncobj *sync_out = NULL;
> +	struct panfrost_submitqueue *queue;
>  	struct panfrost_job *job;
>  	int ret = 0;
>  
> @@ -259,10 +261,16 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>  	if (args->requirements && args->requirements != PANFROST_JD_REQ_FS)
>  		return -EINVAL;
>  
> +	queue = panfrost_submitqueue_get(file->driver_priv, 0);
> +	if (IS_ERR(queue))
> +		return PTR_ERR(queue);
> +
>  	if (args->out_sync > 0) {
>  		sync_out = drm_syncobj_find(file, args->out_sync);
> -		if (!sync_out)
> -			return -ENODEV;
> +		if (!sync_out) {
> +			ret = -ENODEV;
> +			goto fail_put_queue;
> +		}
>  	}
>  
>  	job = kzalloc(sizeof(*job), GFP_KERNEL);
> @@ -289,7 +297,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>  	if (ret)
>  		goto fail_job;
>  
> -	ret = panfrost_job_push(job);
> +	ret = panfrost_job_push(queue, job);
>  	if (ret)
>  		goto fail_job;
>  
> @@ -302,6 +310,8 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>  fail_out_sync:
>  	if (sync_out)
>  		drm_syncobj_put(sync_out);
> +fail_put_queue:
> +	panfrost_submitqueue_put(queue);
>  
>  	return ret;
>  }
> @@ -451,6 +461,36 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>  	return ret;
>  }
>  
> +static int
> +panfrost_ioctl_create_submitqueue(struct drm_device *dev, void *data,
> +				  struct drm_file *file_priv)
> +{
> +	struct panfrost_file_priv *priv = file_priv->driver_priv;
> +	struct drm_panfrost_create_submitqueue *args = data;
> +	int ret;
> +
> +	ret = panfrost_submitqueue_create(priv, args->priority, args->flags);
> +	if (ret < 0)
> +		return ret;
> +
> +	args->id = ret;
> +	return 0;
> +}
> +
> +static int
> +panfrost_ioctl_destroy_submitqueue(struct drm_device *dev, void *data,
> +				   struct drm_file *file_priv)
> +{
> +	struct panfrost_file_priv *priv = file_priv->driver_priv;
> +	u32 id = *((u32 *)data);
> +
> +	/* Default queue can't be destroyed. */
> +	if (!id)
> +		return -ENOENT;
> +
> +	return panfrost_submitqueue_destroy(priv, id);
> +}
> +
>  int panfrost_unstable_ioctl_check(void)
>  {
>  	if (!unstable_ioctls)
> @@ -479,13 +519,22 @@ panfrost_open(struct drm_device *dev, struct drm_file *file)
>  		goto err_free;
>  	}
>  
> -	ret = panfrost_job_open(panfrost_priv);
> +	idr_init(&panfrost_priv->queues);
> +
> +	ret = panfrost_submitqueue_create(panfrost_priv,
> +					  PANFROST_SUBMITQUEUE_PRIORITY_MEDIUM,
> +					  0);
> +
> +	/* We expect the default queue to get id 0, a positive queue id is
> +	 * considered a failure in that case.
> +	 */
>  	if (ret)
> -		goto err_job;
> +		goto err_destroy_idr;
>  
>  	return 0;
>  
> -err_job:
> +err_destroy_idr:
> +	idr_destroy(&panfrost_priv->queues);
>  	panfrost_mmu_ctx_put(panfrost_priv->mmu);
>  err_free:
>  	kfree(panfrost_priv);
> @@ -496,11 +545,15 @@ static void
>  panfrost_postclose(struct drm_device *dev, struct drm_file *file)
>  {
>  	struct panfrost_file_priv *panfrost_priv = file->driver_priv;
> +	u32 id;
>  
>  	panfrost_perfcnt_close(file);
> -	panfrost_job_close(panfrost_priv);
> +
> +	for (id = 0; idr_get_next(&panfrost_priv->queues, &id); id++)
> +		panfrost_submitqueue_destroy(panfrost_priv, id);
>  
>  	panfrost_mmu_ctx_put(panfrost_priv->mmu);
> +	idr_destroy(&panfrost_priv->queues);
>  	kfree(panfrost_priv);
>  }
>  
> @@ -517,6 +570,8 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
>  	PANFROST_IOCTL(PERFCNT_ENABLE,	perfcnt_enable,	DRM_RENDER_ALLOW),
>  	PANFROST_IOCTL(PERFCNT_DUMP,	perfcnt_dump,	DRM_RENDER_ALLOW),
>  	PANFROST_IOCTL(MADVISE,		madvise,	DRM_RENDER_ALLOW),
> +	PANFROST_IOCTL(CREATE_SUBMITQUEUE, create_submitqueue, DRM_RENDER_ALLOW),
> +	PANFROST_IOCTL(DESTROY_SUBMITQUEUE, destroy_submitqueue, DRM_RENDER_ALLOW),
>  };
>  
>  DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 152245b122be..56ae89272e19 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -20,6 +20,7 @@
>  #include "panfrost_regs.h"
>  #include "panfrost_gpu.h"
>  #include "panfrost_mmu.h"
> +#include "panfrost_submitqueue.h"
>  
>  #define JOB_TIMEOUT_MS 500
>  
> @@ -276,15 +277,15 @@ static void panfrost_attach_object_fences(struct panfrost_job *job)
>  	}
>  }
>  
> -int panfrost_job_push(struct panfrost_job *job)
> +int panfrost_job_push(struct panfrost_submitqueue *queue,
> +		      struct panfrost_job *job)
>  {
>  	struct panfrost_device *pfdev = job->pfdev;
>  	int slot = panfrost_job_get_slot(job);
> -	struct drm_sched_entity *entity = &job->file_priv->sched_entity[slot];
> +	struct drm_sched_entity *entity = &queue->sched_entity[slot];
>  	struct ww_acquire_ctx acquire_ctx;
>  	int ret = 0;
>  
> -
>  	ret = drm_gem_lock_reservations(job->bos, job->bo_count,
>  					    &acquire_ctx);
>  	if (ret)
> @@ -881,43 +882,18 @@ void panfrost_job_fini(struct panfrost_device *pfdev)
>  	destroy_workqueue(pfdev->reset.wq);
>  }
>  
> -int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
> +void panfrost_job_kill_queue(struct panfrost_submitqueue *queue)
>  {
> -	struct panfrost_device *pfdev = panfrost_priv->pfdev;
> -	struct panfrost_job_slot *js = pfdev->js;
> -	struct drm_gpu_scheduler *sched;
> -	int ret, i;
> +	struct panfrost_device *pfdev = queue->pfdev;
> +	int i, j;
>  
> -	for (i = 0; i < NUM_JOB_SLOTS; i++) {
> -		sched = &js->queue[i].sched;
> -		ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i],
> -					    DRM_SCHED_PRIORITY_NORMAL, &sched,
> -					    1, NULL);
> -		if (WARN_ON(ret))
> -			return ret;
> -	}
> -	return 0;
> -}
> -
> -void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
> -{
> -	struct panfrost_device *pfdev = panfrost_priv->pfdev;
> -	int i;
> -
> -	for (i = 0; i < NUM_JOB_SLOTS; i++)
> -		drm_sched_entity_destroy(&panfrost_priv->sched_entity[i]);
> -
> -	/* Kill in-flight jobs */
>  	spin_lock(&pfdev->js->job_lock);
>  	for (i = 0; i < NUM_JOB_SLOTS; i++) {
> -		struct drm_sched_entity *entity = &panfrost_priv->sched_entity[i];
> -		int j;
> -
>  		for (j = ARRAY_SIZE(pfdev->jobs[0]) - 1; j >= 0; j--) {
>  			struct panfrost_job *job = pfdev->jobs[i][j];
>  			u32 cmd;
>  
> -			if (!job || job->base.entity != entity)
> +			if (!job || job->base.entity != &queue->sched_entity[i])
>  				continue;
>  
>  			if (j == 1) {
> @@ -936,7 +912,6 @@ void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
>  			} else {
>  				cmd = JS_COMMAND_HARD_STOP;
>  			}
> -
>  			job_write(pfdev, JS_COMMAND(i), cmd);
>  		}
>  	}
> @@ -956,3 +931,9 @@ int panfrost_job_is_idle(struct panfrost_device *pfdev)
>  
>  	return true;
>  }
> +
> +struct drm_gpu_scheduler *
> +panfrost_job_get_sched(struct panfrost_device *pfdev, unsigned int js)
> +{
> +	return &pfdev->js->queue[js].sched;
> +}
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.h b/drivers/gpu/drm/panfrost/panfrost_job.h
> index 1cbc3621b663..5c228bb431c0 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.h
> @@ -10,6 +10,7 @@
>  struct panfrost_device;
>  struct panfrost_gem_object;
>  struct panfrost_file_priv;
> +struct panfrost_submitqueue;
>  
>  struct panfrost_job {
>  	struct drm_sched_job base;
> @@ -41,11 +42,13 @@ struct panfrost_job {
>  
>  int panfrost_job_init(struct panfrost_device *pfdev);
>  void panfrost_job_fini(struct panfrost_device *pfdev);
> -int panfrost_job_open(struct panfrost_file_priv *panfrost_priv);
> -void panfrost_job_close(struct panfrost_file_priv *panfrost_priv);
> -int panfrost_job_push(struct panfrost_job *job);
> +int panfrost_job_push(struct panfrost_submitqueue *queue,
> +		      struct panfrost_job *job);
>  void panfrost_job_put(struct panfrost_job *job);
>  void panfrost_job_enable_interrupts(struct panfrost_device *pfdev);
>  int panfrost_job_is_idle(struct panfrost_device *pfdev);
> +void panfrost_job_kill_queue(struct panfrost_submitqueue *queue);
> +struct drm_gpu_scheduler *
> +panfrost_job_get_sched(struct panfrost_device *pfdev, unsigned int js);
>  
>  #endif
> diff --git a/drivers/gpu/drm/panfrost/panfrost_submitqueue.c b/drivers/gpu/drm/panfrost/panfrost_submitqueue.c
> new file mode 100644
> index 000000000000..8944b4410be3
> --- /dev/null
> +++ b/drivers/gpu/drm/panfrost/panfrost_submitqueue.c
> @@ -0,0 +1,132 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright 2021 Collabora ltd. */
> +
> +#include <linux/idr.h>
> +
> +#include "panfrost_device.h"
> +#include "panfrost_job.h"
> +#include "panfrost_submitqueue.h"
> +
> +#define PAN_MAX_SUBMITQUEUES	16
> +
> +static enum drm_sched_priority
> +to_sched_prio(enum panfrost_submitqueue_priority priority)
> +{
> +	switch (priority) {
> +	case PANFROST_SUBMITQUEUE_PRIORITY_LOW:
> +		return DRM_SCHED_PRIORITY_MIN;
> +	case PANFROST_SUBMITQUEUE_PRIORITY_MEDIUM:
> +		return DRM_SCHED_PRIORITY_NORMAL;
> +	case PANFROST_SUBMITQUEUE_PRIORITY_HIGH:
> +		return DRM_SCHED_PRIORITY_HIGH;
> +	default:
> +		break;
> +	}
> +
> +	return DRM_SCHED_PRIORITY_UNSET;
> +}
> +
> +static void
> +panfrost_submitqueue_cleanup(struct kref *ref)
> +{
> +	struct panfrost_submitqueue *queue;
> +	unsigned int i;
> +
> +	queue = container_of(ref, struct panfrost_submitqueue, refcount);
> +
> +	for (i = 0; i < NUM_JOB_SLOTS; i++)
> +		drm_sched_entity_destroy(&queue->sched_entity[i]);
> +
> +	/* Kill in-flight jobs */
> +	panfrost_job_kill_queue(queue);
> +
> +	kfree(queue);
> +}
> +
> +void panfrost_submitqueue_put(struct panfrost_submitqueue *queue)
> +{
> +	if (!IS_ERR_OR_NULL(queue))
> +		kref_put(&queue->refcount, panfrost_submitqueue_cleanup);
> +}
> +
> +int
> +panfrost_submitqueue_create(struct panfrost_file_priv *ctx,
> +			    enum panfrost_submitqueue_priority priority,
> +			    u32 flags)
> +{
> +	struct panfrost_submitqueue *queue;
> +	enum drm_sched_priority sched_prio;
> +	int ret, i;
> +
> +	if (flags || priority >= PANFROST_SUBMITQUEUE_PRIORITY_COUNT)
> +		return -EINVAL;
> +
> +	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
> +	if (!queue)
> +		return -ENOMEM;
> +
> +	queue->pfdev = ctx->pfdev;
> +	sched_prio = to_sched_prio(priority);
> +	for (i = 0; i < NUM_JOB_SLOTS; i++) {
> +		struct drm_gpu_scheduler *sched;
> +
> +		sched = panfrost_job_get_sched(ctx->pfdev, i);
> +		ret = drm_sched_entity_init(&queue->sched_entity[i],
> +					    sched_prio, &sched, 1, NULL);
> +		if (ret)
> +			break;
> +	}
> +
> +	if (ret) {
> +		for (i--; i >= 0; i--)
> +			drm_sched_entity_destroy(&queue->sched_entity[i]);
> +
> +		return ret;
> +	}
> +
> +	kref_init(&queue->refcount);
> +
> +	idr_preload(GFP_KERNEL);
> +	idr_lock(&ctx->queues);
> +	ret = idr_alloc(&ctx->queues, queue, 0, PAN_MAX_SUBMITQUEUES,
> +			GFP_NOWAIT);
> +	idr_unlock(&ctx->queues);
> +	idr_preload_end();
> +
> +	if (ret < 0)
> +		panfrost_submitqueue_put(queue);
> +
> +	return ret;
> +}
> +
> +int panfrost_submitqueue_destroy(struct panfrost_file_priv *ctx, u32 id)
> +{
> +	struct panfrost_submitqueue *queue;
> +
> +	idr_lock(&ctx->queues);
> +	queue = idr_remove(&ctx->queues, id);
> +	idr_unlock(&ctx->queues);
> +
> +	if (!queue)
> +		return -ENOENT;
> +
> +	panfrost_submitqueue_put(queue);
> +	return 0;
> +}
> +
> +struct panfrost_submitqueue *
> +panfrost_submitqueue_get(struct panfrost_file_priv *ctx, u32 id)
> +{
> +	struct panfrost_submitqueue *queue;
> +
> +	idr_lock(&ctx->queues);
> +	queue = idr_find(&ctx->queues, id);
> +	if (queue)
> +		kref_get(&queue->refcount);
> +	idr_unlock(&ctx->queues);
> +
> +	if (!queue)
> +		return ERR_PTR(-ENOENT);
> +
> +	return queue;
> +}
> diff --git a/drivers/gpu/drm/panfrost/panfrost_submitqueue.h b/drivers/gpu/drm/panfrost/panfrost_submitqueue.h
> new file mode 100644
> index 000000000000..ade224725844
> --- /dev/null
> +++ b/drivers/gpu/drm/panfrost/panfrost_submitqueue.h
> @@ -0,0 +1,26 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright 2021 Collabora ltd. */
> +
> +#ifndef __PANFROST_SUBMITQUEUE_H__
> +#define __PANFROST_SUBMITQUEUE_H__
> +
> +#include <drm/panfrost_drm.h>
> +
> +struct panfrost_submitqueue {
> +	struct kref refcount;
> +	struct panfrost_device *pfdev;
> +	struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
> +};
> +
> +struct panfrost_file_priv;
> +
> +int
> +panfrost_submitqueue_create(struct panfrost_file_priv *ctx,
> +			    enum panfrost_submitqueue_priority priority,
> +			    u32 flags);
> +int panfrost_submitqueue_destroy(struct panfrost_file_priv *ctx, u32 id);
> +struct panfrost_submitqueue *
> +panfrost_submitqueue_get(struct panfrost_file_priv *ctx, u32 id);
> +void panfrost_submitqueue_put(struct panfrost_submitqueue *queue);
> +
> +#endif
> diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
> index 3723c9d231b5..e31a22c176d9 100644
> --- a/include/uapi/drm/panfrost_drm.h
> +++ b/include/uapi/drm/panfrost_drm.h
> @@ -21,6 +21,8 @@ extern "C" {
>  #define DRM_PANFROST_PERFCNT_ENABLE		0x06
>  #define DRM_PANFROST_PERFCNT_DUMP		0x07
>  #define DRM_PANFROST_MADVISE			0x08
> +#define DRM_PANFROST_CREATE_SUBMITQUEUE		0x09
> +#define DRM_PANFROST_DESTROY_SUBMITQUEUE	0x0a
>  
>  #define DRM_IOCTL_PANFROST_SUBMIT		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_SUBMIT, struct drm_panfrost_submit)
>  #define DRM_IOCTL_PANFROST_WAIT_BO		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_WAIT_BO, struct drm_panfrost_wait_bo)
> @@ -29,6 +31,8 @@ extern "C" {
>  #define DRM_IOCTL_PANFROST_GET_PARAM		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_GET_PARAM, struct drm_panfrost_get_param)
>  #define DRM_IOCTL_PANFROST_GET_BO_OFFSET	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_GET_BO_OFFSET, struct drm_panfrost_get_bo_offset)
>  #define DRM_IOCTL_PANFROST_MADVISE		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_MADVISE, struct drm_panfrost_madvise)
> +#define DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_CREATE_SUBMITQUEUE, struct drm_panfrost_create_submitqueue)
> +#define DRM_IOCTL_PANFROST_DESTROY_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_DESTROY_SUBMITQUEUE, __u32)
>  
>  /*
>   * Unstable ioctl(s): only exposed when the unsafe unstable_ioctls module
> @@ -224,6 +228,19 @@ struct drm_panfrost_madvise {
>  	__u32 retained;       /* out, whether backing store still exists */
>  };
>  
> +enum panfrost_submitqueue_priority {
> +	PANFROST_SUBMITQUEUE_PRIORITY_LOW = 0,
> +	PANFROST_SUBMITQUEUE_PRIORITY_MEDIUM,
> +	PANFROST_SUBMITQUEUE_PRIORITY_HIGH,
> +	PANFROST_SUBMITQUEUE_PRIORITY_COUNT,
> +};
> +
> +struct drm_panfrost_create_submitqueue {
> +	__u32 flags;	/* in, PANFROST_SUBMITQUEUE_x */
> +	__u32 priority;	/* in, enum panfrost_submitqueue_priority */
> +	__u32 id;	/* out, identifier */
> +};
> +
>  /* Exclusive (AKA write) access to the BO */
>  #define PANFROST_BO_REF_EXCLUSIVE	0x1
>  
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches
  2021-07-05  8:29 ` [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches Boris Brezillon
@ 2021-07-05  9:32   ` Daniel Vetter
  2021-07-08 12:10     ` Christian König
  2021-07-05  9:42   ` Steven Price
  1 sibling, 1 reply; 14+ messages in thread
From: Daniel Vetter @ 2021-07-05  9:32 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Jason Ekstrand, Tomeu Vizoso, Christian König, dri-devel,
	Steven Price, Rob Herring, Alyssa Rosenzweig, Robin Murphy

On Mon, Jul 05, 2021 at 10:29:48AM +0200, Boris Brezillon wrote:
> This should help limit the number of ioctls when submitting multiple
> jobs. The new ioctl also supports syncobj timelines and BO access flags.
> 
> v4:
> * Implement panfrost_ioctl_submit() as a wrapper around
>   panfrost_submit_job()
> * Replace stride fields by a version field which is mapped to
>   a <job_stride,bo_ref_stride,syncobj_ref_stride> tuple internally
> 
> v3:
> * Re-use panfrost_get_job_bos() and panfrost_get_job_in_syncs() in the
>   old submit path
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
>  drivers/gpu/drm/panfrost/panfrost_drv.c | 562 ++++++++++++++++--------
>  drivers/gpu/drm/panfrost/panfrost_job.c |   3 +
>  include/uapi/drm/panfrost_drm.h         |  92 ++++
>  3 files changed, 479 insertions(+), 178 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index 8e28ef30310b..a624e4f86aff 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -138,184 +138,6 @@ panfrost_get_job_mappings(struct drm_file *file_priv, struct panfrost_job *job)
>  	return 0;
>  }
>  
> -/**
> - * panfrost_lookup_bos() - Sets up job->bo[] with the GEM objects
> - * referenced by the job.
> - * @dev: DRM device
> - * @file_priv: DRM file for this fd
> - * @args: IOCTL args
> - * @job: job being set up
> - *
> - * Resolve handles from userspace to BOs and attach them to job.
> - *
> - * Note that this function doesn't need to unreference the BOs on
> - * failure, because that will happen at panfrost_job_cleanup() time.
> - */
> -static int
> -panfrost_lookup_bos(struct drm_device *dev,
> -		  struct drm_file *file_priv,
> -		  struct drm_panfrost_submit *args,
> -		  struct panfrost_job *job)
> -{
> -	unsigned int i;
> -	int ret;
> -
> -	job->bo_count = args->bo_handle_count;
> -
> -	if (!job->bo_count)
> -		return 0;
> -
> -	job->bo_flags = kvmalloc_array(job->bo_count,
> -				       sizeof(*job->bo_flags),
> -				       GFP_KERNEL | __GFP_ZERO);
> -	if (!job->bo_flags)
> -		return -ENOMEM;
> -
> -	for (i = 0; i < job->bo_count; i++)
> -		job->bo_flags[i] = PANFROST_BO_REF_EXCLUSIVE;
> -
> -	ret = drm_gem_objects_lookup(file_priv,
> -				     (void __user *)(uintptr_t)args->bo_handles,
> -				     job->bo_count, &job->bos);
> -	if (ret)
> -		return ret;
> -
> -	return panfrost_get_job_mappings(file_priv, job);
> -}
> -
> -/**
> - * panfrost_copy_in_sync() - Sets up job->deps with the sync objects
> - * referenced by the job.
> - * @dev: DRM device
> - * @file_priv: DRM file for this fd
> - * @args: IOCTL args
> - * @job: job being set up
> - *
> - * Resolve syncobjs from userspace to fences and attach them to job.
> - *
> - * Note that this function doesn't need to unreference the fences on
> - * failure, because that will happen at panfrost_job_cleanup() time.
> - */
> -static int
> -panfrost_copy_in_sync(struct drm_device *dev,
> -		  struct drm_file *file_priv,
> -		  struct drm_panfrost_submit *args,
> -		  struct panfrost_job *job)
> -{
> -	u32 *handles;
> -	int ret = 0;
> -	int i, in_fence_count;
> -
> -	in_fence_count = args->in_sync_count;
> -
> -	if (!in_fence_count)
> -		return 0;
> -
> -	handles = kvmalloc_array(in_fence_count, sizeof(u32), GFP_KERNEL);
> -	if (!handles) {
> -		ret = -ENOMEM;
> -		DRM_DEBUG("Failed to allocate incoming syncobj handles\n");
> -		goto fail;
> -	}
> -
> -	if (copy_from_user(handles,
> -			   (void __user *)(uintptr_t)args->in_syncs,
> -			   in_fence_count * sizeof(u32))) {
> -		ret = -EFAULT;
> -		DRM_DEBUG("Failed to copy in syncobj handles\n");
> -		goto fail;
> -	}
> -
> -	for (i = 0; i < in_fence_count; i++) {
> -		struct dma_fence *fence;
> -
> -		ret = drm_syncobj_find_fence(file_priv, handles[i], 0, 0,
> -					     &fence);
> -		if (ret)
> -			goto fail;
> -
> -		ret = drm_gem_fence_array_add(&job->deps, fence);
> -
> -		if (ret)
> -			goto fail;
> -	}
> -
> -fail:
> -	kvfree(handles);
> -	return ret;
> -}
> -
> -static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
> -		struct drm_file *file)
> -{
> -	struct panfrost_device *pfdev = dev->dev_private;
> -	struct drm_panfrost_submit *args = data;
> -	struct drm_syncobj *sync_out = NULL;
> -	struct panfrost_submitqueue *queue;
> -	struct panfrost_job *job;
> -	int ret = 0;
> -
> -	if (!args->jc)
> -		return -EINVAL;
> -
> -	if (args->requirements && args->requirements != PANFROST_JD_REQ_FS)
> -		return -EINVAL;
> -
> -	queue = panfrost_submitqueue_get(file->driver_priv, 0);
> -	if (IS_ERR(queue))
> -		return PTR_ERR(queue);
> -
> -	if (args->out_sync > 0) {
> -		sync_out = drm_syncobj_find(file, args->out_sync);
> -		if (!sync_out) {
> -			ret = -ENODEV;
> -			goto fail_put_queue;
> -		}
> -	}
> -
> -	job = kzalloc(sizeof(*job), GFP_KERNEL);
> -	if (!job) {
> -		ret = -ENOMEM;
> -		goto fail_out_sync;
> -	}
> -
> -	kref_init(&job->refcount);
> -
> -	xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
> -
> -	job->pfdev = pfdev;
> -	job->jc = args->jc;
> -	job->requirements = args->requirements;
> -	job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
> -	job->file_priv = file->driver_priv;
> -
> -	ret = panfrost_copy_in_sync(dev, file, args, job);
> -	if (ret)
> -		goto fail_job;
> -
> -	ret = panfrost_lookup_bos(dev, file, args, job);
> -	if (ret)
> -		goto fail_job;
> -
> -	ret = panfrost_job_push(queue, job);
> -	if (ret)
> -		goto fail_job;
> -
> -	/* Update the return sync object for the job */
> -	if (sync_out)
> -		drm_syncobj_replace_fence(sync_out, job->render_done_fence);
> -
> -fail_job:
> -	panfrost_job_put(job);
> -fail_out_sync:
> -	if (sync_out)
> -		drm_syncobj_put(sync_out);
> -fail_put_queue:
> -	panfrost_submitqueue_put(queue);
> -
> -	return ret;
> -}
> -
>  static int
>  panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>  		       struct drm_file *file_priv)
> @@ -491,6 +313,389 @@ panfrost_ioctl_destroy_submitqueue(struct drm_device *dev, void *data,
>  	return panfrost_submitqueue_destroy(priv, id);
>  }
>  
> +#define PANFROST_BO_REF_ALLOWED_FLAGS \
> +	(PANFROST_BO_REF_EXCLUSIVE | PANFROST_BO_REF_NO_IMPLICIT_DEP)
> +
> +static int
> +panfrost_get_job_bos(struct drm_file *file_priv,
> +		     u64 refs, u32 ref_stride, u32 count,
> +		     struct panfrost_job *job)
> +{
> +	void __user *in = u64_to_user_ptr(refs);
> +	unsigned int i;
> +
> +	job->bo_count = count;
> +
> +	if (!count)
> +		return 0;
> +
> +	job->bos = kvmalloc_array(job->bo_count, sizeof(*job->bos),
> +				  GFP_KERNEL | __GFP_ZERO);
> +	job->bo_flags = kvmalloc_array(job->bo_count,
> +				       sizeof(*job->bo_flags),
> +				       GFP_KERNEL | __GFP_ZERO);
> +	if (!job->bos || !job->bo_flags)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < count; i++) {
> +		struct drm_panfrost_bo_ref ref = { };
> +		int ret;
> +
> +		ret = copy_struct_from_user(&ref, sizeof(ref),
> +					    in + (i * ref_stride),
> +					    ref_stride);
> +		if (ret)
> +			return ret;
> +
> +		/* Prior to the BATCH_SUBMIT ioctl all accessed BOs were
> +		 * treated as exclusive.
> +		 */
> +		if (ref_stride == sizeof(u32))
> +			ref.flags = PANFROST_BO_REF_EXCLUSIVE;
> +
> +		if ((ref.flags & ~PANFROST_BO_REF_ALLOWED_FLAGS))
> +			return -EINVAL;
> +
> +		job->bos[i] = drm_gem_object_lookup(file_priv, ref.handle);
> +		if (!job->bos[i])
> +			return -EINVAL;
> +
> +		job->bo_flags[i] = ref.flags;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +panfrost_get_job_in_syncs(struct drm_file *file_priv,
> +			  u64 refs, u32 ref_stride,
> +			  u32 count, struct panfrost_job *job)
> +{
> +	const void __user *in = u64_to_user_ptr(refs);
> +	unsigned int i;
> +	int ret;
> +
> +	if (!count)
> +		return 0;
> +
> +	for (i = 0; i < count; i++) {
> +		struct drm_panfrost_syncobj_ref ref = { };
> +		struct dma_fence *fence;
> +
> +		ret = copy_struct_from_user(&ref, sizeof(ref),
> +					    in + (i * ref_stride),
> +					    ref_stride);
> +		if (ret)
> +			return ret;
> +
> +		if (ref.pad)
> +			return -EINVAL;
> +
> +		ret = drm_syncobj_find_fence(file_priv, ref.handle, ref.point,
> +					     0, &fence);
> +		if (ret)
> +			return ret;
> +
> +		ret = drm_gem_fence_array_add(&job->deps, fence);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +struct panfrost_job_out_sync {
> +	struct drm_syncobj *syncobj;
> +	struct dma_fence_chain *chain;
> +	u64 point;
> +};
> +
> +static void
> +panfrost_put_job_out_syncs(struct panfrost_job_out_sync *out_syncs, u32 count)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < count; i++) {
> +		if (!out_syncs[i].syncobj)
> +			break;
> +
> +		drm_syncobj_put(out_syncs[i].syncobj);
> +		kvfree(out_syncs[i].chain);
> +	}
> +
> +	kvfree(out_syncs);
> +}
> +
> +static struct panfrost_job_out_sync *
> +panfrost_get_job_out_syncs(struct drm_file *file_priv,
> +			   u64 refs, u32 ref_stride,
> +			   u32 count)
> +{
> +	void __user *in = u64_to_user_ptr(refs);
> +	struct panfrost_job_out_sync *out_syncs;
> +	unsigned int i;
> +	int ret;
> +
> +	if (!count)
> +		return NULL;
> +
> +	/* If the syncobj ref_stride == sizeof(u32) we are called from the
> +	 * old submit ioctl() which only accepted one out syncobj. In that
> +	 * case the syncobj handle is passed directly through the
> +	 * ->out_syncs field, so let's make sure the refs fits in a u32.
> +	 */
> +	if (ref_stride == sizeof(u32) &&
> +	    (count != 1 || refs > UINT_MAX))
> +		return ERR_PTR(-EINVAL);
> +
> +	out_syncs = kvmalloc_array(count, sizeof(*out_syncs),
> +				   GFP_KERNEL | __GFP_ZERO);
> +	if (!out_syncs)
> +		return ERR_PTR(-ENOMEM);
> +
> +	for (i = 0; i < count; i++) {
> +		struct drm_panfrost_syncobj_ref ref = { };
> +
> +		if (ref_stride == sizeof(u32)) {
> +			/* Special case for the old submit wrapper: in that
> +			 * case there's only one out_sync, and the syncobj
> +			 * handle is passed directly in the out_syncs field.
> +			 */
> +			ref.handle = refs;
> +		} else {
> +			ret = copy_struct_from_user(&ref, sizeof(ref),
> +						    in + (i * ref_stride),
> +						    ref_stride);
> +			if (ret)
> +				goto err_free_out_syncs;
> +		}
> +
> +		if (ref.pad) {
> +			ret = -EINVAL;
> +			goto err_free_out_syncs;
> +		}
> +
> +		out_syncs[i].syncobj = drm_syncobj_find(file_priv, ref.handle);
> +		if (!out_syncs[i].syncobj) {
> +			ret = -ENODEV;
> +			goto err_free_out_syncs;
> +		}
> +
> +		out_syncs[i].point = ref.point;
> +		if (!out_syncs[i].point)
> +			continue;
> +
> +		out_syncs[i].chain = kmalloc(sizeof(*out_syncs[i].chain),
> +					     GFP_KERNEL);
> +		if (!out_syncs[i].chain) {
> +			ret = -ENOMEM;
> +			goto err_free_out_syncs;
> +		}
> +	}
> +
> +	return out_syncs;
> +
> +err_free_out_syncs:
> +	panfrost_put_job_out_syncs(out_syncs, count);
> +	return ERR_PTR(ret);
> +}
> +
> +static void
> +panfrost_set_job_out_fence(struct panfrost_job_out_sync *out_syncs,
> +			   unsigned int count, struct dma_fence *fence)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < count; i++) {
> +		if (out_syncs[i].chain) {
> +			drm_syncobj_add_point(out_syncs[i].syncobj,
> +					      out_syncs[i].chain,
> +					      fence, out_syncs[i].point);
> +			out_syncs[i].chain = NULL;
> +		} else {
> +			drm_syncobj_replace_fence(out_syncs[i].syncobj,
> +						  fence);
> +		}
> +	}
> +}
> +
> +struct panfrost_submit_ioctl_version_info {
> +	u32 job_stride;
> +	u32 bo_ref_stride;
> +	u32 syncobj_ref_stride;
> +};
> +
> +static const struct panfrost_submit_ioctl_version_info submit_versions[] = {
> +	/* SUBMIT */
> +	[0] = { 0, 4, 4 },
> +
> +	/* BATCH_SUBMIT v1 */
> +	[1] = { 48, 8, 16 },
> +};
> +
> +#define PANFROST_JD_ALLOWED_REQS PANFROST_JD_REQ_FS
> +
> +static int
> +panfrost_submit_job(struct drm_device *dev, struct drm_file *file_priv,
> +		    struct panfrost_submitqueue *queue,
> +		    const struct drm_panfrost_job *args,
> +		    u32 version)
> +{
> +	struct panfrost_device *pfdev = dev->dev_private;
> +	struct panfrost_job_out_sync *out_syncs;
> +	u32 bo_stride, syncobj_stride;
> +	struct panfrost_job *job;
> +	int ret;
> +
> +	if (!args->head)
> +		return -EINVAL;
> +
> +	if (args->requirements & ~PANFROST_JD_ALLOWED_REQS)
> +		return -EINVAL;
> +
> +	bo_stride = submit_versions[version].bo_ref_stride;
> +	syncobj_stride = submit_versions[version].syncobj_ref_stride;
> +
> +	job = kzalloc(sizeof(*job), GFP_KERNEL);
> +	if (!job)
> +		return -ENOMEM;
> +
> +	kref_init(&job->refcount);
> +
> +	job->pfdev = pfdev;
> +	job->jc = args->head;
> +	job->requirements = args->requirements;
> +	job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
> +	job->file_priv = file_priv->driver_priv;
> +	xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
> +
> +	ret = panfrost_get_job_in_syncs(file_priv,
> +					args->in_syncs,
> +					syncobj_stride,
> +					args->in_sync_count,
> +					job);
> +	if (ret)
> +		goto err_put_job;
> +
> +	out_syncs = panfrost_get_job_out_syncs(file_priv,
> +					       args->out_syncs,
> +					       syncobj_stride,
> +					       args->out_sync_count);
> +	if (IS_ERR(out_syncs)) {
> +		ret = PTR_ERR(out_syncs);
> +		goto err_put_job;
> +	}
> +
> +	ret = panfrost_get_job_bos(file_priv, args->bos, bo_stride,
> +				   args->bo_count, job);
> +	if (ret)
> +		goto err_put_job;
> +
> +	ret = panfrost_get_job_mappings(file_priv, job);
> +	if (ret)
> +		goto err_put_job;
> +
> +	ret = panfrost_job_push(queue, job);
> +	if (ret) {
> +		panfrost_put_job_out_syncs(out_syncs, args->out_sync_count);
> +		goto err_put_job;
> +	}
> +
> +	panfrost_set_job_out_fence(out_syncs, args->out_sync_count,
> +				   job->render_done_fence);
> +	panfrost_put_job_out_syncs(out_syncs, args->out_sync_count);
> +	return 0;
> +
> +err_put_job:
> +	panfrost_job_put(job);
> +	return ret;
> +}
> +
> +static int
> +panfrost_ioctl_submit(struct drm_device *dev, void *data,
> +		      struct drm_file *file)
> +{
> +	struct drm_panfrost_submit *args = data;
> +	struct drm_panfrost_job job_args = {
> +		.head = args->jc,
> +		.bos = args->bo_handles,
> +		.in_syncs = args->in_syncs,
> +
> +		/* We are abusing .out_syncs and passing the handle directly
> +		 * instead of a pointer to a user u32 array, but
> +		 * panfrost_submit_job() knows about it, so it's fine.
> +		 */
> +		.out_syncs = args->out_sync,
> +		.in_sync_count = args->in_sync_count,
> +		.out_sync_count = args->out_sync > 0 ? 1 : 0,
> +		.bo_count = args->bo_handle_count,
> +		.requirements = args->requirements
> +	};
> +	struct panfrost_submitqueue *queue;
> +	int ret;
> +
> +	queue = panfrost_submitqueue_get(file->driver_priv, 0);
> +	if (IS_ERR(queue))
> +		return PTR_ERR(queue);
> +
> +	ret = panfrost_submit_job(dev, file, queue, &job_args, 0);
> +	panfrost_submitqueue_put(queue);
> +
> +	return ret;
> +}
> +
> +static int
> +panfrost_ioctl_batch_submit(struct drm_device *dev, void *data,
> +			    struct drm_file *file_priv)
> +{
> +	struct drm_panfrost_batch_submit *args = data;
> +	void __user *jobs_args = u64_to_user_ptr(args->jobs);
> +	struct panfrost_submitqueue *queue;
> +	u32 version = args->version;
> +	u32 job_stride;
> +	unsigned int i;
> +	int ret;
> +
> +	/* Version 0 doesn't exist (it's reserved for the SUBMIT ioctl) */
> +	if (!version)
> +		return -EINVAL;
> +
> +	/* If the version specified is bigger than what we currently support,
> +	 * pick the last supported version and let copy_struct_from_user()
> +	 * check that any extra job, bo_ref and syncobj_ref fields are zeroed.
> +	 */
> +	if (version >= ARRAY_SIZE(submit_versions))
> +		version = ARRAY_SIZE(submit_versions) - 1;
> +
> +	queue = panfrost_submitqueue_get(file_priv->driver_priv, args->queue);
> +	if (IS_ERR(queue))
> +		return PTR_ERR(queue);
> +
> +	job_stride = submit_versions[version].job_stride;
> +	for (i = 0; i < args->job_count; i++) {
> +		struct drm_panfrost_job job_args = { };
> +
> +		ret = copy_struct_from_user(&job_args, sizeof(job_args),
> +					    jobs_args + (i * job_stride),
> +					    job_stride);
> +		if (ret) {
> +			args->fail_idx = i;
> +			goto out_put_queue;
> +		}
> +
> +		ret = panfrost_submit_job(dev, file_priv, queue, &job_args,
> +					  version);
> +		if (ret) {
> +			args->fail_idx = i;
> +			goto out_put_queue;
> +		}
> +	}
> +
> +out_put_queue:
> +	panfrost_submitqueue_put(queue);
> +	return 0;
> +}
> +
>  int panfrost_unstable_ioctl_check(void)
>  {
>  	if (!unstable_ioctls)
> @@ -572,6 +777,7 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
>  	PANFROST_IOCTL(MADVISE,		madvise,	DRM_RENDER_ALLOW),
>  	PANFROST_IOCTL(CREATE_SUBMITQUEUE, create_submitqueue, DRM_RENDER_ALLOW),
>  	PANFROST_IOCTL(DESTROY_SUBMITQUEUE, destroy_submitqueue, DRM_RENDER_ALLOW),
> +	PANFROST_IOCTL(BATCH_SUBMIT,	batch_submit,	DRM_RENDER_ALLOW),
>  };
>  
>  DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 56ae89272e19..4e1540bce865 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -254,6 +254,9 @@ static int panfrost_acquire_object_fences(struct panfrost_job *job)
>  				return ret;
>  		}
>  
> +		if (job->bo_flags[i] & PANFROST_BO_REF_NO_IMPLICIT_DEP)
> +			continue;

This breaks the dma_resv rules. I'll send out a patch set fixing this
pattern in other drivers and ping you on that for what you need to
change. It should go out today or so.

Also cc: Christian König.
-Daniel

> +
>  		ret = drm_gem_fence_array_add_implicit(&job->deps, job->bos[i],
>  						       exclusive);
>  		if (ret)
> diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
> index e31a22c176d9..5d534e61c28e 100644
> --- a/include/uapi/drm/panfrost_drm.h
> +++ b/include/uapi/drm/panfrost_drm.h
> @@ -23,6 +23,7 @@ extern "C" {
>  #define DRM_PANFROST_MADVISE			0x08
>  #define DRM_PANFROST_CREATE_SUBMITQUEUE		0x09
>  #define DRM_PANFROST_DESTROY_SUBMITQUEUE	0x0a
> +#define DRM_PANFROST_BATCH_SUBMIT		0x0b
>  
>  #define DRM_IOCTL_PANFROST_SUBMIT		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_SUBMIT, struct drm_panfrost_submit)
>  #define DRM_IOCTL_PANFROST_WAIT_BO		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_WAIT_BO, struct drm_panfrost_wait_bo)
> @@ -33,6 +34,7 @@ extern "C" {
>  #define DRM_IOCTL_PANFROST_MADVISE		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_MADVISE, struct drm_panfrost_madvise)
>  #define DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_CREATE_SUBMITQUEUE, struct drm_panfrost_create_submitqueue)
>  #define DRM_IOCTL_PANFROST_DESTROY_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_DESTROY_SUBMITQUEUE, __u32)
> +#define DRM_IOCTL_PANFROST_BATCH_SUBMIT		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_BATCH_SUBMIT, struct drm_panfrost_batch_submit)
>  
>  /*
>   * Unstable ioctl(s): only exposed when the unsafe unstable_ioctls module
> @@ -241,9 +243,99 @@ struct drm_panfrost_create_submitqueue {
>  	__u32 id;	/* out, identifier */
>  };
>  
> +/* Syncobj reference passed at job submission time to encode explicit
> + * input/output fences.
> + */
> +struct drm_panfrost_syncobj_ref {
> +	/** Syncobj handle */
> +	__u32 handle;
> +
> +	/** Padding field, must be set to 0 */
> +	__u32 pad;
> +
> +	/**
> +	 * For timeline syncobjs, the point on the timeline the reference
> +	 * points to. 0 for the last point.
> +	 * Must be set to 0 for non-timeline syncobjs
> +	 */
> +	__u64 point;
> +};
> +
>  /* Exclusive (AKA write) access to the BO */
>  #define PANFROST_BO_REF_EXCLUSIVE	0x1
>  
> +/* Disable the implicit depency on the BO fence */
> +#define PANFROST_BO_REF_NO_IMPLICIT_DEP	0x2
> +
> +/* Describes a BO referenced by a job and the type of access. */
> +struct drm_panfrost_bo_ref {
> +	/** A GEM handle */
> +	__u32 handle;
> +
> +	/** A combination of PANFROST_BO_REF_x flags */
> +	__u32 flags;
> +};
> +
> +/* Describes a GPU job and the resources attached to it. */
> +struct drm_panfrost_job {
> +	/** GPU pointer to the head of the job chain. */
> +	__u64 head;
> +
> +	/**
> +	 * Array of drm_panfrost_bo_ref objects describing the BOs referenced
> +	 * by this job.
> +	 */
> +	__u64 bos;
> +
> +	/**
> +	 * Arrays of drm_panfrost_syncobj_ref objects describing the input
> +	 * and output fences.
> +	 */
> +	__u64 in_syncs;
> +	__u64 out_syncs;
> +
> +	/** Syncobj reference array sizes. */
> +	__u32 in_sync_count;
> +	__u32 out_sync_count;
> +
> +	/** BO reference array size. */
> +	__u32 bo_count;
> +
> +	/** Combination of PANFROST_JD_REQ_* flags. */
> +	__u32 requirements;
> +};
> +
> +#define PANFROST_SUBMIT_BATCH_VERSION	1
> +
> +/* Used to submit multiple jobs in one call */
> +struct drm_panfrost_batch_submit {
> +	/**
> +	 * Always set to PANFROST_SUBMIT_BATCH_VERSION. This is used to let the
> +	 * kernel know about the size of the various structs passed to the
> +	 * BATCH_SUBMIT ioctl.
> +	 */
> +	__u32 version;
> +
> +	/** Number of jobs to submit. */
> +	__u32 job_count;
> +
> +	/* Pointer to a job array. */
> +	__u64 jobs;
> +
> +	/**
> +	 * ID of the queue to submit those jobs to. 0 is the default
> +	 * submit queue and should always exist. If you need a dedicated
> +	 * queue, create it with DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE.
> +	 */
> +	__u32 queue;
> +
> +	/**
> +	 * If the submission fails, this encodes the index of the job
> +	 * failed.
> +	 */
> +	__u32 fail_idx;
> +};
> +
>  #if defined(__cplusplus)
>  }
>  #endif
> -- 
> 2.31.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches
  2021-07-05  8:29 ` [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches Boris Brezillon
  2021-07-05  9:32   ` Daniel Vetter
@ 2021-07-05  9:42   ` Steven Price
  1 sibling, 0 replies; 14+ messages in thread
From: Steven Price @ 2021-07-05  9:42 UTC (permalink / raw)
  To: Boris Brezillon, Rob Herring, Tomeu Vizoso, Alyssa Rosenzweig,
	Robin Murphy
  Cc: Jason Ekstrand, dri-devel

On 05/07/2021 09:29, Boris Brezillon wrote:
> This should help limit the number of ioctls when submitting multiple
> jobs. The new ioctl also supports syncobj timelines and BO access flags.
> 
> v4:
> * Implement panfrost_ioctl_submit() as a wrapper around
>   panfrost_submit_job()
> * Replace stride fields by a version field which is mapped to
>   a <job_stride,bo_ref_stride,syncobj_ref_stride> tuple internally
> 
> v3:
> * Re-use panfrost_get_job_bos() and panfrost_get_job_in_syncs() in the
>   old submit path
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
>  drivers/gpu/drm/panfrost/panfrost_drv.c | 562 ++++++++++++++++--------
>  drivers/gpu/drm/panfrost/panfrost_job.c |   3 +
>  include/uapi/drm/panfrost_drm.h         |  92 ++++
>  3 files changed, 479 insertions(+), 178 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index 8e28ef30310b..a624e4f86aff 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -138,184 +138,6 @@ panfrost_get_job_mappings(struct drm_file *file_priv, struct panfrost_job *job)
>  	return 0;
>  }
>  
> -/**
> - * panfrost_lookup_bos() - Sets up job->bo[] with the GEM objects
> - * referenced by the job.
> - * @dev: DRM device
> - * @file_priv: DRM file for this fd
> - * @args: IOCTL args
> - * @job: job being set up
> - *
> - * Resolve handles from userspace to BOs and attach them to job.
> - *
> - * Note that this function doesn't need to unreference the BOs on
> - * failure, because that will happen at panfrost_job_cleanup() time.
> - */
> -static int
> -panfrost_lookup_bos(struct drm_device *dev,
> -		  struct drm_file *file_priv,
> -		  struct drm_panfrost_submit *args,
> -		  struct panfrost_job *job)
> -{
> -	unsigned int i;
> -	int ret;
> -
> -	job->bo_count = args->bo_handle_count;
> -
> -	if (!job->bo_count)
> -		return 0;
> -
> -	job->bo_flags = kvmalloc_array(job->bo_count,
> -				       sizeof(*job->bo_flags),
> -				       GFP_KERNEL | __GFP_ZERO);
> -	if (!job->bo_flags)
> -		return -ENOMEM;
> -
> -	for (i = 0; i < job->bo_count; i++)
> -		job->bo_flags[i] = PANFROST_BO_REF_EXCLUSIVE;
> -
> -	ret = drm_gem_objects_lookup(file_priv,
> -				     (void __user *)(uintptr_t)args->bo_handles,
> -				     job->bo_count, &job->bos);
> -	if (ret)
> -		return ret;
> -
> -	return panfrost_get_job_mappings(file_priv, job);
> -}
> -
> -/**
> - * panfrost_copy_in_sync() - Sets up job->deps with the sync objects
> - * referenced by the job.
> - * @dev: DRM device
> - * @file_priv: DRM file for this fd
> - * @args: IOCTL args
> - * @job: job being set up
> - *
> - * Resolve syncobjs from userspace to fences and attach them to job.
> - *
> - * Note that this function doesn't need to unreference the fences on
> - * failure, because that will happen at panfrost_job_cleanup() time.
> - */
> -static int
> -panfrost_copy_in_sync(struct drm_device *dev,
> -		  struct drm_file *file_priv,
> -		  struct drm_panfrost_submit *args,
> -		  struct panfrost_job *job)
> -{
> -	u32 *handles;
> -	int ret = 0;
> -	int i, in_fence_count;
> -
> -	in_fence_count = args->in_sync_count;
> -
> -	if (!in_fence_count)
> -		return 0;
> -
> -	handles = kvmalloc_array(in_fence_count, sizeof(u32), GFP_KERNEL);
> -	if (!handles) {
> -		ret = -ENOMEM;
> -		DRM_DEBUG("Failed to allocate incoming syncobj handles\n");
> -		goto fail;
> -	}
> -
> -	if (copy_from_user(handles,
> -			   (void __user *)(uintptr_t)args->in_syncs,
> -			   in_fence_count * sizeof(u32))) {
> -		ret = -EFAULT;
> -		DRM_DEBUG("Failed to copy in syncobj handles\n");
> -		goto fail;
> -	}
> -
> -	for (i = 0; i < in_fence_count; i++) {
> -		struct dma_fence *fence;
> -
> -		ret = drm_syncobj_find_fence(file_priv, handles[i], 0, 0,
> -					     &fence);
> -		if (ret)
> -			goto fail;
> -
> -		ret = drm_gem_fence_array_add(&job->deps, fence);
> -
> -		if (ret)
> -			goto fail;
> -	}
> -
> -fail:
> -	kvfree(handles);
> -	return ret;
> -}
> -
> -static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
> -		struct drm_file *file)
> -{
> -	struct panfrost_device *pfdev = dev->dev_private;
> -	struct drm_panfrost_submit *args = data;
> -	struct drm_syncobj *sync_out = NULL;
> -	struct panfrost_submitqueue *queue;
> -	struct panfrost_job *job;
> -	int ret = 0;
> -
> -	if (!args->jc)
> -		return -EINVAL;
> -
> -	if (args->requirements && args->requirements != PANFROST_JD_REQ_FS)
> -		return -EINVAL;
> -
> -	queue = panfrost_submitqueue_get(file->driver_priv, 0);
> -	if (IS_ERR(queue))
> -		return PTR_ERR(queue);
> -
> -	if (args->out_sync > 0) {
> -		sync_out = drm_syncobj_find(file, args->out_sync);
> -		if (!sync_out) {
> -			ret = -ENODEV;
> -			goto fail_put_queue;
> -		}
> -	}
> -
> -	job = kzalloc(sizeof(*job), GFP_KERNEL);
> -	if (!job) {
> -		ret = -ENOMEM;
> -		goto fail_out_sync;
> -	}
> -
> -	kref_init(&job->refcount);
> -
> -	xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
> -
> -	job->pfdev = pfdev;
> -	job->jc = args->jc;
> -	job->requirements = args->requirements;
> -	job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
> -	job->file_priv = file->driver_priv;
> -
> -	ret = panfrost_copy_in_sync(dev, file, args, job);
> -	if (ret)
> -		goto fail_job;
> -
> -	ret = panfrost_lookup_bos(dev, file, args, job);
> -	if (ret)
> -		goto fail_job;
> -
> -	ret = panfrost_job_push(queue, job);
> -	if (ret)
> -		goto fail_job;
> -
> -	/* Update the return sync object for the job */
> -	if (sync_out)
> -		drm_syncobj_replace_fence(sync_out, job->render_done_fence);
> -
> -fail_job:
> -	panfrost_job_put(job);
> -fail_out_sync:
> -	if (sync_out)
> -		drm_syncobj_put(sync_out);
> -fail_put_queue:
> -	panfrost_submitqueue_put(queue);
> -
> -	return ret;
> -}
> -
>  static int
>  panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>  		       struct drm_file *file_priv)
> @@ -491,6 +313,389 @@ panfrost_ioctl_destroy_submitqueue(struct drm_device *dev, void *data,
>  	return panfrost_submitqueue_destroy(priv, id);
>  }
>  
> +#define PANFROST_BO_REF_ALLOWED_FLAGS \
> +	(PANFROST_BO_REF_EXCLUSIVE | PANFROST_BO_REF_NO_IMPLICIT_DEP)
> +
> +static int
> +panfrost_get_job_bos(struct drm_file *file_priv,
> +		     u64 refs, u32 ref_stride, u32 count,
> +		     struct panfrost_job *job)
> +{
> +	void __user *in = u64_to_user_ptr(refs);
> +	unsigned int i;
> +
> +	job->bo_count = count;
> +
> +	if (!count)
> +		return 0;
> +
> +	job->bos = kvmalloc_array(job->bo_count, sizeof(*job->bos),
> +				  GFP_KERNEL | __GFP_ZERO);
> +	job->bo_flags = kvmalloc_array(job->bo_count,
> +				       sizeof(*job->bo_flags),
> +				       GFP_KERNEL | __GFP_ZERO);
> +	if (!job->bos || !job->bo_flags)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < count; i++) {
> +		struct drm_panfrost_bo_ref ref = { };
> +		int ret;
> +
> +		ret = copy_struct_from_user(&ref, sizeof(ref),
> +					    in + (i * ref_stride),
> +					    ref_stride);
> +		if (ret)
> +			return ret;
> +
> +		/* Prior to the BATCH_SUBMIT ioctl all accessed BOs were
> +		 * treated as exclusive.
> +		 */
> +		if (ref_stride == sizeof(u32))
> +			ref.flags = PANFROST_BO_REF_EXCLUSIVE;
> +
> +		if ((ref.flags & ~PANFROST_BO_REF_ALLOWED_FLAGS))
> +			return -EINVAL;
> +
> +		job->bos[i] = drm_gem_object_lookup(file_priv, ref.handle);
> +		if (!job->bos[i])
> +			return -EINVAL;
> +
> +		job->bo_flags[i] = ref.flags;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +panfrost_get_job_in_syncs(struct drm_file *file_priv,
> +			  u64 refs, u32 ref_stride,
> +			  u32 count, struct panfrost_job *job)
> +{
> +	const void __user *in = u64_to_user_ptr(refs);
> +	unsigned int i;
> +	int ret;
> +
> +	if (!count)
> +		return 0;
> +
> +	for (i = 0; i < count; i++) {
> +		struct drm_panfrost_syncobj_ref ref = { };
> +		struct dma_fence *fence;
> +
> +		ret = copy_struct_from_user(&ref, sizeof(ref),
> +					    in + (i * ref_stride),
> +					    ref_stride);
> +		if (ret)
> +			return ret;
> +
> +		if (ref.pad)
> +			return -EINVAL;
> +
> +		ret = drm_syncobj_find_fence(file_priv, ref.handle, ref.point,
> +					     0, &fence);
> +		if (ret)
> +			return ret;
> +
> +		ret = drm_gem_fence_array_add(&job->deps, fence);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +struct panfrost_job_out_sync {
> +	struct drm_syncobj *syncobj;
> +	struct dma_fence_chain *chain;
> +	u64 point;
> +};
> +
> +static void
> +panfrost_put_job_out_syncs(struct panfrost_job_out_sync *out_syncs, u32 count)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < count; i++) {
> +		if (!out_syncs[i].syncobj)
> +			break;
> +
> +		drm_syncobj_put(out_syncs[i].syncobj);
> +		kvfree(out_syncs[i].chain);
> +	}
> +
> +	kvfree(out_syncs);
> +}
> +
> +static struct panfrost_job_out_sync *
> +panfrost_get_job_out_syncs(struct drm_file *file_priv,
> +			   u64 refs, u32 ref_stride,
> +			   u32 count)
> +{
> +	void __user *in = u64_to_user_ptr(refs);
> +	struct panfrost_job_out_sync *out_syncs;
> +	unsigned int i;
> +	int ret;
> +
> +	if (!count)
> +		return NULL;
> +
> +	/* If the syncobj ref_stride == sizeof(u32) we are called from the
> +	 * old submit ioctl() which only accepted one out syncobj. In that
> +	 * case the syncobj handle is passed directly through the
> +	 * ->out_syncs field, so let's make sure the refs fits in a u32.
> +	 */
> +	if (ref_stride == sizeof(u32) &&
> +	    (count != 1 || refs > UINT_MAX))
> +		return ERR_PTR(-EINVAL);
> +
> +	out_syncs = kvmalloc_array(count, sizeof(*out_syncs),
> +				   GFP_KERNEL | __GFP_ZERO);
> +	if (!out_syncs)
> +		return ERR_PTR(-ENOMEM);
> +
> +	for (i = 0; i < count; i++) {
> +		struct drm_panfrost_syncobj_ref ref = { };
> +
> +		if (ref_stride == sizeof(u32)) {
> +			/* Special case for the old submit wrapper: in that
> +			 * case there's only one out_sync, and the syncobj
> +			 * handle is passed directly in the out_syncs field.
> +			 */
> +			ref.handle = refs;
> +		} else {
> +			ret = copy_struct_from_user(&ref, sizeof(ref),
> +						    in + (i * ref_stride),
> +						    ref_stride);
> +			if (ret)
> +				goto err_free_out_syncs;
> +		}
> +
> +		if (ref.pad) {
> +			ret = -EINVAL;
> +			goto err_free_out_syncs;
> +		}
> +
> +		out_syncs[i].syncobj = drm_syncobj_find(file_priv, ref.handle);
> +		if (!out_syncs[i].syncobj) {
> +			ret = -ENODEV;
> +			goto err_free_out_syncs;
> +		}
> +
> +		out_syncs[i].point = ref.point;
> +		if (!out_syncs[i].point)
> +			continue;
> +
> +		out_syncs[i].chain = kmalloc(sizeof(*out_syncs[i].chain),
> +					     GFP_KERNEL);
> +		if (!out_syncs[i].chain) {
> +			ret = -ENOMEM;
> +			goto err_free_out_syncs;
> +		}
> +	}
> +
> +	return out_syncs;
> +
> +err_free_out_syncs:
> +	panfrost_put_job_out_syncs(out_syncs, count);
> +	return ERR_PTR(ret);
> +}
> +
> +static void
> +panfrost_set_job_out_fence(struct panfrost_job_out_sync *out_syncs,
> +			   unsigned int count, struct dma_fence *fence)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < count; i++) {
> +		if (out_syncs[i].chain) {
> +			drm_syncobj_add_point(out_syncs[i].syncobj,
> +					      out_syncs[i].chain,
> +					      fence, out_syncs[i].point);
> +			out_syncs[i].chain = NULL;
> +		} else {
> +			drm_syncobj_replace_fence(out_syncs[i].syncobj,
> +						  fence);
> +		}
> +	}
> +}
> +
> +struct panfrost_submit_ioctl_version_info {
> +	u32 job_stride;
> +	u32 bo_ref_stride;
> +	u32 syncobj_ref_stride;
> +};
> +
> +static const struct panfrost_submit_ioctl_version_info submit_versions[] = {
> +	/* SUBMIT */
> +	[0] = { 0, 4, 4 },
> +
> +	/* BATCH_SUBMIT v1 */
> +	[1] = { 48, 8, 16 },
> +};

It might be worth putting something like this in:

 	BUILD_BUG_ON(sizeof(struct drm_panfrost_job) !=
 		     submit_versions[PANFROST_SUBMIT_BATCH_VERSION].job_stride);
 	BUILD_BUG_ON(sizeof(struct drm_panfrost_bo_ref) !=
 		     submit_versions[PANFROST_SUBMIT_BATCH_VERSION].bo_ref_stride);
 	BUILD_BUG_ON(sizeof(struct drm_panfrost_syncobj_ref) !=
 		     submit_versions[PANFROST_SUBMIT_BATCH_VERSION].syncobj_ref_stride);

That way we won't accidentally let it get out of sync.

> +
> +#define PANFROST_JD_ALLOWED_REQS PANFROST_JD_REQ_FS
> +
> +static int
> +panfrost_submit_job(struct drm_device *dev, struct drm_file *file_priv,
> +		    struct panfrost_submitqueue *queue,
> +		    const struct drm_panfrost_job *args,
> +		    u32 version)
> +{
> +	struct panfrost_device *pfdev = dev->dev_private;
> +	struct panfrost_job_out_sync *out_syncs;
> +	u32 bo_stride, syncobj_stride;
> +	struct panfrost_job *job;
> +	int ret;
> +
> +	if (!args->head)
> +		return -EINVAL;
> +
> +	if (args->requirements & ~PANFROST_JD_ALLOWED_REQS)
> +		return -EINVAL;
> +
> +	bo_stride = submit_versions[version].bo_ref_stride;
> +	syncobj_stride = submit_versions[version].syncobj_ref_stride;
> +
> +	job = kzalloc(sizeof(*job), GFP_KERNEL);
> +	if (!job)
> +		return -ENOMEM;
> +
> +	kref_init(&job->refcount);
> +
> +	job->pfdev = pfdev;
> +	job->jc = args->head;
> +	job->requirements = args->requirements;
> +	job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
> +	job->file_priv = file_priv->driver_priv;
> +	xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
> +
> +	ret = panfrost_get_job_in_syncs(file_priv,
> +					args->in_syncs,
> +					syncobj_stride,
> +					args->in_sync_count,
> +					job);
> +	if (ret)
> +		goto err_put_job;
> +
> +	out_syncs = panfrost_get_job_out_syncs(file_priv,
> +					       args->out_syncs,
> +					       syncobj_stride,
> +					       args->out_sync_count);
> +	if (IS_ERR(out_syncs)) {
> +		ret = PTR_ERR(out_syncs);
> +		goto err_put_job;
> +	}
> +
> +	ret = panfrost_get_job_bos(file_priv, args->bos, bo_stride,
> +				   args->bo_count, job);
> +	if (ret)
> +		goto err_put_job;
> +
> +	ret = panfrost_get_job_mappings(file_priv, job);
> +	if (ret)
> +		goto err_put_job;
> +
> +	ret = panfrost_job_push(queue, job);
> +	if (ret) {
> +		panfrost_put_job_out_syncs(out_syncs, args->out_sync_count);
> +		goto err_put_job;
> +	}
> +
> +	panfrost_set_job_out_fence(out_syncs, args->out_sync_count,
> +				   job->render_done_fence);
> +	panfrost_put_job_out_syncs(out_syncs, args->out_sync_count);
> +	return 0;
> +
> +err_put_job:
> +	panfrost_job_put(job);
> +	return ret;
> +}
> +
> +static int
> +panfrost_ioctl_submit(struct drm_device *dev, void *data,
> +		      struct drm_file *file)
> +{
> +	struct drm_panfrost_submit *args = data;
> +	struct drm_panfrost_job job_args = {
> +		.head = args->jc,
> +		.bos = args->bo_handles,
> +		.in_syncs = args->in_syncs,
> +
> +		/* We are abusing .out_syncs and passing the handle directly
> +		 * instead of a pointer to a user u32 array, but
> +		 * panfrost_submit_job() knows about it, so it's fine.
> +		 */
> +		.out_syncs = args->out_sync,
> +		.in_sync_count = args->in_sync_count,
> +		.out_sync_count = args->out_sync > 0 ? 1 : 0,
> +		.bo_count = args->bo_handle_count,
> +		.requirements = args->requirements
> +	};
> +	struct panfrost_submitqueue *queue;
> +	int ret;
> +
> +	queue = panfrost_submitqueue_get(file->driver_priv, 0);
> +	if (IS_ERR(queue))
> +		return PTR_ERR(queue);
> +
> +	ret = panfrost_submit_job(dev, file, queue, &job_args, 0);
> +	panfrost_submitqueue_put(queue);
> +
> +	return ret;
> +}
> +
> +static int
> +panfrost_ioctl_batch_submit(struct drm_device *dev, void *data,
> +			    struct drm_file *file_priv)
> +{
> +	struct drm_panfrost_batch_submit *args = data;
> +	void __user *jobs_args = u64_to_user_ptr(args->jobs);
> +	struct panfrost_submitqueue *queue;
> +	u32 version = args->version;
> +	u32 job_stride;
> +	unsigned int i;
> +	int ret;
> +
> +	/* Version 0 doesn't exist (it's reserved for the SUBMIT ioctl) */
> +	if (!version)
> +		return -EINVAL;
> +
> +	/* If the version specified is bigger than what we currently support,
> +	 * pick the last supported version and let copy_struct_from_user()
> +	 * check that any extra job, bo_ref and syncobj_ref fields are zeroed.
> +	 */
> +	if (version >= ARRAY_SIZE(submit_versions))
> +		version = ARRAY_SIZE(submit_versions) - 1;
> +
> +	queue = panfrost_submitqueue_get(file_priv->driver_priv, args->queue);
> +	if (IS_ERR(queue))
> +		return PTR_ERR(queue);
> +
> +	job_stride = submit_versions[version].job_stride;
> +	for (i = 0; i < args->job_count; i++) {
> +		struct drm_panfrost_job job_args = { };
> +
> +		ret = copy_struct_from_user(&job_args, sizeof(job_args),
> +					    jobs_args + (i * job_stride),
> +					    job_stride);
> +		if (ret) {
> +			args->fail_idx = i;
> +			goto out_put_queue;
> +		}
> +
> +		ret = panfrost_submit_job(dev, file_priv, queue, &job_args,
> +					  version);
> +		if (ret) {
> +			args->fail_idx = i;
> +			goto out_put_queue;
> +		}

NIT: You could avoid the repeated code with something like:

	ret = copy_struct_from_user(...);

	if (!ret)
		ret = panfrost_submit_job(...);
	if (ret) {
		args->fail_idx = i;
		goto out_put_queue;
	}

> +	}
> +
> +out_put_queue:
> +	panfrost_submitqueue_put(queue);
> +	return 0;

Shouldn't this be "return ret" so that it returns an error if any job
fails? (You will also need to init ret for the case where job_count==0).
Otherwise user space would need to set fail_idx to something invalid and
check if it's been modified to know about the failure.
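
E.g. something like this (untested sketch, combining it with the
repeated-code suggestion above):

	int ret = 0;
	...
	for (i = 0; i < args->job_count; i++) {
		struct drm_panfrost_job job_args = { };

		ret = copy_struct_from_user(&job_args, sizeof(job_args),
					    jobs_args + (i * job_stride),
					    job_stride);
		if (!ret)
			ret = panfrost_submit_job(dev, file_priv, queue,
						  &job_args, version);
		if (ret) {
			args->fail_idx = i;
			goto out_put_queue;
		}
	}

out_put_queue:
	panfrost_submitqueue_put(queue);
	return ret;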

> +}
> +
>  int panfrost_unstable_ioctl_check(void)
>  {
>  	if (!unstable_ioctls)
> @@ -572,6 +777,7 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
>  	PANFROST_IOCTL(MADVISE,		madvise,	DRM_RENDER_ALLOW),
>  	PANFROST_IOCTL(CREATE_SUBMITQUEUE, create_submitqueue, DRM_RENDER_ALLOW),
>  	PANFROST_IOCTL(DESTROY_SUBMITQUEUE, destroy_submitqueue, DRM_RENDER_ALLOW),
> +	PANFROST_IOCTL(BATCH_SUBMIT,	batch_submit,	DRM_RENDER_ALLOW),
>  };
>  
>  DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 56ae89272e19..4e1540bce865 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -254,6 +254,9 @@ static int panfrost_acquire_object_fences(struct panfrost_job *job)
>  				return ret;
>  		}
>  
> +		if (job->bo_flags[i] & PANFROST_BO_REF_NO_IMPLICIT_DEP)
> +			continue;
> +
>  		ret = drm_gem_fence_array_add_implicit(&job->deps, job->bos[i],
>  						       exclusive);
>  		if (ret)
> diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
> index e31a22c176d9..5d534e61c28e 100644
> --- a/include/uapi/drm/panfrost_drm.h
> +++ b/include/uapi/drm/panfrost_drm.h
> @@ -23,6 +23,7 @@ extern "C" {
>  #define DRM_PANFROST_MADVISE			0x08
>  #define DRM_PANFROST_CREATE_SUBMITQUEUE		0x09
>  #define DRM_PANFROST_DESTROY_SUBMITQUEUE	0x0a
> +#define DRM_PANFROST_BATCH_SUBMIT		0x0b
>  
>  #define DRM_IOCTL_PANFROST_SUBMIT		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_SUBMIT, struct drm_panfrost_submit)
>  #define DRM_IOCTL_PANFROST_WAIT_BO		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_WAIT_BO, struct drm_panfrost_wait_bo)
> @@ -33,6 +34,7 @@ extern "C" {
>  #define DRM_IOCTL_PANFROST_MADVISE		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_MADVISE, struct drm_panfrost_madvise)
>  #define DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_CREATE_SUBMITQUEUE, struct drm_panfrost_create_submitqueue)
>  #define DRM_IOCTL_PANFROST_DESTROY_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_DESTROY_SUBMITQUEUE, __u32)
> +#define DRM_IOCTL_PANFROST_BATCH_SUBMIT		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_BATCH_SUBMIT, struct drm_panfrost_batch_submit)
>  
>  /*
>   * Unstable ioctl(s): only exposed when the unsafe unstable_ioctls module
> @@ -241,9 +243,99 @@ struct drm_panfrost_create_submitqueue {
>  	__u32 id;	/* out, identifier */
>  };
>  
> +/* Syncobj reference passed at job submission time to encode explicit
> + * input/output fences.
> + */
> +struct drm_panfrost_syncobj_ref {
> +	/** Syncobj handle */
> +	__u32 handle;
> +
> +	/** Padding field, must be set to 0 */
> +	__u32 pad;
> +
> +	/**
> +	 * For timeline syncobjs, the point on the timeline the reference
> +	 * points to. 0 for the last point.
> +	 * Must be set to 0 for non-timeline syncobjs
> +	 */
> +	__u64 point;
> +};
> +
>  /* Exclusive (AKA write) access to the BO */
>  #define PANFROST_BO_REF_EXCLUSIVE	0x1
>  
> +/* Disable the implicit depency on the BO fence */

NIT: s/depency/dependency/

> +#define PANFROST_BO_REF_NO_IMPLICIT_DEP	0x2
> +
> +/* Describes a BO referenced by a job and the type of access. */
> +struct drm_panfrost_bo_ref {
> +	/** A GEM handle */
> +	__u32 handle;
> +
> +	/** A combination of PANFROST_BO_REF_x flags */
> +	__u32 flags;
> +};
> +
> +/* Describes a GPU job and the resources attached to it. */
> +struct drm_panfrost_job {
> +	/** GPU pointer to the head of the job chain. */
> +	__u64 head;
> +
> +	/**
> +	 * Array of drm_panfrost_bo_ref objects describing the BOs referenced
> +	 * by this job.
> +	 */
> +	__u64 bos;
> +
> +	/**
> +	 * Arrays of drm_panfrost_syncobj_ref objects describing the input
> +	 * and output fences.
> +	 */
> +	__u64 in_syncs;
> +	__u64 out_syncs;
> +
> +	/** Syncobj reference array sizes. */
> +	__u32 in_sync_count;
> +	__u32 out_sync_count;
> +
> +	/** BO reference array size. */
> +	__u32 bo_count;
> +
> +	/** Combination of PANFROST_JD_REQ_* flags. */
> +	__u32 requirements;
> +};
> +
> +#define PANFROST_SUBMIT_BATCH_VERSION	1
> +
> +/* Used to submit multiple jobs in one call */
> +struct drm_panfrost_batch_submit {
> +	/**
> +	 * Always set to PANFROST_SUBMIT_BATCH_VERSION. This is used to let the
> +	 * kernel know about the size of the various structs passed to the
> +	 * BATCH_SUBMIT ioctl.
> +	 */
> +	__u32 version;
> +
> +	/** Number of jobs to submit. */
> +	__u32 job_count;
> +
> +	/* Pointer to a job array. */
> +	__u64 jobs;
> +
> +	/**
> +	 * ID of the queue to submit those jobs to. 0 is the default
> +	 * submit queue and should always exists. If you need a dedicated
> +	 * queue, create it with DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE.
> +	 */
> +	__u32 queue;
> +
> +	/**
> +	 * If the submission fails, this encodes the index of the job
> +	 * failed.

NIT: "the index of the first job that failed"

Steve

> +	 */
> +	__u32 fail_idx;
> +};
> +
>  #if defined(__cplusplus)
>  }
>  #endif
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches
  2021-07-05  9:32   ` Daniel Vetter
@ 2021-07-08 12:10     ` Christian König
  2021-07-26 10:27       ` Boris Brezillon
  0 siblings, 1 reply; 14+ messages in thread
From: Christian König @ 2021-07-08 12:10 UTC (permalink / raw)
  To: Daniel Vetter, Boris Brezillon
  Cc: Tomeu Vizoso, dri-devel, Steven Price, Rob Herring,
	Alyssa Rosenzweig, Jason Ekstrand, Robin Murphy

Am 05.07.21 um 11:32 schrieb Daniel Vetter:
> On Mon, Jul 05, 2021 at 10:29:48AM +0200, Boris Brezillon wrote:
>> This should help limit the number of ioctls when submitting multiple
>> jobs. The new ioctl also supports syncobj timelines and BO access flags.
>>
>> v4:
>> * Implement panfrost_ioctl_submit() as a wrapper around
>>    panfrost_submit_job()
>> * Replace stride fields by a version field which is mapped to
>>    a <job_stride,bo_ref_stride,syncobj_ref_stride> tuple internally
>>
>> v3:
>> * Re-use panfrost_get_job_bos() and panfrost_get_job_in_syncs() in the
>>    old submit path
>>
>> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
>> ---
>>   drivers/gpu/drm/panfrost/panfrost_drv.c | 562 ++++++++++++++++--------
>>   drivers/gpu/drm/panfrost/panfrost_job.c |   3 +
>>   include/uapi/drm/panfrost_drm.h         |  92 ++++
>>   3 files changed, 479 insertions(+), 178 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
>> index 8e28ef30310b..a624e4f86aff 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
>> @@ -138,184 +138,6 @@ panfrost_get_job_mappings(struct drm_file *file_priv, struct panfrost_job *job)
>>   	return 0;
>>   }
>>   
>> -/**
>> - * panfrost_lookup_bos() - Sets up job->bo[] with the GEM objects
>> - * referenced by the job.
>> - * @dev: DRM device
>> - * @file_priv: DRM file for this fd
>> - * @args: IOCTL args
>> - * @job: job being set up
>> - *
>> - * Resolve handles from userspace to BOs and attach them to job.
>> - *
>> - * Note that this function doesn't need to unreference the BOs on
>> - * failure, because that will happen at panfrost_job_cleanup() time.
>> - */
>> -static int
>> -panfrost_lookup_bos(struct drm_device *dev,
>> -		  struct drm_file *file_priv,
>> -		  struct drm_panfrost_submit *args,
>> -		  struct panfrost_job *job)
>> -{
>> -	unsigned int i;
>> -	int ret;
>> -
>> -	job->bo_count = args->bo_handle_count;
>> -
>> -	if (!job->bo_count)
>> -		return 0;
>> -
>> -	job->bo_flags = kvmalloc_array(job->bo_count,
>> -				       sizeof(*job->bo_flags),
>> -				       GFP_KERNEL | __GFP_ZERO);
>> -	if (!job->bo_flags)
>> -		return -ENOMEM;
>> -
>> -	for (i = 0; i < job->bo_count; i++)
>> -		job->bo_flags[i] = PANFROST_BO_REF_EXCLUSIVE;
>> -
>> -	ret = drm_gem_objects_lookup(file_priv,
>> -				     (void __user *)(uintptr_t)args->bo_handles,
>> -				     job->bo_count, &job->bos);
>> -	if (ret)
>> -		return ret;
>> -
>> -	return panfrost_get_job_mappings(file_priv, job);
>> -}
>> -
>> -/**
>> - * panfrost_copy_in_sync() - Sets up job->deps with the sync objects
>> - * referenced by the job.
>> - * @dev: DRM device
>> - * @file_priv: DRM file for this fd
>> - * @args: IOCTL args
>> - * @job: job being set up
>> - *
>> - * Resolve syncobjs from userspace to fences and attach them to job.
>> - *
>> - * Note that this function doesn't need to unreference the fences on
>> - * failure, because that will happen at panfrost_job_cleanup() time.
>> - */
>> -static int
>> -panfrost_copy_in_sync(struct drm_device *dev,
>> -		  struct drm_file *file_priv,
>> -		  struct drm_panfrost_submit *args,
>> -		  struct panfrost_job *job)
>> -{
>> -	u32 *handles;
>> -	int ret = 0;
>> -	int i, in_fence_count;
>> -
>> -	in_fence_count = args->in_sync_count;
>> -
>> -	if (!in_fence_count)
>> -		return 0;
>> -
>> -	handles = kvmalloc_array(in_fence_count, sizeof(u32), GFP_KERNEL);
>> -	if (!handles) {
>> -		ret = -ENOMEM;
>> -		DRM_DEBUG("Failed to allocate incoming syncobj handles\n");
>> -		goto fail;
>> -	}
>> -
>> -	if (copy_from_user(handles,
>> -			   (void __user *)(uintptr_t)args->in_syncs,
>> -			   in_fence_count * sizeof(u32))) {
>> -		ret = -EFAULT;
>> -		DRM_DEBUG("Failed to copy in syncobj handles\n");
>> -		goto fail;
>> -	}
>> -
>> -	for (i = 0; i < in_fence_count; i++) {
>> -		struct dma_fence *fence;
>> -
>> -		ret = drm_syncobj_find_fence(file_priv, handles[i], 0, 0,
>> -					     &fence);
>> -		if (ret)
>> -			goto fail;
>> -
>> -		ret = drm_gem_fence_array_add(&job->deps, fence);
>> -
>> -		if (ret)
>> -			goto fail;
>> -	}
>> -
>> -fail:
>> -	kvfree(handles);
>> -	return ret;
>> -}
>> -
>> -static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>> -		struct drm_file *file)
>> -{
>> -	struct panfrost_device *pfdev = dev->dev_private;
>> -	struct drm_panfrost_submit *args = data;
>> -	struct drm_syncobj *sync_out = NULL;
>> -	struct panfrost_submitqueue *queue;
>> -	struct panfrost_job *job;
>> -	int ret = 0;
>> -
>> -	if (!args->jc)
>> -		return -EINVAL;
>> -
>> -	if (args->requirements && args->requirements != PANFROST_JD_REQ_FS)
>> -		return -EINVAL;
>> -
>> -	queue = panfrost_submitqueue_get(file->driver_priv, 0);
>> -	if (IS_ERR(queue))
>> -		return PTR_ERR(queue);
>> -
>> -	if (args->out_sync > 0) {
>> -		sync_out = drm_syncobj_find(file, args->out_sync);
>> -		if (!sync_out) {
>> -			ret = -ENODEV;
>> -			goto fail_put_queue;
>> -		}
>> -	}
>> -
>> -	job = kzalloc(sizeof(*job), GFP_KERNEL);
>> -	if (!job) {
>> -		ret = -ENOMEM;
>> -		goto fail_out_sync;
>> -	}
>> -
>> -	kref_init(&job->refcount);
>> -
>> -	xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
>> -
>> -	job->pfdev = pfdev;
>> -	job->jc = args->jc;
>> -	job->requirements = args->requirements;
>> -	job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
>> -	job->file_priv = file->driver_priv;
>> -
>> -	ret = panfrost_copy_in_sync(dev, file, args, job);
>> -	if (ret)
>> -		goto fail_job;
>> -
>> -	ret = panfrost_lookup_bos(dev, file, args, job);
>> -	if (ret)
>> -		goto fail_job;
>> -
>> -	ret = panfrost_job_push(queue, job);
>> -	if (ret)
>> -		goto fail_job;
>> -
>> -	/* Update the return sync object for the job */
>> -	if (sync_out)
>> -		drm_syncobj_replace_fence(sync_out, job->render_done_fence);
>> -
>> -fail_job:
>> -	panfrost_job_put(job);
>> -fail_out_sync:
>> -	if (sync_out)
>> -		drm_syncobj_put(sync_out);
>> -fail_put_queue:
>> -	panfrost_submitqueue_put(queue);
>> -
>> -	return ret;
>> -}
>> -
>>   static int
>>   panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
>>   		       struct drm_file *file_priv)
>> @@ -491,6 +313,389 @@ panfrost_ioctl_destroy_submitqueue(struct drm_device *dev, void *data,
>>   	return panfrost_submitqueue_destroy(priv, id);
>>   }
>>   
>> +#define PANFROST_BO_REF_ALLOWED_FLAGS \
>> +	(PANFROST_BO_REF_EXCLUSIVE | PANFROST_BO_REF_NO_IMPLICIT_DEP)
>> +
>> +static int
>> +panfrost_get_job_bos(struct drm_file *file_priv,
>> +		     u64 refs, u32 ref_stride, u32 count,
>> +		     struct panfrost_job *job)
>> +{
>> +	void __user *in = u64_to_user_ptr(refs);
>> +	unsigned int i;
>> +
>> +	job->bo_count = count;
>> +
>> +	if (!count)
>> +		return 0;
>> +
>> +	job->bos = kvmalloc_array(job->bo_count, sizeof(*job->bos),
>> +				  GFP_KERNEL | __GFP_ZERO);
>> +	job->bo_flags = kvmalloc_array(job->bo_count,
>> +				       sizeof(*job->bo_flags),
>> +				       GFP_KERNEL | __GFP_ZERO);
>> +	if (!job->bos || !job->bo_flags)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < count; i++) {
>> +		struct drm_panfrost_bo_ref ref = { };
>> +		int ret;
>> +
>> +		ret = copy_struct_from_user(&ref, sizeof(ref),
>> +					    in + (i * ref_stride),
>> +					    ref_stride);
>> +		if (ret)
>> +			return ret;
>> +
>> +		/* Prior to the BATCH_SUBMIT ioctl all accessed BOs were
>> +		 * treated as exclusive.
>> +		 */
>> +		if (ref_stride == sizeof(u32))
>> +			ref.flags = PANFROST_BO_REF_EXCLUSIVE;
>> +
>> +		if ((ref.flags & ~PANFROST_BO_REF_ALLOWED_FLAGS))
>> +			return -EINVAL;
>> +
>> +		job->bos[i] = drm_gem_object_lookup(file_priv, ref.handle);
>> +		if (!job->bos[i])
>> +			return -EINVAL;
>> +
>> +		job->bo_flags[i] = ref.flags;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int
>> +panfrost_get_job_in_syncs(struct drm_file *file_priv,
>> +			  u64 refs, u32 ref_stride,
>> +			  u32 count, struct panfrost_job *job)
>> +{
>> +	const void __user *in = u64_to_user_ptr(refs);
>> +	unsigned int i;
>> +	int ret;
>> +
>> +	if (!count)
>> +		return 0;
>> +
>> +	for (i = 0; i < count; i++) {
>> +		struct drm_panfrost_syncobj_ref ref = { };
>> +		struct dma_fence *fence;
>> +
>> +		ret = copy_struct_from_user(&ref, sizeof(ref),
>> +					    in + (i * ref_stride),
>> +					    ref_stride);
>> +		if (ret)
>> +			return ret;
>> +
>> +		if (ref.pad)
>> +			return -EINVAL;
>> +
>> +		ret = drm_syncobj_find_fence(file_priv, ref.handle, ref.point,
>> +					     0, &fence);
>> +		if (ret)
>> +			return ret;
>> +
>> +		ret = drm_gem_fence_array_add(&job->deps, fence);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +struct panfrost_job_out_sync {
>> +	struct drm_syncobj *syncobj;
>> +	struct dma_fence_chain *chain;
>> +	u64 point;
>> +};
>> +
>> +static void
>> +panfrost_put_job_out_syncs(struct panfrost_job_out_sync *out_syncs, u32 count)
>> +{
>> +	unsigned int i;
>> +
>> +	for (i = 0; i < count; i++) {
>> +		if (!out_syncs[i].syncobj)
>> +			break;
>> +
>> +		drm_syncobj_put(out_syncs[i].syncobj);
>> +		kvfree(out_syncs[i].chain);
>> +	}
>> +
>> +	kvfree(out_syncs);
>> +}
>> +
>> +static struct panfrost_job_out_sync *
>> +panfrost_get_job_out_syncs(struct drm_file *file_priv,
>> +			   u64 refs, u32 ref_stride,
>> +			   u32 count)
>> +{
>> +	void __user *in = u64_to_user_ptr(refs);
>> +	struct panfrost_job_out_sync *out_syncs;
>> +	unsigned int i;
>> +	int ret;
>> +
>> +	if (!count)
>> +		return NULL;
>> +
>> +	/* If the syncobj ref_stride == sizeof(u32) we are called from the
>> +	 * old submit ioctl() which only accepted one out syncobj. In that
>> +	 * case the syncobj handle is passed directly through the
>> +	 * ->out_syncs field, so let's make sure the refs fits in a u32.
>> +	 */
>> +	if (ref_stride == sizeof(u32) &&
>> +	    (count != 1 || refs > UINT_MAX))
>> +		return ERR_PTR(-EINVAL);
>> +
>> +	out_syncs = kvmalloc_array(count, sizeof(*out_syncs),
>> +				   GFP_KERNEL | __GFP_ZERO);
>> +	if (!out_syncs)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	for (i = 0; i < count; i++) {
>> +		struct drm_panfrost_syncobj_ref ref = { };
>> +
>> +		if (ref_stride == sizeof(u32)) {
>> +			/* Special case for the old submit wrapper: in that
>> +			 * case there's only one out_sync, and the syncobj
>> +			 * handle is passed directly in the out_syncs field.
>> +			 */
>> +			ref.handle = refs;
>> +		} else {
>> +			ret = copy_struct_from_user(&ref, sizeof(ref),
>> +						    in + (i * ref_stride),
>> +						    ref_stride);
>> +			if (ret)
>> +				goto err_free_out_syncs;
>> +		}
>> +
>> +		if (ref.pad) {
>> +			ret = -EINVAL;
>> +			goto err_free_out_syncs;
>> +		}
>> +
>> +		out_syncs[i].syncobj = drm_syncobj_find(file_priv, ref.handle);
>> +		if (!out_syncs[i].syncobj) {
>> +			ret = -ENODEV;
>> +			goto err_free_out_syncs;
>> +		}
>> +
>> +		out_syncs[i].point = ref.point;
>> +		if (!out_syncs[i].point)
>> +			continue;
>> +
>> +		out_syncs[i].chain = kmalloc(sizeof(*out_syncs[i].chain),
>> +					     GFP_KERNEL);
>> +		if (!out_syncs[i].chain) {
>> +			ret = -ENOMEM;
>> +			goto err_free_out_syncs;
>> +		}
>> +	}
>> +
>> +	return out_syncs;
>> +
>> +err_free_out_syncs:
>> +	panfrost_put_job_out_syncs(out_syncs, count);
>> +	return ERR_PTR(ret);
>> +}
>> +
>> +static void
>> +panfrost_set_job_out_fence(struct panfrost_job_out_sync *out_syncs,
>> +			   unsigned int count, struct dma_fence *fence)
>> +{
>> +	unsigned int i;
>> +
>> +	for (i = 0; i < count; i++) {
>> +		if (out_syncs[i].chain) {
>> +			drm_syncobj_add_point(out_syncs[i].syncobj,
>> +					      out_syncs[i].chain,
>> +					      fence, out_syncs[i].point);
>> +			out_syncs[i].chain = NULL;
>> +		} else {
>> +			drm_syncobj_replace_fence(out_syncs[i].syncobj,
>> +						  fence);
>> +		}
>> +	}
>> +}
>> +
>> +struct panfrost_submit_ioctl_version_info {
>> +	u32 job_stride;
>> +	u32 bo_ref_stride;
>> +	u32 syncobj_ref_stride;
>> +};
>> +
>> +static const struct panfrost_submit_ioctl_version_info submit_versions[] = {
>> +	/* SUBMIT */
>> +	[0] = { 0, 4, 4 },
>> +
>> +	/* BATCH_SUBMIT v1 */
>> +	[1] = { 48, 8, 16 },
>> +};
>> +
>> +#define PANFROST_JD_ALLOWED_REQS PANFROST_JD_REQ_FS
>> +
>> +static int
>> +panfrost_submit_job(struct drm_device *dev, struct drm_file *file_priv,
>> +		    struct panfrost_submitqueue *queue,
>> +		    const struct drm_panfrost_job *args,
>> +		    u32 version)
>> +{
>> +	struct panfrost_device *pfdev = dev->dev_private;
>> +	struct panfrost_job_out_sync *out_syncs;
>> +	u32 bo_stride, syncobj_stride;
>> +	struct panfrost_job *job;
>> +	int ret;
>> +
>> +	if (!args->head)
>> +		return -EINVAL;
>> +
>> +	if (args->requirements & ~PANFROST_JD_ALLOWED_REQS)
>> +		return -EINVAL;
>> +
>> +	bo_stride = submit_versions[version].bo_ref_stride;
>> +	syncobj_stride = submit_versions[version].syncobj_ref_stride;
>> +
>> +	job = kzalloc(sizeof(*job), GFP_KERNEL);
>> +	if (!job)
>> +		return -ENOMEM;
>> +
>> +	kref_init(&job->refcount);
>> +
>> +	job->pfdev = pfdev;
>> +	job->jc = args->head;
>> +	job->requirements = args->requirements;
>> +	job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
>> +	job->file_priv = file_priv->driver_priv;
>> +	xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
>> +
>> +	ret = panfrost_get_job_in_syncs(file_priv,
>> +					args->in_syncs,
>> +					syncobj_stride,
>> +					args->in_sync_count,
>> +					job);
>> +	if (ret)
>> +		goto err_put_job;
>> +
>> +	out_syncs = panfrost_get_job_out_syncs(file_priv,
>> +					       args->out_syncs,
>> +					       syncobj_stride,
>> +					       args->out_sync_count);
>> +	if (IS_ERR(out_syncs)) {
>> +		ret = PTR_ERR(out_syncs);
>> +		goto err_put_job;
>> +	}
>> +
>> +	ret = panfrost_get_job_bos(file_priv, args->bos, bo_stride,
>> +				   args->bo_count, job);
>> +	if (ret)
>> +		goto err_put_job;
>> +
>> +	ret = panfrost_get_job_mappings(file_priv, job);
>> +	if (ret)
>> +		goto err_put_job;
>> +
>> +	ret = panfrost_job_push(queue, job);
>> +	if (ret) {
>> +		panfrost_put_job_out_syncs(out_syncs, args->out_sync_count);
>> +		goto err_put_job;
>> +	}
>> +
>> +	panfrost_set_job_out_fence(out_syncs, args->out_sync_count,
>> +				   job->render_done_fence);
>> +	panfrost_put_job_out_syncs(out_syncs, args->out_sync_count);
>> +	return 0;
>> +
>> +err_put_job:
>> +	panfrost_job_put(job);
>> +	return ret;
>> +}
>> +
>> +static int
>> +panfrost_ioctl_submit(struct drm_device *dev, void *data,
>> +		      struct drm_file *file)
>> +{
>> +	struct drm_panfrost_submit *args = data;
>> +	struct drm_panfrost_job job_args = {
>> +		.head = args->jc,
>> +		.bos = args->bo_handles,
>> +		.in_syncs = args->in_syncs,
>> +
>> +		/* We are abusing .out_syncs and passing the handle directly
>> +		 * instead of a pointer to a user u32 array, but
>> +		 * panfrost_job_submit() knows about it, so it's fine.
>> +		 */
>> +		.out_syncs = args->out_sync,
>> +		.in_sync_count = args->in_sync_count,
>> +		.out_sync_count = args->out_sync > 0 ? 1 : 0,
>> +		.bo_count = args->bo_handle_count,
>> +		.requirements = args->requirements
>> +	};
>> +	struct panfrost_submitqueue *queue;
>> +	int ret;
>> +
>> +	queue = panfrost_submitqueue_get(file->driver_priv, 0);
>> +	if (IS_ERR(queue))
>> +		return PTR_ERR(queue);
>> +
>> +	ret = panfrost_submit_job(dev, file, queue, &job_args, 0);
>> +	panfrost_submitqueue_put(queue);
>> +
>> +	return ret;
>> +}
>> +
>> +static int
>> +panfrost_ioctl_batch_submit(struct drm_device *dev, void *data,
>> +			    struct drm_file *file_priv)
>> +{
>> +	struct drm_panfrost_batch_submit *args = data;
>> +	void __user *jobs_args = u64_to_user_ptr(args->jobs);
>> +	struct panfrost_submitqueue *queue;
>> +	u32 version = args->version;
>> +	u32 job_stride;
>> +	unsigned int i;
>> +	int ret;
>> +
>> +	/* Version 0 doesn't exists (it's reserved for the SUBMIT ioctl) */
>> +	if (!version)
>> +		return -EINVAL;
>> +
>> +	/* If the version specified is bigger than what we currently support,
>> +	 * pick the last supported version and let copy_struct_from_user()
>> +	 * check that any extra job, bo_ref and syncobj_ref fields are zeroed.
>> +	 */
>> +	if (version >= ARRAY_SIZE(submit_versions))
>> +		version = ARRAY_SIZE(submit_versions) - 1;
>> +
>> +	queue = panfrost_submitqueue_get(file_priv->driver_priv, args->queue);
>> +	if (IS_ERR(queue))
>> +		return PTR_ERR(queue);
>> +
>> +	job_stride = submit_versions[version].job_stride;
>> +	for (i = 0; i < args->job_count; i++) {
>> +		struct drm_panfrost_job job_args = { };
>> +
>> +		ret = copy_struct_from_user(&job_args, sizeof(job_args),
>> +					    jobs_args + (i * job_stride),
>> +					    job_stride);
>> +		if (ret) {
>> +			args->fail_idx = i;
>> +			goto out_put_queue;
>> +		}
>> +
>> +		ret = panfrost_submit_job(dev, file_priv, queue, &job_args,
>> +					  version);
>> +		if (ret) {
>> +			args->fail_idx = i;
>> +			goto out_put_queue;
>> +		}
>> +	}
>> +
>> +out_put_queue:
>> +	panfrost_submitqueue_put(queue);
>> +	return 0;
>> +}
>> +
>>   int panfrost_unstable_ioctl_check(void)
>>   {
>>   	if (!unstable_ioctls)
>> @@ -572,6 +777,7 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
>>   	PANFROST_IOCTL(MADVISE,		madvise,	DRM_RENDER_ALLOW),
>>   	PANFROST_IOCTL(CREATE_SUBMITQUEUE, create_submitqueue, DRM_RENDER_ALLOW),
>>   	PANFROST_IOCTL(DESTROY_SUBMITQUEUE, destroy_submitqueue, DRM_RENDER_ALLOW),
>> +	PANFROST_IOCTL(BATCH_SUBMIT,	batch_submit,	DRM_RENDER_ALLOW),
>>   };
>>   
>>   DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>> index 56ae89272e19..4e1540bce865 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>> @@ -254,6 +254,9 @@ static int panfrost_acquire_object_fences(struct panfrost_job *job)
>>   				return ret;
>>   		}
>>   
>> +		if (job->bo_flags[i] & PANFROST_BO_REF_NO_IMPLICIT_DEP)
>> +			continue;
> This breaks dma_resv rules. I'll send out patch set fixing this pattern in
> other drivers, I'll ping you on that for what you need to change. Should
> go out today or so.

I'm really wondering if the behavior where the exclusive fence replaces
all the shared fences was such a good idea.

It just allows drivers to mess up things in a way which can be easily 
used to compromise the system.

Christian.

>
> Also cc: Christian König.
> -Daniel
>
>> +
>>   		ret = drm_gem_fence_array_add_implicit(&job->deps, job->bos[i],
>>   						       exclusive);
>>   		if (ret)
>> diff --git a/include/uapi/drm/panfrost_drm.h b/include/uapi/drm/panfrost_drm.h
>> index e31a22c176d9..5d534e61c28e 100644
>> --- a/include/uapi/drm/panfrost_drm.h
>> +++ b/include/uapi/drm/panfrost_drm.h
>> @@ -23,6 +23,7 @@ extern "C" {
>>   #define DRM_PANFROST_MADVISE			0x08
>>   #define DRM_PANFROST_CREATE_SUBMITQUEUE		0x09
>>   #define DRM_PANFROST_DESTROY_SUBMITQUEUE	0x0a
>> +#define DRM_PANFROST_BATCH_SUBMIT		0x0b
>>   
>>   #define DRM_IOCTL_PANFROST_SUBMIT		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_SUBMIT, struct drm_panfrost_submit)
>>   #define DRM_IOCTL_PANFROST_WAIT_BO		DRM_IOW(DRM_COMMAND_BASE + DRM_PANFROST_WAIT_BO, struct drm_panfrost_wait_bo)
>> @@ -33,6 +34,7 @@ extern "C" {
>>   #define DRM_IOCTL_PANFROST_MADVISE		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_MADVISE, struct drm_panfrost_madvise)
>>   #define DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_CREATE_SUBMITQUEUE, struct drm_panfrost_create_submitqueue)
>>   #define DRM_IOCTL_PANFROST_DESTROY_SUBMITQUEUE	DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_DESTROY_SUBMITQUEUE, __u32)
>> +#define DRM_IOCTL_PANFROST_BATCH_SUBMIT		DRM_IOWR(DRM_COMMAND_BASE + DRM_PANFROST_BATCH_SUBMIT, struct drm_panfrost_batch_submit)
>>   
>>   /*
>>    * Unstable ioctl(s): only exposed when the unsafe unstable_ioctls module
>> @@ -241,9 +243,99 @@ struct drm_panfrost_create_submitqueue {
>>   	__u32 id;	/* out, identifier */
>>   };
>>   
>> +/* Syncobj reference passed at job submission time to encode explicit
>> + * input/output fences.
>> + */
>> +struct drm_panfrost_syncobj_ref {
>> +	/** Syncobj handle */
>> +	__u32 handle;
>> +
>> +	/** Padding field, must be set to 0 */
>> +	__u32 pad;
>> +
>> +	/**
>> +	 * For timeline syncobjs, the point on the timeline the reference
>> +	 * points to. 0 for the last point.
>> +	 * Must be set to 0 for non-timeline syncobjs
>> +	 */
>> +	__u64 point;
>> +};
>> +
>>   /* Exclusive (AKA write) access to the BO */
>>   #define PANFROST_BO_REF_EXCLUSIVE	0x1
>>   
>> +/* Disable the implicit depency on the BO fence */
>> +#define PANFROST_BO_REF_NO_IMPLICIT_DEP	0x2
>> +
>> +/* Describes a BO referenced by a job and the type of access. */
>> +struct drm_panfrost_bo_ref {
>> +	/** A GEM handle */
>> +	__u32 handle;
>> +
>> +	/** A combination of PANFROST_BO_REF_x flags */
>> +	__u32 flags;
>> +};
>> +
>> +/* Describes a GPU job and the resources attached to it. */
>> +struct drm_panfrost_job {
>> +	/** GPU pointer to the head of the job chain. */
>> +	__u64 head;
>> +
>> +	/**
>> +	 * Array of drm_panfrost_bo_ref objects describing the BOs referenced
>> +	 * by this job.
>> +	 */
>> +	__u64 bos;
>> +
>> +	/**
>> +	 * Arrays of drm_panfrost_syncobj_ref objects describing the input
>> +	 * and output fences.
>> +	 */
>> +	__u64 in_syncs;
>> +	__u64 out_syncs;
>> +
>> +	/** Syncobj reference array sizes. */
>> +	__u32 in_sync_count;
>> +	__u32 out_sync_count;
>> +
>> +	/** BO reference array size. */
>> +	__u32 bo_count;
>> +
>> +	/** Combination of PANFROST_JD_REQ_* flags. */
>> +	__u32 requirements;
>> +};
>> +
>> +#define PANFROST_SUBMIT_BATCH_VERSION	1
>> +
>> +/* Used to submit multiple jobs in one call */
>> +struct drm_panfrost_batch_submit {
>> +	/**
>> +	 * Always set to PANFROST_SUBMIT_BATCH_VERSION. This is used to let the
>> +	 * kernel know about the size of the various structs passed to the
>> +	 * BATCH_SUBMIT ioctl.
>> +	 */
>> +	__u32 version;
>> +
>> +	/** Number of jobs to submit. */
>> +	__u32 job_count;
>> +
>> +	/* Pointer to a job array. */
>> +	__u64 jobs;
>> +
>> +	/**
>> +	 * ID of the queue to submit those jobs to. 0 is the default
>> +	 * submit queue and should always exists. If you need a dedicated
>> +	 * queue, create it with DRM_IOCTL_PANFROST_CREATE_SUBMITQUEUE.
>> +	 */
>> +	__u32 queue;
>> +
>> +	/**
>> +	 * If the submission fails, this encodes the index of the job
>> +	 * failed.
>> +	 */
>> +	__u32 fail_idx;
>> +};
>> +
>>   #if defined(__cplusplus)
>>   }
>>   #endif
>> -- 
>> 2.31.1
>>


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches
  2021-07-08 12:10     ` Christian König
@ 2021-07-26 10:27       ` Boris Brezillon
  2021-07-27  9:17         ` Daniel Vetter
  0 siblings, 1 reply; 14+ messages in thread
From: Boris Brezillon @ 2021-07-26 10:27 UTC (permalink / raw)
  To: Christian König
  Cc: Jason Ekstrand, Tomeu Vizoso, dri-devel, Steven Price,
	Rob Herring, Alyssa Rosenzweig, Robin Murphy

On Thu, 8 Jul 2021 14:10:45 +0200
Christian König <ckoenig.leichtzumerken@gmail.com> wrote:

> >> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> >> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> >> @@ -254,6 +254,9 @@ static int panfrost_acquire_object_fences(struct panfrost_job *job)
> >>   				return ret;
> >>   		}
> >>   
> >> +		if (job->bo_flags[i] & PANFROST_BO_REF_NO_IMPLICIT_DEP)
> >> +			continue;  
> > This breaks dma_resv rules. I'll send out patch set fixing this pattern in
> > other drivers, I'll ping you on that for what you need to change. Should
> > go out today or so.

I guess you're talking about [1]. TBH, I don't quite see the point of
exposing a 'no-implicit' flag if we end up forcing this implicit dep
anyway, but I'm probably missing something.

> 
> I'm really wondering if the behavior where the exclusive fence replaces
> all the shared fences was such a good idea.

Is that what's done in [1], or are you talking about a different
patchset/approach?

> 
> It just allows drivers to mess up things in a way which can be easily 
> used to compromise the system.

I must admit I'm a bit lost, so I'm tempted to drop that flag for now
:-).

[1]https://patchwork.freedesktop.org/patch/443711/?series=92334&rev=3

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches
  2021-07-26 10:27       ` Boris Brezillon
@ 2021-07-27  9:17         ` Daniel Vetter
  0 siblings, 0 replies; 14+ messages in thread
From: Daniel Vetter @ 2021-07-27  9:17 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Jason Ekstrand, Tomeu Vizoso, Christian König, dri-devel,
	Steven Price, Rob Herring, Alyssa Rosenzweig, Robin Murphy

On Mon, Jul 26, 2021 at 12:27:06PM +0200, Boris Brezillon wrote:
> On Thu, 8 Jul 2021 14:10:45 +0200
> Christian König <ckoenig.leichtzumerken@gmail.com> wrote:
> 
> > >> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > >> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > >> @@ -254,6 +254,9 @@ static int panfrost_acquire_object_fences(struct panfrost_job *job)
> > >>   				return ret;
> > >>   		}
> > >>   
> > >> +		if (job->bo_flags[i] & PANFROST_BO_REF_NO_IMPLICIT_DEP)
> > >> +			continue;  
> > > This breaks dma_resv rules. I'll send out patch set fixing this pattern in
> > > other drivers, I'll ping you on that for what you need to change. Should
> > > go out today or so.
> 
> I guess you're talking about [1]. TBH, I don't quite see the point of
> exposing a 'no-implicit' flag if we end up forcing this implicit dep
> anyway, but I'm probably missing something.

Yeah that's the patch set.

Note that there are better ways to do this, it's just that these better ways
take more typing and need some actual testing (and ideally igts and
everything).

The NO_IMPLICIT flag is still useful even with the hacky solution, as long
as you don't set a write fence too. For that it might be better to look
into the dma_fence import patch from Jason:

https://lore.kernel.org/dri-devel/20210610210925.642582-7-jason@jlekstrand.net/
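
With the uAPI proposed here, that would roughly mean a BO reference like
the sketch below on the userspace side (illustrative only, the handle is
a placeholder):

	struct drm_panfrost_bo_ref ref = {
		.handle = bo_handle,
		/* No PANFROST_BO_REF_EXCLUSIVE, so no write/exclusive
		 * fence is attached, and NO_IMPLICIT_DEP skips the
		 * implicit dependency on the BO's existing fences.
		 */
		.flags = PANFROST_BO_REF_NO_IMPLICIT_DEP,
	};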

> > I'm really wondering if the behavior where the exclusive fence replaces
> > all the shared fences was such a good idea.
> 
> Is that what's done in [1], or are you talking about a different
> patchset/approach?

That's just how dma_resv works for the exclusive slot.
-Daniel

> 
> > 
> > It just allows drivers to mess up things in a way which can be easily 
> > used to compromise the system.
> 
> I must admit I'm a bit lost, so I'm tempted to drop that flag for now
> :-).
> 
> [1]https://patchwork.freedesktop.org/patch/443711/?series=92334&rev=3

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2021-07-27  9:17 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-05  8:29 [PATCH v4 0/7] drm/panfrost: drm/panfrost: Add a new submit ioctl Boris Brezillon
2021-07-05  8:29 ` [PATCH v4 1/7] drm/panfrost: Pass a job to panfrost_{acquire, attach}_object_fences() Boris Brezillon
2021-07-05  8:29 ` [PATCH v4 2/7] drm/panfrost: Move the mappings collection out of panfrost_lookup_bos() Boris Brezillon
2021-07-05  8:29 ` [PATCH v4 3/7] drm/panfrost: Add BO access flags to relax dependencies between jobs Boris Brezillon
2021-07-05  8:29 ` [PATCH v4 4/7] drm/panfrost: Add the ability to create submit queues Boris Brezillon
2021-07-05  8:56   ` Steven Price
2021-07-05  8:29 ` [PATCH v4 5/7] drm/panfrost: Add a new ioctl to submit batches Boris Brezillon
2021-07-05  9:32   ` Daniel Vetter
2021-07-08 12:10     ` Christian König
2021-07-26 10:27       ` Boris Brezillon
2021-07-27  9:17         ` Daniel Vetter
2021-07-05  9:42   ` Steven Price
2021-07-05  8:29 ` [PATCH v4 6/7] drm/panfrost: Advertise the SYNCOBJ_TIMELINE feature Boris Brezillon
2021-07-05  8:29 ` [PATCH v4 7/7] drm/panfrost: Bump minor version to reflect the feature additions Boris Brezillon
