* [PATCH 1/2] dma-buf: return index of the first signaled fence (v2)
From: Alex Deucher @ 2016-11-04 20:16 UTC (permalink / raw)
  To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
  Cc: Alex Deucher, sumit.semwal@linaro.org, monk.liu

From: "monk.liu" <monk.liu@amd.com>

Return the index of the first signaled fence.  This information
is useful in some APIs like Vulkan.

v2: rebase on drm-next (fence -> dma_fence)

Signed-off-by: monk.liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
---

This is the same patch set I sent out yesterday; I just
squashed the amdgpu patches together and rebased everything on
the fence -> dma_fence renaming.  This is used by our VK driver
and we are planning to use it in Mesa as well.
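
For reference, a minimal sketch (an editorial illustration, not part of
the patch) of how a caller consumes the new out-parameter; the fence
array and count are assumed to come from the caller's own bookkeeping:

	struct dma_fence *fences[4];	/* filled in by the caller */
	uint32_t first = 0;
	signed long r;

	/* Interruptible wait for any of the fences, up to one second. */
	r = dma_fence_wait_any_timeout(fences, 4, true,
				       msecs_to_jiffies(1000), &first);
	if (r > 0)
		pr_info("fence %u signaled first\n", first); /* idx valid */
	else if (r == 0)
		pr_info("wait timed out\n");	/* idx not written */
	/* r < 0: -ERESTARTSYS (interrupted) or -EINVAL */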

 drivers/dma-buf/dma-fence.c | 20 +++++++++++++++-----
 include/linux/dma-fence.h   |  2 +-
 2 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 9ef3c2f..dd00990 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -402,14 +402,18 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 EXPORT_SYMBOL(dma_fence_default_wait);
 
 static bool
-dma_fence_test_signaled_any(struct dma_fence **fences, uint32_t count)
+dma_fence_test_signaled_any(struct dma_fence **fences, uint32_t count,
+			    uint32_t *idx)
 {
 	int i;
 
 	for (i = 0; i < count; ++i) {
 		struct dma_fence *fence = fences[i];
-		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+			if (idx)
+				*idx = i;
 			return true;
+		}
 	}
 	return false;
 }
@@ -421,6 +425,7 @@ dma_fence_test_signaled_any(struct dma_fence **fences, uint32_t count)
  * @count:	[in]	number of fences to wait on
  * @intr:	[in]	if true, do an interruptible wait
  * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ * @idx:	[out]	the first signaled fence index, meaningful only on a positive return
  *
  * Returns -EINVAL on custom fence wait implementation, -ERESTARTSYS if
  * interrupted, 0 if the wait timed out, or the remaining timeout in jiffies
@@ -432,7 +437,7 @@ dma_fence_test_signaled_any(struct dma_fence **fences, uint32_t count)
  */
 signed long
 dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
-			   bool intr, signed long timeout)
+			   bool intr, signed long timeout, uint32_t *idx)
 {
 	struct default_wait_cb *cb;
 	signed long ret = timeout;
@@ -443,8 +448,11 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 
 	if (timeout == 0) {
 		for (i = 0; i < count; ++i)
-			if (dma_fence_is_signaled(fences[i]))
+			if (dma_fence_is_signaled(fences[i])) {
+				if (idx)
+					*idx = i;
 				return 1;
+			}
 
 		return 0;
 	}
@@ -467,6 +475,8 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		if (dma_fence_add_callback(fence, &cb[i].base,
 					   dma_fence_default_wait_cb)) {
 			/* This fence is already signaled */
+			if (idx)
+				*idx = i;
 			goto fence_rm_cb;
 		}
 	}
@@ -477,7 +487,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		else
 			set_current_state(TASK_UNINTERRUPTIBLE);
 
-		if (dma_fence_test_signaled_any(fences, count))
+		if (dma_fence_test_signaled_any(fences, count, idx))
 			break;
 
 		ret = schedule_timeout(ret);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index ba60c04..e578fe7 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -382,7 +382,7 @@ signed long dma_fence_wait_timeout(struct dma_fence *,
 				   bool intr, signed long timeout);
 signed long dma_fence_wait_any_timeout(struct dma_fence **fences,
 				       uint32_t count,
-				       bool intr, signed long timeout);
+				       bool intr, signed long timeout, uint32_t *idx);
 
 /**
  * dma_fence_wait - sleep until the fence gets signaled
-- 
2.5.5


* [PATCH 2/2] drm/amdgpu: add the interface of waiting multiple fences (v4)
From: Alex Deucher @ 2016-11-04 20:16 UTC (permalink / raw)
  To: amd-gfx, dri-devel; +Cc: Junwei Zhang, Alex Deucher

From: Junwei Zhang <Jerry.Zhang@amd.com>

v2: agd: rebase and squash in all the previous optimizations and
changes so everything compiles.
v3: squash in Slava's 32bit build fix
v4: rebase on drm-next (fence -> dma_fence),
    squash in Monk's ioctl update patch

Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Monk Liu <monk.liu@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h     |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 173 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c  |   2 +-
 include/uapi/drm/amdgpu_drm.h           |  28 ++++++
 5 files changed, 205 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index dc98ceb..7a94a3c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1212,6 +1212,8 @@ int amdgpu_gem_op_ioctl(struct drm_device *dev, void *data,
 			struct drm_file *filp);
 int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp);
 int amdgpu_cs_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *filp);
+int amdgpu_cs_wait_fences_ioctl(struct drm_device *dev, void *data,
+				struct drm_file *filp);
 
 int amdgpu_gem_metadata_ioctl(struct drm_device *dev, void *data,
 				struct drm_file *filp);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 2728805..2004836 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1130,6 +1130,179 @@ int amdgpu_cs_wait_ioctl(struct drm_device *dev, void *data,
 }
 
 /**
+ * amdgpu_cs_get_fence - helper to get fence from drm_amdgpu_fence
+ *
+ * @adev: amdgpu device
+ * @filp: file private
+ * @user: drm_amdgpu_fence copied from user space
+ */
+static struct dma_fence *amdgpu_cs_get_fence(struct amdgpu_device *adev,
+					     struct drm_file *filp,
+					     struct drm_amdgpu_fence *user)
+{
+	struct amdgpu_ring *ring;
+	struct amdgpu_ctx *ctx;
+	struct dma_fence *fence;
+	int r;
+
+	r = amdgpu_cs_get_ring(adev, user->ip_type, user->ip_instance,
+			       user->ring, &ring);
+	if (r)
+		return ERR_PTR(r);
+
+	ctx = amdgpu_ctx_get(filp->driver_priv, user->ctx_id);
+	if (ctx == NULL)
+		return ERR_PTR(-EINVAL);
+
+	fence = amdgpu_ctx_get_fence(ctx, ring, user->seq_no);
+	amdgpu_ctx_put(ctx);
+
+	return fence;
+}
+
+/**
+ * amdgpu_cs_wait_all_fences - wait on all fences to signal
+ *
+ * @adev: amdgpu device
+ * @filp: file private
+ * @wait: wait parameters
+ * @fences: array of drm_amdgpu_fence
+ */
+static int amdgpu_cs_wait_all_fences(struct amdgpu_device *adev,
+				     struct drm_file *filp,
+				     union drm_amdgpu_wait_fences *wait,
+				     struct drm_amdgpu_fence *fences)
+{
+	uint32_t fence_count = wait->in.fence_count;
+	unsigned i;
+	long r = 1;
+
+	for (i = 0; i < fence_count; i++) {
+		struct dma_fence *fence;
+		unsigned long timeout = amdgpu_gem_timeout(wait->in.timeout_ns);
+
+		fence = amdgpu_cs_get_fence(adev, filp, &fences[i]);
+		if (IS_ERR(fence))
+			return PTR_ERR(fence);
+		else if (!fence)
+			continue;
+
+		r = dma_fence_wait_timeout(fence, true, timeout);
+		if (r < 0)
+			return r;
+
+		if (r == 0)
+			break;
+	}
+
+	memset(wait, 0, sizeof(*wait));
+	wait->out.status = (r > 0);
+
+	return 0;
+}
+
+/**
+ * amdgpu_cs_wait_any_fence - wait on any fence to signal
+ *
+ * @adev: amdgpu device
+ * @filp: file private
+ * @wait: wait parameters
+ * @fences: array of drm_amdgpu_fence
+ */
+static int amdgpu_cs_wait_any_fence(struct amdgpu_device *adev,
+				    struct drm_file *filp,
+				    union drm_amdgpu_wait_fences *wait,
+				    struct drm_amdgpu_fence *fences)
+{
+	unsigned long timeout = amdgpu_gem_timeout(wait->in.timeout_ns);
+	uint32_t fence_count = wait->in.fence_count;
+	uint32_t first = ~0;
+	struct dma_fence **array;
+	unsigned i;
+	long r;
+
+	/* Prepare the fence array */
+	array = (struct dma_fence **)kcalloc(fence_count, sizeof(struct dma_fence *),
+			GFP_KERNEL);
+	if (array == NULL)
+		return -ENOMEM;
+
+	for (i = 0; i < fence_count; i++) {
+		struct dma_fence *fence;
+
+		fence = amdgpu_cs_get_fence(adev, filp, &fences[i]);
+		if (IS_ERR(fence)) {
+			r = PTR_ERR(fence);
+			goto err_free_fence_array;
+		} else if (fence) {
+			array[i] = fence;
+		} else { /* NULL, the fence has already been signaled */
+			r = 1;
+			goto out;
+		}
+	}
+
+	r = dma_fence_wait_any_timeout(array, fence_count, true, timeout, &first);
+	if (r < 0)
+		goto err_free_fence_array;
+
+out:
+	memset(wait, 0, sizeof(*wait));
+	wait->out.status = (r > 0);
+	wait->out.first_signaled = first;
+	/* set return value 0 to indicate success */
+	r = 0;
+
+err_free_fence_array:
+	for (i = 0; i < fence_count; i++)
+		dma_fence_put(array[i]);
+	kfree(array);
+
+	return r;
+}
+
+/**
+ * amdgpu_cs_wait_fences_ioctl - wait for multiple command submissions to finish
+ *
+ * @dev: drm device
+ * @data: data from userspace
+ * @filp: file private
+ */
+int amdgpu_cs_wait_fences_ioctl(struct drm_device *dev, void *data,
+				struct drm_file *filp)
+{
+	struct amdgpu_device *adev = dev->dev_private;
+	union drm_amdgpu_wait_fences *wait = data;
+	uint32_t fence_count = wait->in.fence_count;
+	struct drm_amdgpu_fence *fences_user;
+	struct drm_amdgpu_fence *fences;
+	int r;
+
+	/* Get the fences from userspace */
+	fences = kmalloc_array(fence_count, sizeof(struct drm_amdgpu_fence),
+			GFP_KERNEL);
+	if (fences == NULL)
+		return -ENOMEM;
+
+	fences_user = (void __user *)(unsigned long)(wait->in.fences);
+	if (copy_from_user(fences, fences_user,
+		sizeof(struct drm_amdgpu_fence) * fence_count)) {
+		r = -EFAULT;
+		goto err_free_fences;
+	}
+
+	if (wait->in.wait_all)
+		r = amdgpu_cs_wait_all_fences(adev, filp, wait, fences);
+	else
+		r = amdgpu_cs_wait_any_fence(adev, filp, wait, fences);
+
+err_free_fences:
+	kfree(fences);
+
+	return r;
+}
+
+/**
  * amdgpu_cs_find_bo_va - find bo_va for VM address
  *
  * @parser: command submission parser context
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index 8f48bed..d1cf9ac 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -825,6 +825,7 @@ const struct drm_ioctl_desc amdgpu_ioctls_kms[] = {
 	DRM_IOCTL_DEF_DRV(AMDGPU_CS, amdgpu_cs_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(AMDGPU_INFO, amdgpu_info_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(AMDGPU_WAIT_CS, amdgpu_cs_wait_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(AMDGPU_WAIT_FENCES, amdgpu_cs_wait_fences_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(AMDGPU_GEM_METADATA, amdgpu_gem_metadata_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(AMDGPU_GEM_VA, amdgpu_gem_va_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(AMDGPU_GEM_OP, amdgpu_gem_op_ioctl, DRM_AUTH|DRM_RENDER_ALLOW),
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c
index fd26c4b..035f714 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c
@@ -361,7 +361,7 @@ int amdgpu_sa_bo_new(struct amdgpu_sa_manager *sa_manager,
 		if (count) {
 			spin_unlock(&sa_manager->wq.lock);
 			t = dma_fence_wait_any_timeout(fences, count, false,
-						       MAX_SCHEDULE_TIMEOUT);
+						       MAX_SCHEDULE_TIMEOUT, NULL);
 			for (i = 0; i < count; ++i)
 				dma_fence_put(fences[i]);
 
diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index 4684f37..2191a9e 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -50,6 +50,7 @@ extern "C" {
 #define DRM_AMDGPU_WAIT_CS		0x09
 #define DRM_AMDGPU_GEM_OP		0x10
 #define DRM_AMDGPU_GEM_USERPTR		0x11
+#define DRM_AMDGPU_WAIT_FENCES		0x12
 
 #define DRM_IOCTL_AMDGPU_GEM_CREATE	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_CREATE, union drm_amdgpu_gem_create)
 #define DRM_IOCTL_AMDGPU_GEM_MMAP	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_MMAP, union drm_amdgpu_gem_mmap)
@@ -63,6 +64,7 @@ extern "C" {
 #define DRM_IOCTL_AMDGPU_WAIT_CS	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_WAIT_CS, union drm_amdgpu_wait_cs)
 #define DRM_IOCTL_AMDGPU_GEM_OP		DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_OP, struct drm_amdgpu_gem_op)
 #define DRM_IOCTL_AMDGPU_GEM_USERPTR	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_USERPTR, struct drm_amdgpu_gem_userptr)
+#define DRM_IOCTL_AMDGPU_WAIT_FENCES	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_WAIT_FENCES, union drm_amdgpu_wait_fences)
 
 #define AMDGPU_GEM_DOMAIN_CPU		0x1
 #define AMDGPU_GEM_DOMAIN_GTT		0x2
@@ -307,6 +309,32 @@ union drm_amdgpu_wait_cs {
 	struct drm_amdgpu_wait_cs_out out;
 };
 
+struct drm_amdgpu_fence {
+	__u32 ctx_id;
+	__u32 ip_type;
+	__u32 ip_instance;
+	__u32 ring;
+	__u64 seq_no;
+};
+
+struct drm_amdgpu_wait_fences_in {
+	/** User-space pointer to an array of struct drm_amdgpu_fence */
+	__u64 fences;
+	__u32 fence_count;
+	__u32 wait_all;
+	__u64 timeout_ns;
+};
+
+struct drm_amdgpu_wait_fences_out {
+	__u32 status;
+	__u32 first_signaled;
+};
+
+union drm_amdgpu_wait_fences {
+	struct drm_amdgpu_wait_fences_in in;
+	struct drm_amdgpu_wait_fences_out out;
+};
+
 #define AMDGPU_GEM_OP_GET_GEM_CREATE_INFO	0
 #define AMDGPU_GEM_OP_SET_PLACEMENT		1
 
-- 
2.5.5
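
For reference, a minimal user-space sketch (an editorial illustration,
not part of the patch) of driving the new ioctl; error handling is
trimmed and the drm_amdgpu_fence values are assumed to come from
earlier CS submissions:

	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <drm/amdgpu_drm.h>

	/* fd: open render node; fences: n descriptors from prior CS ioctls */
	static int wait_any(int fd, struct drm_amdgpu_fence *fences, uint32_t n)
	{
		union drm_amdgpu_wait_fences args;

		memset(&args, 0, sizeof(args));
		args.in.fences = (uintptr_t)fences;	/* user pointer as __u64 */
		args.in.fence_count = n;
		args.in.wait_all = 0;			/* 0 = any, 1 = all */
		args.in.timeout_ns = 1000000000ull;	/* see amdgpu_gem_timeout() */

		if (ioctl(fd, DRM_IOCTL_AMDGPU_WAIT_FENCES, &args))
			return -1;
		/* status != 0: a fence signaled; first_signaled is its index */
		return args.out.status ? (int)args.out.first_signaled : -1;
	}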


* Re: [PATCH 1/2] dma-buf: return index of the first signaled fence (v2)
From: Sumit Semwal @ 2016-11-04 22:03 UTC (permalink / raw)
  To: Alex Deucher; +Cc: monk.liu, Alex Deucher, DRI mailing list, amd-gfx

Hi Alex,

Thanks for the patches.

On 4 November 2016 at 14:16, Alex Deucher <alexdeucher@gmail.com> wrote:
> From: "monk.liu" <monk.liu@amd.com>
>
> Return the index of the first signaled fence.  This information
> is useful in some APIs like Vulkan.
>
> v2: rebase on drm-next (fence -> dma_fence)
>
> Signed-off-by: monk.liu <monk.liu@amd.com>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> ---
>
> This is the same patch set I sent out yesterday; I just
> squashed the amdgpu patches together and rebased everything on
> the fence -> dma_fence renaming.  This is used by our VK driver
> and we are planning to use it in Mesa as well.
>

Would you be ok if I apply this and the amdgpu patch both together via
drm-misc, or would you like me to notify you once I merge this for you
to take the amdgpu patch via your tree? I'm fine either way, but
perhaps drm-misc would be a bit neater.

>  drivers/dma-buf/dma-fence.c | 20 +++++++++++++++-----
>  include/linux/dma-fence.h   |  2 +-
>  2 files changed, 16 insertions(+), 6 deletions(-)
>

Best regards,
Sumit.

* Re: [PATCH 1/2] dma-buf: return index of the first signaled fence (v2)
From: Christian König @ 2016-11-05 12:22 UTC (permalink / raw)
  To: Alex Deucher, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org
  Cc: Alex Deucher, sumit.semwal@linaro.org, monk.liu

On 04.11.2016 at 21:16, Alex Deucher wrote:
> From: "monk.liu" <monk.liu@amd.com>
>
> Return the index of the first signaled fence.  This information
> is useful in some APIs like Vulkan.
>
> v2: rebase on drm-next (fence -> dma_fence)
>
> Signed-off-by: monk.liu <monk.liu@amd.com>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>

Both patches are Reviewed-by: Christian König <christian.koenig@amd.com>.


* Re: [PATCH 2/2] drm/amdgpu: add the interface of waiting multiple fences (v4)
From: Gustavo Padovan @ 2016-11-07  1:10 UTC (permalink / raw)
  To: Alex Deucher; +Cc: Junwei Zhang, Alex Deucher, dri-devel, amd-gfx

Hi Alex,

2016-11-04 Alex Deucher <alexdeucher@gmail.com>:

> From: Junwei Zhang <Jerry.Zhang@amd.com>
> 
> v2: agd: rebase and squash in all the previous optimizations and
> changes so everything compiles.
> v3: squash in Slava's 32bit build fix
> v4: rebase on drm-next (fence -> dma_fence),
>     squash in Monk's ioctl update patch
>
> [...]
>
> +	if (wait->in.wait_all)
> +		r = amdgpu_cs_wait_all_fences(adev, filp, wait, fences);
> +	else
> +		r = amdgpu_cs_wait_any_fence(adev, filp, wait, fences);

I wonder if it wouldn't be better to use a fence_array here and
register callbacks to get notified of the first signaled fence in the "any"
case. It seems to me that we could simplify this code by using a fence_array.
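
Roughly, such a conversion could look like this (a sketch only, assuming
the dma_fence_array_create()/signal_on_any API from
linux/dma-fence-array.h; getting the index of the first signaled member
back out is the part that would still need a callback):

	#include <linux/dma-fence-array.h>

	/* 'array' and 'fence_count' as in amdgpu_cs_wait_any_fence() */
	struct dma_fence_array *fa;

	fa = dma_fence_array_create(fence_count, array,
				    dma_fence_context_alloc(1), 1,
				    true /* signal_on_any */);
	if (!fa)
		return -ENOMEM;

	/* The array fence signals as soon as any member signals. */
	r = dma_fence_wait_timeout(&fa->base, true, timeout);
	dma_fence_put(&fa->base);	/* also drops the member references */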

Gustavo


* Re: [PATCH 2/2] drm/amdgpu: add the interface of waiting multiple fences (v4)
From: Christian König @ 2016-11-07  8:04 UTC (permalink / raw)
  To: Gustavo Padovan, Alex Deucher, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, Junwei Zhang, Alex Deucher

On 07.11.2016 at 02:10, Gustavo Padovan wrote:
> Hi Alex,
>
> 2016-11-04 Alex Deucher <alexdeucher@gmail.com>:
>
>> [...]
> I wonder if it wouldn't be better to use a fence_array here and
> register callbacks to get notified of the first signaled fence in the "any"
> case. It seems to me that we could simplify this code by using a fence_array.

I had this code in mind as well when working on the fence_array.

But this code actually precedes the fence_array implementation, so I 
would like to push it upstream unchanged and then clean it up to use the 
fence array.

That would make our backporting efforts a bit easier and shouldn't
affect upstream too much in any way.

Regards,
Christian.

>
> Gustavo

* Re: [PATCH 1/2] dma-buf: return index of the first signaled fence (v2)
From: Alex Deucher @ 2016-11-07 17:44 UTC (permalink / raw)
  To: Sumit Semwal; +Cc: monk.liu, Alex Deucher, DRI mailing list, amd-gfx list

On Fri, Nov 4, 2016 at 6:03 PM, Sumit Semwal <sumit.semwal@linaro.org> wrote:
> Hi Alex,
>
> Thanks for the patches.
>
> On 4 November 2016 at 14:16, Alex Deucher <alexdeucher@gmail.com> wrote:
>> From: "monk.liu" <monk.liu@amd.com>
>>
>> Return the index of the first signaled fence.  This information
>> is useful in some APIs like Vulkan.
>>
>> v2: rebase on drm-next (fence -> dma_fence)
>>
>> Signed-off-by: monk.liu <monk.liu@amd.com>
>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>> ---
>>
>> This is the same patch set I sent out yesterday; I just
>> squashed the amdgpu patches together and rebased everything on
>> the fence -> dma_fence renaming.  This is used by our VK driver
>> and we are planning to use it in Mesa as well.
>>
>
> Would you be ok if I apply this and the amdgpu patch both together via
> drm-misc, or would you like me to notify you once I merge this for you
> > to take the amdgpu patch via your tree? I'm fine either way, but
> perhaps drm-misc would be a bit neater.
>

Either way works for me.  Whatever is easier for you.

Alex

* Re: [PATCH 1/2] dma-buf: return index of the first signaled fence (v2)
From: Sumit Semwal @ 2016-11-07 22:42 UTC (permalink / raw)
  To: Alex Deucher; +Cc: Alex Deucher, DRI mailing list, amd-gfx list, monk.liu


Hi Alex,

On 07-Nov-2016 11:14 PM, "Alex Deucher" <alexdeucher@gmail.com> wrote:
>
> On Fri, Nov 4, 2016 at 6:03 PM, Sumit Semwal <sumit.semwal@linaro.org> wrote:
> > Hi Alex,
> >
> > Thanks for the patches.
> >
> > On 4 November 2016 at 14:16, Alex Deucher <alexdeucher@gmail.com> wrote:
> >> From: "monk.liu" <monk.liu@amd.com>
> >>
> >> Return the index of the first signaled fence.  This information
> >> is useful in some APIs like Vulkan.
> >>
> >> v2: rebase on drm-next (fence -> dma_fence)
> >>
> >> Signed-off-by: monk.liu <monk.liu@amd.com>
> >> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> >> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> >> ---
> >>
> >> This is the same patch set I sent out yesterday; I just
> >> squashed the amdgpu patches together and rebased everything on
> >> the fence -> dma_fence renaming.  This is used by our VK driver
> >> and we are planning to use it in Mesa as well.
> >>
> >
> > Would you be ok if I apply this and the amdgpu patch both together via
> > drm-misc, or would you like me to notify you once I merge this for you
> > to take the amdgpu patch via your tree? I'm fine either way, but
> > perhaps drm-misc would be a bit neater.
> >
>
> Either way works for me.  Whatever is easier for you.
>
Thanks, will take these and Christian's patches through the drm-misc tree,
hopefully today. (Returning from LPC and just landed in my home city, 4 am
here, but I hope to push these today!)
> Alex

Best,
Sumit.


* Re: [PATCH 2/2] drm/amdgpu: add the interface of waiting multiple fences (v4)
From: Gustavo Padovan @ 2016-11-08  0:36 UTC (permalink / raw)
  To: Christian König; +Cc: Junwei Zhang, Alex Deucher, dri-devel, amd-gfx

2016-11-07 Christian König <deathsimple@vodafone.de>:

> On 07.11.2016 at 02:10, Gustavo Padovan wrote:
> > Hi Alex,
> > 
> > 2016-11-04 Alex Deucher <alexdeucher@gmail.com>:
> > 
> > > [...]
> > I wonder if it wouldn't be better to use a fence_array here and
> > register callbacks to get notified of the first signaled fence in the "any"
> > case. It seems to me that we could simplify this code by using a fence_array.
> 
> I had this code in mind as well when working on the fence_array.
> 
> But this code actually precedes the fence_array implementation, so I would
> like to push it upstream unchanged and then clean it up to use the fence
> array.
> 
> That would make our backporting efforts a bit easier and shouldn't affect
> upstream too much in any way.

That sounds good to me. Should we add an extra patch to this
patchset to do the conversion right away?

Gustavo


* Re: [PATCH 1/2] dma-buf: return index of the first signaled fence (v2)
From: Sumit Semwal @ 2016-11-08 19:34 UTC (permalink / raw)
  To: Alex Deucher; +Cc: Alex Deucher, DRI mailing list, amd-gfx list, monk.liu

Hi Alex, Christian,

On 8 November 2016 at 04:12, Sumit Semwal <sumit.semwal@linaro.org> wrote:
> Hi Alex,
>
> On 07-Nov-2016 11:14 PM, "Alex Deucher" <alexdeucher@gmail.com> wrote:
>>
>> On Fri, Nov 4, 2016 at 6:03 PM, Sumit Semwal <sumit.semwal@linaro.org>
>> wrote:
>> > Hi Alex,
>> >
>> > Thanks for the patches.
>> >
>> > On 4 November 2016 at 14:16, Alex Deucher <alexdeucher@gmail.com> wrote:
>> >> From: "monk.liu" <monk.liu@amd.com>
>> >>
>> >> Return the index of the first signaled fence.  This information
>> >> is useful in some APIs like Vulkan.
>> >>
>> >> v2: rebase on drm-next (fence -> dma_fence)
>> >>
>> >> Signed-off-by: monk.liu <monk.liu@amd.com>
>> >> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>> >> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>> >> ---
>> >>
>> >> This is the same patch set I sent out yesterday; I just
>> >> squashed the amdgpu patches together and rebased everything on
>> >> the fence -> dma_fence renaming.  This is used by our VK driver
>> >> and we are planning to use it in Mesa as well.
>> >>
>> >
>> > Would you be ok if I apply this and the amdgpu patch both together via
>> > drm-misc, or would you like me to notify you once I merge this for you
>> > to take the amdgpu patch via your tree? I'm fine either way, but
>> > perhaps drm-misc would be a bit neater.
>> >
>>
>> Either way works for me.  Whatever is easier for you.
>>
> Thanks, will take these and Christian's patches through the drm-misc tree,
> hopefully today. (Returning from LPC and just landed in my home city, 4 am
> here, but I hope to push these today!)
>> Alex
>

Applied to drm-misc; Thanks!

> Best,
> Sumit.
