* [RFC PATCH 0/2] Attempt to avoid dma-fence-[chain|array] lockdep splats
@ 2021-11-30 12:19 ` Thomas Hellström
  0 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 12:19 UTC (permalink / raw)
  To: intel-gfx, dri-devel
  Cc: linaro-mm-sig, Thomas Hellström, matthew.auld, Christian König

As we introduce more usage of dma-fence-chain and dma-fence-array in the
i915 driver, we start to hit lockdep splats due to the recursive fence
locking in the dma-fence-chain and dma-fence-array containers.
This is a humble suggestion to try to establish a dma-fence locking order
(patch 1) and to avoid excessive recursive locking in these containers
(patch 2).

Thomas Hellström (2):
  dma-fence: Avoid establishing a locking order between fence classes
  dma-fence: Avoid excessive recursive fence locking from
    enable_signaling() callbacks

 drivers/dma-buf/dma-fence-array.c | 23 +++++++--
 drivers/dma-buf/dma-fence-chain.c | 12 ++++-
 drivers/dma-buf/dma-fence.c       | 79 +++++++++++++++++++++----------
 include/linux/dma-fence.h         |  4 ++
 4 files changed, 89 insertions(+), 29 deletions(-)

Cc: linaro-mm-sig@lists.linaro.org
Cc: dri-devel@lists.freedesktop.org
Cc: Christian König <christian.koenig@amd.com>

-- 
2.31.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 12:19 ` [Intel-gfx] " Thomas Hellström
@ 2021-11-30 12:19   ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 12:19 UTC (permalink / raw)
  To: intel-gfx, dri-devel
  Cc: linaro-mm-sig, Thomas Hellström, matthew.auld, Christian König

The locking order for taking two fence locks is implicitly defined in
at least two ways in the code:

1) Fence containers first and other fences next, which is defined by
the enable_signaling() callbacks of dma_fence_chain and
dma_fence_array.
2) Reverse signal order, which is used by __i915_active_fence_set().

Now 1) implies 2), except for the signal_on_any mode of dma_fence_array,
but 2) does not imply 1). Also, 1) makes the locking order between
different containers confusing.

Establish 2) and fix up the signal_on_any mode by calling
enable_signaling() on such fences unlocked at creation.

Cc: linaro-mm-sig@lists.linaro.org
Cc: dri-devel@lists.freedesktop.org
Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/dma-buf/dma-fence-array.c | 13 +++--
 drivers/dma-buf/dma-fence-chain.c |  3 +-
 drivers/dma-buf/dma-fence.c       | 79 +++++++++++++++++++++----------
 include/linux/dma-fence.h         |  3 ++
 4 files changed, 69 insertions(+), 29 deletions(-)

diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
index 3e07f961e2f3..0322b92909fe 100644
--- a/drivers/dma-buf/dma-fence-array.c
+++ b/drivers/dma-buf/dma-fence-array.c
@@ -84,8 +84,8 @@ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
 		 * insufficient).
 		 */
 		dma_fence_get(&array->base);
-		if (dma_fence_add_callback(array->fences[i], &cb[i].cb,
-					   dma_fence_array_cb_func)) {
+		if (dma_fence_add_callback_nested(array->fences[i], &cb[i].cb,
+						  dma_fence_array_cb_func)) {
 			int error = array->fences[i]->error;
 
 			dma_fence_array_set_pending_error(array, error);
@@ -158,6 +158,7 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 {
 	struct dma_fence_array *array;
 	size_t size = sizeof(*array);
+	struct dma_fence *fence;
 
 	/* Allocate the callback structures behind the array. */
 	size += num_fences * sizeof(struct dma_fence_array_cb);
@@ -165,8 +166,9 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 	if (!array)
 		return NULL;
 
+	fence = &array->base;
 	spin_lock_init(&array->lock);
-	dma_fence_init(&array->base, &dma_fence_array_ops, &array->lock,
+	dma_fence_init(fence, &dma_fence_array_ops, &array->lock,
 		       context, seqno);
 	init_irq_work(&array->work, irq_dma_fence_array_work);
 
@@ -174,7 +176,10 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 	atomic_set(&array->num_pending, signal_on_any ? 1 : num_fences);
 	array->fences = fences;
 
-	array->base.error = PENDING_ERROR;
+	fence->error = PENDING_ERROR;
+
+	if (signal_on_any)
+		dma_fence_enable_sw_signaling(fence);
 
 	return array;
 }
diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
index 1b4cb3e5cec9..0518e53880f6 100644
--- a/drivers/dma-buf/dma-fence-chain.c
+++ b/drivers/dma-buf/dma-fence-chain.c
@@ -152,7 +152,8 @@ static bool dma_fence_chain_enable_signaling(struct dma_fence *fence)
 		struct dma_fence *f = chain ? chain->fence : fence;
 
 		dma_fence_get(f);
-		if (!dma_fence_add_callback(f, &head->cb, dma_fence_chain_cb)) {
+		if (!dma_fence_add_callback_nested(f, &head->cb,
+						   dma_fence_chain_cb)) {
 			dma_fence_put(fence);
 			return true;
 		}
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 066400ed8841..90a3d5121746 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -610,6 +610,37 @@ void dma_fence_enable_sw_signaling(struct dma_fence *fence)
 }
 EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
 
+static int __dma_fence_add_callback(struct dma_fence *fence,
+				    struct dma_fence_cb *cb,
+				    dma_fence_func_t func,
+				    int nest_level)
+{
+	unsigned long flags;
+	int ret = 0;
+
+	if (WARN_ON(!fence || !func))
+		return -EINVAL;
+
+	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		INIT_LIST_HEAD(&cb->node);
+		return -ENOENT;
+	}
+
+	spin_lock_irqsave_nested(fence->lock, flags, 0);
+
+	if (__dma_fence_enable_signaling(fence)) {
+		cb->func = func;
+		list_add_tail(&cb->node, &fence->cb_list);
+	} else {
+		INIT_LIST_HEAD(&cb->node);
+		ret = -ENOENT;
+	}
+
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return ret;
+}
+
 /**
  * dma_fence_add_callback - add a callback to be called when the fence
  * is signaled
@@ -635,33 +666,33 @@ EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
 int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
 			   dma_fence_func_t func)
 {
-	unsigned long flags;
-	int ret = 0;
-
-	if (WARN_ON(!fence || !func))
-		return -EINVAL;
-
-	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
-		INIT_LIST_HEAD(&cb->node);
-		return -ENOENT;
-	}
-
-	spin_lock_irqsave(fence->lock, flags);
-
-	if (__dma_fence_enable_signaling(fence)) {
-		cb->func = func;
-		list_add_tail(&cb->node, &fence->cb_list);
-	} else {
-		INIT_LIST_HEAD(&cb->node);
-		ret = -ENOENT;
-	}
-
-	spin_unlock_irqrestore(fence->lock, flags);
-
-	return ret;
+	return __dma_fence_add_callback(fence, cb, func, 0);
 }
 EXPORT_SYMBOL(dma_fence_add_callback);
 
+/**
+ * dma_fence_add_callback_nested - add a callback from within a fence locked
+ * section to be called when the fence is signaled
+ * @fence: the fence to wait on
+ * @cb: the callback to register
+ * @func: the function to call
+ *
+ * This function is identical to dma_fence_add_callback() except it is
+ * intended to be used from within a section where the fence lock of
+ * another fence might be held, and where it is guaranteed that the
+ * other fence will signal _after_ @fence.
+ *
+ * Returns 0 in case of success, -ENOENT if the fence is already signaled
+ * and -EINVAL in case of error.
+ */
+int dma_fence_add_callback_nested(struct dma_fence *fence,
+				  struct dma_fence_cb *cb,
+				  dma_fence_func_t func)
+{
+	return __dma_fence_add_callback(fence, cb, func, SINGLE_DEPTH_NESTING);
+}
+EXPORT_SYMBOL(dma_fence_add_callback_nested);
+
 /**
  * dma_fence_get_status - returns the status upon completion
  * @fence: the dma_fence to query
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 1ea691753bd3..405cd83936f6 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -377,6 +377,9 @@ signed long dma_fence_default_wait(struct dma_fence *fence,
 int dma_fence_add_callback(struct dma_fence *fence,
 			   struct dma_fence_cb *cb,
 			   dma_fence_func_t func);
+int dma_fence_add_callback_nested(struct dma_fence *fence,
+				  struct dma_fence_cb *cb,
+				  dma_fence_func_t func);
 bool dma_fence_remove_callback(struct dma_fence *fence,
 			       struct dma_fence_cb *cb);
 void dma_fence_enable_sw_signaling(struct dma_fence *fence);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 65+ messages in thread


* [RFC PATCH 2/2] dma-fence: Avoid excessive recursive fence locking from enable_signaling() callbacks
  2021-11-30 12:19 ` [Intel-gfx] " Thomas Hellström
@ 2021-11-30 12:19   ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 12:19 UTC (permalink / raw)
  To: intel-gfx, dri-devel
  Cc: linaro-mm-sig, Thomas Hellström, matthew.auld, Christian König

Some dma-fence containers take other fences' locks from their
enable_signaling() callbacks. We allow one level of nesting through
the dma_fence_add_callback_nested() function, but we would also like to
allow, for example, a dma_fence_chain to point to a dma_fence_array and
vice versa, even though that would create additional levels of nesting.

To do that we need to break longer recursive chains of fence locking.
We can do that either by deferring dma_fence_add_callback_nested() to
a worker for affected fences, or by calling enable_signaling() early on
affected fences. Opt for the latter, and define a
DMA_FENCE_FLAG_LOCK_RECURSIVE_BIT for fence classes that take fence
locks recursively from within their enable_signaling() callback.

Note that a user could of course also call enable_signaling() manually
on these fences before publishing them, but this solution attempts to
do that only when necessary.

Cc: Christian König <christian.koenig@amd.com>
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/dma-buf/dma-fence-array.c | 12 +++++++++++-
 drivers/dma-buf/dma-fence-chain.c |  9 +++++++++
 include/linux/dma-fence.h         |  1 +
 3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
index 0322b92909fe..63ae9909bcfa 100644
--- a/drivers/dma-buf/dma-fence-array.c
+++ b/drivers/dma-buf/dma-fence-array.c
@@ -178,8 +178,18 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 
 	fence->error = PENDING_ERROR;
 
-	if (signal_on_any)
+	set_bit(DMA_FENCE_FLAG_LOCK_RECURSIVE_BIT, &fence->flags);
+
+	if (signal_on_any) {
 		dma_fence_enable_sw_signaling(fence);
+	} else {
+		int i;
+
+		for (i = 0; i < num_fences; i++, fences++)
+			if (test_bit(DMA_FENCE_FLAG_LOCK_RECURSIVE_BIT,
+				     &(*fences)->flags))
+				dma_fence_enable_sw_signaling(*fences);
+	}
 
 	return array;
 }
diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
index 0518e53880f6..b4012dbef0c9 100644
--- a/drivers/dma-buf/dma-fence-chain.c
+++ b/drivers/dma-buf/dma-fence-chain.c
@@ -255,5 +255,14 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,
 
 	dma_fence_init(&chain->base, &dma_fence_chain_ops,
 		       &chain->lock, context, seqno);
+
+	set_bit(DMA_FENCE_FLAG_LOCK_RECURSIVE_BIT, &chain->base.flags);
+	if (test_bit(DMA_FENCE_FLAG_LOCK_RECURSIVE_BIT, &fence->flags)) {
+		/*
+	 * Disable further calls into @fence's enable_signaling()
+	 * to prohibit further recursive locking.
+		 */
+		dma_fence_enable_sw_signaling(fence);
+	}
 }
 EXPORT_SYMBOL(dma_fence_chain_init);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 405cd83936f6..48bf5e14636e 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -99,6 +99,7 @@ enum dma_fence_flag_bits {
 	DMA_FENCE_FLAG_SIGNALED_BIT,
 	DMA_FENCE_FLAG_TIMESTAMP_BIT,
 	DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
+	DMA_FENCE_FLAG_LOCK_RECURSIVE_BIT,
 	DMA_FENCE_FLAG_USER_BITS, /* must always be last member */
 };
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 65+ messages in thread


* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 12:19   ` [Intel-gfx] " Thomas Hellström
@ 2021-11-30 12:25     ` Maarten Lankhorst
  -1 siblings, 0 replies; 65+ messages in thread
From: Maarten Lankhorst @ 2021-11-30 12:25 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld, Christian König

On 30-11-2021 13:19, Thomas Hellström wrote:
> The locking order for taking two fence locks is implicitly defined in
> at least two ways in the code:
>
> 1) Fence containers first and other fences next, which is defined by
> the enable_signaling() callbacks of dma_fence_chain and
> dma_fence_array.
> 2) Reverse signal order, which is used by __i915_active_fence_set().
>
> Now 1) implies 2), except for the signal_on_any mode of dma_fence_array
> and 2) does not imply 1), and also 1) makes locking order between
> different containers confusing.
>
> Establish 2) and fix up the signal_on_any mode by calling
> enable_signaling() on such fences unlocked at creation.
>
> Cc: linaro-mm-sig@lists.linaro.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: Christian König <christian.koenig@amd.com>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>  drivers/dma-buf/dma-fence-array.c | 13 +++--
>  drivers/dma-buf/dma-fence-chain.c |  3 +-
>  drivers/dma-buf/dma-fence.c       | 79 +++++++++++++++++++++----------
>  include/linux/dma-fence.h         |  3 ++
>  4 files changed, 69 insertions(+), 29 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
> index 3e07f961e2f3..0322b92909fe 100644
> --- a/drivers/dma-buf/dma-fence-array.c
> +++ b/drivers/dma-buf/dma-fence-array.c
> @@ -84,8 +84,8 @@ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
>  		 * insufficient).
>  		 */
>  		dma_fence_get(&array->base);
> -		if (dma_fence_add_callback(array->fences[i], &cb[i].cb,
> -					   dma_fence_array_cb_func)) {
> +		if (dma_fence_add_callback_nested(array->fences[i], &cb[i].cb,
> +						  dma_fence_array_cb_func)) {
>  			int error = array->fences[i]->error;
>  
>  			dma_fence_array_set_pending_error(array, error);
> @@ -158,6 +158,7 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
>  {
>  	struct dma_fence_array *array;
>  	size_t size = sizeof(*array);
> +	struct dma_fence *fence;
>  
>  	/* Allocate the callback structures behind the array. */
>  	size += num_fences * sizeof(struct dma_fence_array_cb);
> @@ -165,8 +166,9 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
>  	if (!array)
>  		return NULL;
>  
> +	fence = &array->base;
>  	spin_lock_init(&array->lock);
> -	dma_fence_init(&array->base, &dma_fence_array_ops, &array->lock,
> +	dma_fence_init(fence, &dma_fence_array_ops, &array->lock,
>  		       context, seqno);
>  	init_irq_work(&array->work, irq_dma_fence_array_work);
>  
> @@ -174,7 +176,10 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
>  	atomic_set(&array->num_pending, signal_on_any ? 1 : num_fences);
>  	array->fences = fences;
>  
> -	array->base.error = PENDING_ERROR;
> +	fence->error = PENDING_ERROR;
> +
> +	if (signal_on_any)
> +		dma_fence_enable_sw_signaling(fence);
>  
>  	return array;
>  }
> diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
> index 1b4cb3e5cec9..0518e53880f6 100644
> --- a/drivers/dma-buf/dma-fence-chain.c
> +++ b/drivers/dma-buf/dma-fence-chain.c
> @@ -152,7 +152,8 @@ static bool dma_fence_chain_enable_signaling(struct dma_fence *fence)
>  		struct dma_fence *f = chain ? chain->fence : fence;
>  
>  		dma_fence_get(f);
> -		if (!dma_fence_add_callback(f, &head->cb, dma_fence_chain_cb)) {
> +		if (!dma_fence_add_callback_nested(f, &head->cb,
> +						   dma_fence_chain_cb)) {
>  			dma_fence_put(fence);
>  			return true;
>  		}
> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> index 066400ed8841..90a3d5121746 100644
> --- a/drivers/dma-buf/dma-fence.c
> +++ b/drivers/dma-buf/dma-fence.c
> @@ -610,6 +610,37 @@ void dma_fence_enable_sw_signaling(struct dma_fence *fence)
>  }
>  EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
>  
> +static int __dma_fence_add_callback(struct dma_fence *fence,
> +				    struct dma_fence_cb *cb,
> +				    dma_fence_func_t func,
> +				    int nest_level)
> +{
> +	unsigned long flags;
> +	int ret = 0;
> +
> +	if (WARN_ON(!fence || !func))
> +		return -EINVAL;
> +
> +	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
> +		INIT_LIST_HEAD(&cb->node);
> +		return -ENOENT;
> +	}
> +
> +	spin_lock_irqsave_nested(fence->lock, flags, 0);
Forgot to hook up nest_level here?
> +
> +	if (__dma_fence_enable_signaling(fence)) {
> +		cb->func = func;
> +		list_add_tail(&cb->node, &fence->cb_list);
> +	} else {
> +		INIT_LIST_HEAD(&cb->node);
> +		ret = -ENOENT;
> +	}
> +
> +	spin_unlock_irqrestore(fence->lock, flags);
> +
> +	return ret;
> +}
> +
>  /**
>   * dma_fence_add_callback - add a callback to be called when the fence
>   * is signaled
> @@ -635,33 +666,33 @@ EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
>  int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
>  			   dma_fence_func_t func)
>  {
> -	unsigned long flags;
> -	int ret = 0;
> -
> -	if (WARN_ON(!fence || !func))
> -		return -EINVAL;
> -
> -	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
> -		INIT_LIST_HEAD(&cb->node);
> -		return -ENOENT;
> -	}
> -
> -	spin_lock_irqsave(fence->lock, flags);
> -
> -	if (__dma_fence_enable_signaling(fence)) {
> -		cb->func = func;
> -		list_add_tail(&cb->node, &fence->cb_list);
> -	} else {
> -		INIT_LIST_HEAD(&cb->node);
> -		ret = -ENOENT;
> -	}
> -
> -	spin_unlock_irqrestore(fence->lock, flags);
> -
> -	return ret;
> +	return __dma_fence_add_callback(fence, cb, func, 0);
>  }
>  EXPORT_SYMBOL(dma_fence_add_callback);
>  

Other than that, I didn't investigate the nesting fails enough to say I can accurately review this. :)

~Maarten



^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 12:25     ` [Intel-gfx] " Maarten Lankhorst
@ 2021-11-30 12:31       ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 12:31 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld, Christian König


On 11/30/21 13:25, Maarten Lankhorst wrote:
> On 30-11-2021 13:19, Thomas Hellström wrote:
>> The locking order for taking two fence locks is implicitly defined in
>> at least two ways in the code:
>>
>> 1) Fence containers first and other fences next, which is defined by
>> the enable_signaling() callbacks of dma_fence_chain and
>> dma_fence_array.
>> 2) Reverse signal order, which is used by __i915_active_fence_set().
>>
>> Now 1) implies 2), except for the signal_on_any mode of dma_fence_array
>> and 2) does not imply 1), and also 1) makes locking order between
>> different containers confusing.
>>
>> Establish 2) and fix up the signal_on_any mode by calling
>> enable_signaling() on such fences unlocked at creation.
>>
>> Cc: linaro-mm-sig@lists.linaro.org
>> Cc: dri-devel@lists.freedesktop.org
>> Cc: Christian König <christian.koenig@amd.com>
>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> ---
>>   drivers/dma-buf/dma-fence-array.c | 13 +++--
>>   drivers/dma-buf/dma-fence-chain.c |  3 +-
>>   drivers/dma-buf/dma-fence.c       | 79 +++++++++++++++++++++----------
>>   include/linux/dma-fence.h         |  3 ++
>>   4 files changed, 69 insertions(+), 29 deletions(-)
>>
>> diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
>> index 3e07f961e2f3..0322b92909fe 100644
>> --- a/drivers/dma-buf/dma-fence-array.c
>> +++ b/drivers/dma-buf/dma-fence-array.c
>> @@ -84,8 +84,8 @@ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
>>   		 * insufficient).
>>   		 */
>>   		dma_fence_get(&array->base);
>> -		if (dma_fence_add_callback(array->fences[i], &cb[i].cb,
>> -					   dma_fence_array_cb_func)) {
>> +		if (dma_fence_add_callback_nested(array->fences[i], &cb[i].cb,
>> +						  dma_fence_array_cb_func)) {
>>   			int error = array->fences[i]->error;
>>   
>>   			dma_fence_array_set_pending_error(array, error);
>> @@ -158,6 +158,7 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
>>   {
>>   	struct dma_fence_array *array;
>>   	size_t size = sizeof(*array);
>> +	struct dma_fence *fence;
>>   
>>   	/* Allocate the callback structures behind the array. */
>>   	size += num_fences * sizeof(struct dma_fence_array_cb);
>> @@ -165,8 +166,9 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
>>   	if (!array)
>>   		return NULL;
>>   
>> +	fence = &array->base;
>>   	spin_lock_init(&array->lock);
>> -	dma_fence_init(&array->base, &dma_fence_array_ops, &array->lock,
>> +	dma_fence_init(fence, &dma_fence_array_ops, &array->lock,
>>   		       context, seqno);
>>   	init_irq_work(&array->work, irq_dma_fence_array_work);
>>   
>> @@ -174,7 +176,10 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
>>   	atomic_set(&array->num_pending, signal_on_any ? 1 : num_fences);
>>   	array->fences = fences;
>>   
>> -	array->base.error = PENDING_ERROR;
>> +	fence->error = PENDING_ERROR;
>> +
>> +	if (signal_on_any)
>> +		dma_fence_enable_sw_signaling(fence);
>>   
>>   	return array;
>>   }
>> diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
>> index 1b4cb3e5cec9..0518e53880f6 100644
>> --- a/drivers/dma-buf/dma-fence-chain.c
>> +++ b/drivers/dma-buf/dma-fence-chain.c
>> @@ -152,7 +152,8 @@ static bool dma_fence_chain_enable_signaling(struct dma_fence *fence)
>>   		struct dma_fence *f = chain ? chain->fence : fence;
>>   
>>   		dma_fence_get(f);
>> -		if (!dma_fence_add_callback(f, &head->cb, dma_fence_chain_cb)) {
>> +		if (!dma_fence_add_callback_nested(f, &head->cb,
>> +						   dma_fence_chain_cb)) {
>>   			dma_fence_put(fence);
>>   			return true;
>>   		}
>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>> index 066400ed8841..90a3d5121746 100644
>> --- a/drivers/dma-buf/dma-fence.c
>> +++ b/drivers/dma-buf/dma-fence.c
>> @@ -610,6 +610,37 @@ void dma_fence_enable_sw_signaling(struct dma_fence *fence)
>>   }
>>   EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
>>   
>> +static int __dma_fence_add_callback(struct dma_fence *fence,
>> +				    struct dma_fence_cb *cb,
>> +				    dma_fence_func_t func,
>> +				    int nest_level)
>> +{
>> +	unsigned long flags;
>> +	int ret = 0;
>> +
>> +	if (WARN_ON(!fence || !func))
>> +		return -EINVAL;
>> +
>> +	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>> +		INIT_LIST_HEAD(&cb->node);
>> +		return -ENOENT;
>> +	}
>> +
>> +	spin_lock_irqsave_nested(fence->lock, flags, 0);
> Forgot to hook up nest_level here?

Ah Yes :)
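The intended hunk is presumably just threading the parameter through
(a sketch against the quoted patch, not a tested change):

```c
	/*
	 * Pass the caller-supplied nesting level to lockdep instead of
	 * hardcoding subclass 0, so a fence lock taken from within
	 * another fence's enable_signaling() gets a distinct lockdep
	 * subclass.
	 */
	spin_lock_irqsave_nested(fence->lock, flags, nest_level);
```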


>> +
>> +	if (__dma_fence_enable_signaling(fence)) {
>> +		cb->func = func;
>> +		list_add_tail(&cb->node, &fence->cb_list);
>> +	} else {
>> +		INIT_LIST_HEAD(&cb->node);
>> +		ret = -ENOENT;
>> +	}
>> +
>> +	spin_unlock_irqrestore(fence->lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>>   /**
>>    * dma_fence_add_callback - add a callback to be called when the fence
>>    * is signaled
>> @@ -635,33 +666,33 @@ EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
>>   int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
>>   			   dma_fence_func_t func)
>>   {
>> -	unsigned long flags;
>> -	int ret = 0;
>> -
>> -	if (WARN_ON(!fence || !func))
>> -		return -EINVAL;
>> -
>> -	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>> -		INIT_LIST_HEAD(&cb->node);
>> -		return -ENOENT;
>> -	}
>> -
>> -	spin_lock_irqsave(fence->lock, flags);
>> -
>> -	if (__dma_fence_enable_signaling(fence)) {
>> -		cb->func = func;
>> -		list_add_tail(&cb->node, &fence->cb_list);
>> -	} else {
>> -		INIT_LIST_HEAD(&cb->node);
>> -		ret = -ENOENT;
>> -	}
>> -
>> -	spin_unlock_irqrestore(fence->lock, flags);
>> -
>> -	return ret;
>> +	return __dma_fence_add_callback(fence, cb, func, 0);
>>   }
>>   EXPORT_SYMBOL(dma_fence_add_callback);
>>   
> Other than that, I didn't investigate the nesting fails enough to say I can accurately review this. :)

Basically the problem is that within enable_signaling(), which is called 
with the dma_fence lock held, we take the dma_fence lock of another 
fence. If that other fence is a dma_fence_array, or a dma_fence_chain 
which in turn tries to lock a dma_fence_array, we hit a splat.
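That call chain can be sketched roughly like this (a simplified trace,
assuming an array whose member is a chain that in turn points at another
array; function names follow the quoted code, the nesting is the point):

```c
dma_fence_enable_sw_signaling(array)        /* takes array->lock       */
  dma_fence_array_enable_signaling()
    dma_fence_add_callback(chain)           /* takes chain->lock while
                                               array->lock is held     */
      dma_fence_chain_enable_signaling()
        dma_fence_add_callback(inner_array) /* third lock of the same
                                               lockdep class => splat  */
```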

But I'll update the commit message with a typical splat.

/Thomas


>
> ~Maarten
>
>

^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 12:19   ` [Intel-gfx] " Thomas Hellström
@ 2021-11-30 12:32     ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 12:32 UTC (permalink / raw)
  To: intel-gfx, dri-devel; +Cc: linaro-mm-sig, matthew.auld, Christian König


On 11/30/21 13:19, Thomas Hellström wrote:
> The locking order for taking two fence locks is implicitly defined in
> at least two ways in the code:
>
> 1) Fence containers first and other fences next, which is defined by
> the enable_signaling() callbacks of dma_fence_chain and
> dma_fence_array.
> 2) Reverse signal order, which is used by __i915_active_fence_set().
>
> Now 1) implies 2), except for the signal_on_any mode of dma_fence_array
> and 2) does not imply 1), and also 1) makes locking order between
> different containers confusing.
>
> Establish 2) and fix up the signal_on_any mode by calling
> enable_signaling() on such fences unlocked at creation.
>
> Cc: linaro-mm-sig@lists.linaro.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: Christian König <christian.koenig@amd.com>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>   drivers/dma-buf/dma-fence-array.c | 13 +++--
>   drivers/dma-buf/dma-fence-chain.c |  3 +-
>   drivers/dma-buf/dma-fence.c       | 79 +++++++++++++++++++++----------
>   include/linux/dma-fence.h         |  3 ++
>   4 files changed, 69 insertions(+), 29 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
> index 3e07f961e2f3..0322b92909fe 100644
> --- a/drivers/dma-buf/dma-fence-array.c
> +++ b/drivers/dma-buf/dma-fence-array.c
> @@ -84,8 +84,8 @@ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
>   		 * insufficient).
>   		 */
>   		dma_fence_get(&array->base);
> -		if (dma_fence_add_callback(array->fences[i], &cb[i].cb,
> -					   dma_fence_array_cb_func)) {
> +		if (dma_fence_add_callback_nested(array->fences[i], &cb[i].cb,
> +						  dma_fence_array_cb_func)) {
>   			int error = array->fences[i]->error;
>   
>   			dma_fence_array_set_pending_error(array, error);
> @@ -158,6 +158,7 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
>   {
>   	struct dma_fence_array *array;
>   	size_t size = sizeof(*array);
> +	struct dma_fence *fence;
>   
>   	/* Allocate the callback structures behind the array. */
>   	size += num_fences * sizeof(struct dma_fence_array_cb);
> @@ -165,8 +166,9 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
>   	if (!array)
>   		return NULL;
>   
> +	fence = &array->base;
>   	spin_lock_init(&array->lock);
> -	dma_fence_init(&array->base, &dma_fence_array_ops, &array->lock,
> +	dma_fence_init(fence, &dma_fence_array_ops, &array->lock,
>   		       context, seqno);
>   	init_irq_work(&array->work, irq_dma_fence_array_work);
>   
> @@ -174,7 +176,10 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
>   	atomic_set(&array->num_pending, signal_on_any ? 1 : num_fences);
>   	array->fences = fences;
>   
> -	array->base.error = PENDING_ERROR;
> +	fence->error = PENDING_ERROR;
> +
> +	if (signal_on_any)
> +		dma_fence_enable_sw_signaling(fence);

Oh, this looks strange. This was meant to call 
dma_fence_array_enable_signaling() without the lock held here.

/Thomas



^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [RFC PATCH 0/2] Attempt to avoid dma-fence-[chain|array] lockdep splats
  2021-11-30 12:19 ` [Intel-gfx] " Thomas Hellström
@ 2021-11-30 12:36   ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-11-30 12:36 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel; +Cc: linaro-mm-sig, matthew.auld

Am 30.11.21 um 13:19 schrieb Thomas Hellström:
> Introducing more usage of dma-fence-chain and dma-fence-array in the
> i915 driver we start to hit lockdep splats due to the recursive fence
> locking in the dma-fence-chain and dma-fence-array containers.
> This is a humble suggestion to try to establish a dma-fence locking order
> (patch 1) and to avoid excessive recursive locking in these containers
> (patch 2)

Well completely NAK to this.

These splats are intentional notes that something in the driver code is 
wrong (or we messed up the chain and array containers somehow).

Those two containers are intentionally crafted in a way which allows to 
avoid any dependency between their spinlocks. So as long as you 
correctly use them you should never see a splat.

Please provide the lockdep splat so that we can analyze the problem.

Thanks,
Christian.

>
> Thomas Hellström (2):
>    dma-fence: Avoid establishing a locking order between fence classes
>    dma-fence: Avoid excessive recursive fence locking from
>      enable_signaling() callbacks
>
>   drivers/dma-buf/dma-fence-array.c | 23 +++++++--
>   drivers/dma-buf/dma-fence-chain.c | 12 ++++-
>   drivers/dma-buf/dma-fence.c       | 79 +++++++++++++++++++++----------
>   include/linux/dma-fence.h         |  4 ++
>   4 files changed, 89 insertions(+), 29 deletions(-)
>
> Cc: linaro-mm-sig@lists.linaro.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: Christian König <christian.koenig@amd.com>
>


^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 12:31       ` [Intel-gfx] " Thomas Hellström
@ 2021-11-30 12:42         ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-11-30 12:42 UTC (permalink / raw)
  To: Thomas Hellström, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

Am 30.11.21 um 13:31 schrieb Thomas Hellström:
> [SNIP]
>> Other than that, I didn't investigate the nesting fails enough to say 
>> I can accurately review this. :)
>
> Basically the problem is that within enable_signaling() which is 
> called with the dma_fence lock held, we take the dma_fence lock of 
> another fence. If that other fence is a dma_fence_array, or a 
> dma_fence_chain which in turn tries to lock a dma_fence_array we hit a 
> splat.

Yeah, I already thought that you constructed something like that.

You get the splat because what you do here is illegal: you can't mix 
dma_fence_array and dma_fence_chain like this, or you can end up with 
stack corruption.

Regards,
Christian.

>
> But I'll update the commit message with a typical splat.
>
> /Thomas


^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 12:42         ` [Intel-gfx] " Christian König
@ 2021-11-30 12:56           ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 12:56 UTC (permalink / raw)
  To: Christian König, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld


On 11/30/21 13:42, Christian König wrote:
> Am 30.11.21 um 13:31 schrieb Thomas Hellström:
>> [SNIP]
>>> Other than that, I didn't investigate the nesting fails enough to 
>>> say I can accurately review this. :)
>>
>> Basically the problem is that within enable_signaling() which is 
>> called with the dma_fence lock held, we take the dma_fence lock of 
>> another fence. If that other fence is a dma_fence_array, or a 
>> dma_fence_chain which in turn tries to lock a dma_fence_array we hit 
>> a splat.
>
> Yeah, I already thought that you constructed something like that.
>
> You get the splat because what you do here is illegal, you can't mix 
> dma_fence_array and dma_fence_chain like this or you can end up in a 
> stack corruption.

Hmm. OK, so what is the stack corruption? Is it that enable_signaling() 
will end up in endless recursion? If so, wouldn't it be more usable if 
we broke that recursion chain and allowed more general use?

Also what are the mixing rules between these? Never use a 
dma-fence-chain as one of the array fences and never use a 
dma-fence-array as a dma-fence-chain fence?

/Thomas




>
> Regards,
> Christian.
>
>>
>> But I'll update the commit message with a typical splat.
>>
>> /Thomas
>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Attempt to avoid dma-fence-[chain|array] lockdep splats
  2021-11-30 12:19 ` [Intel-gfx] " Thomas Hellström
                   ` (3 preceding siblings ...)
  (?)
@ 2021-11-30 13:05 ` Patchwork
  -1 siblings, 0 replies; 65+ messages in thread
From: Patchwork @ 2021-11-30 13:05 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

== Series Details ==

Series: Attempt to avoid dma-fence-[chain|array] lockdep splats
URL   : https://patchwork.freedesktop.org/series/97410/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.



^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 12:56           ` [Intel-gfx] " Thomas Hellström
@ 2021-11-30 13:26             ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-11-30 13:26 UTC (permalink / raw)
  To: Thomas Hellström, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

Am 30.11.21 um 13:56 schrieb Thomas Hellström:
>
> On 11/30/21 13:42, Christian König wrote:
>> Am 30.11.21 um 13:31 schrieb Thomas Hellström:
>>> [SNIP]
>>>> Other than that, I didn't investigate the nesting fails enough to 
>>>> say I can accurately review this. :)
>>>
>>> Basically the problem is that within enable_signaling() which is 
>>> called with the dma_fence lock held, we take the dma_fence lock of 
>>> another fence. If that other fence is a dma_fence_array, or a 
>>> dma_fence_chain which in turn tries to lock a dma_fence_array we hit 
>>> a splat.
>>
>> Yeah, I already thought that you constructed something like that.
>>
>> You get the splat because what you do here is illegal, you can't mix 
>> dma_fence_array and dma_fence_chain like this or you can end up in a 
>> stack corruption.
>
> Hmm. Ok, so what is the stack corruption, is it that the 
> enable_signaling() will end up with endless recursion? If so, wouldn't 
> it be more usable we break that recursion chain and allow a more 
> general use?

The problem is that this is not easily possible for dma_fence_array 
containers. Just imagine that you drop the last reference to the 
contained fences during dma_fence_array destruction: if any of the 
contained fences is another container, you can easily run into recursion 
and, with that, stack corruption.

That's one of the major reasons I came up with the dma_fence_chain 
container. This one you can chain any number of elements together 
without running into any recursion.
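[Editor's note: the contrast can be sketched with a userspace toy using plain malloc'd nodes — not the kernel dma_fence API. Recursive, array-style teardown costs one stack frame per nesting level, while a chain-style iterative walk uses constant stack regardless of length.]

```c
/* Userspace toy contrasting the two teardown shapes -- plain nodes,
 * NOT the kernel dma_fence API. */
#include <stdlib.h>

struct node {
	struct node *next;	/* a contained / previous fence reference */
};

static struct node *build(size_t len)
{
	struct node *head = NULL;

	while (len--) {
		struct node *n = malloc(sizeof(*n));

		n->next = head;
		head = n;
	}
	return head;
}

static size_t length(const struct node *n)
{
	size_t len = 0;

	for (; n; n = n->next)
		len++;
	return len;
}

/* Array-style teardown: dropping the container drops what it contains,
 * one stack frame per nesting level -- deep nesting overruns the stack. */
static void free_recursive(struct node *n)
{
	if (!n)
		return;
	free_recursive(n->next);
	free(n);
}

/* Chain-style teardown: dma_fence_chain instead walks its links
 * iteratively, so arbitrarily long chains use constant stack. */
static void free_iterative(struct node *n)
{
	while (n) {
		struct node *prev = n->next;

		free(n);
		n = prev;
	}
}
```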

> Also what are the mixing rules between these? Never use a 
> dma-fence-chain as one of the array fences and never use a 
> dma-fence-array as a dma-fence-chain fence?

You can't add any other container to a dma_fence_array, neither other 
dma_fence_array instances nor dma_fence_chain instances.

IIRC at least technically a dma_fence_chain can contain a 
dma_fence_array if you absolutely need that, but Daniel, Jason and I 
already had the same discussion a while back and came to the conclusion 
to avoid that as well if possible.

Regards,
Christian.

>
> /Thomas
>
>
>
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> But I'll update the commit message with a typical splat.
>>>
>>> /Thomas
>>


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for Attempt to avoid dma-fence-[chain|array] lockdep splats
  2021-11-30 12:19 ` [Intel-gfx] " Thomas Hellström
                   ` (4 preceding siblings ...)
  (?)
@ 2021-11-30 13:48 ` Patchwork
  -1 siblings, 0 replies; 65+ messages in thread
From: Patchwork @ 2021-11-30 13:48 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

== Series Details ==

Series: Attempt to avoid dma-fence-[chain|array] lockdep splats
URL   : https://patchwork.freedesktop.org/series/97410/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10943 -> Patchwork_21700
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/index.html

Participating hosts (38 -> 33)
------------------------------

  Missing    (5): bat-dg1-6 bat-dg1-5 fi-bsw-cyan bat-jsl-2 bat-jsl-1 

Known issues
------------

  Here are the changes found in Patchwork_21700 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@workarounds:
    - fi-rkl-guc:         [PASS][1] -> [INCOMPLETE][2] ([i915#4433])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/fi-rkl-guc/igt@i915_selftest@live@workarounds.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/fi-rkl-guc/igt@i915_selftest@live@workarounds.html

  * igt@kms_frontbuffer_tracking@basic:
    - fi-cml-u2:          [PASS][3] -> [DMESG-WARN][4] ([i915#4269])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/fi-cml-u2/igt@kms_frontbuffer_tracking@basic.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/fi-cml-u2/igt@kms_frontbuffer_tracking@basic.html

  
#### Warnings ####

  * igt@runner@aborted:
    - fi-rkl-guc:         [FAIL][5] ([i915#3928] / [i915#4312]) -> [FAIL][6] ([i915#2426] / [i915#3928] / [i915#4312])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/fi-rkl-guc/igt@runner@aborted.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/fi-rkl-guc/igt@runner@aborted.html

  
  [i915#2426]: https://gitlab.freedesktop.org/drm/intel/issues/2426
  [i915#3928]: https://gitlab.freedesktop.org/drm/intel/issues/3928
  [i915#4269]: https://gitlab.freedesktop.org/drm/intel/issues/4269
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4433]: https://gitlab.freedesktop.org/drm/intel/issues/4433


Build changes
-------------

  * Linux: CI_DRM_10943 -> Patchwork_21700

  CI-20190529: 20190529
  CI_DRM_10943: f506b61984977c7785a54b8860720bfb5334aa08 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6295: 2d7f671b872ed856a97957051098974be2380019 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_21700: 185999e31f8f0463e2ad850652473fc03f82b17b @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

185999e31f8f dma-fence: Avoid excessive recursive fence locking from enable_signaling() callbacks
641529efe715 dma-fence: Avoid establishing a locking order between fence classes

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/index.html

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 13:26             ` [Intel-gfx] " Christian König
@ 2021-11-30 14:35               ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 14:35 UTC (permalink / raw)
  To: Christian König, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

On Tue, 2021-11-30 at 14:26 +0100, Christian König wrote:
> Am 30.11.21 um 13:56 schrieb Thomas Hellström:
> > 
> > On 11/30/21 13:42, Christian König wrote:
> > > Am 30.11.21 um 13:31 schrieb Thomas Hellström:
> > > > [SNIP]
> > > > > Other than that, I didn't investigate the nesting fails
> > > > > enough to 
> > > > > say I can accurately review this. :)
> > > > 
> > > > Basically the problem is that within enable_signaling() which
> > > > is 
> > > > called with the dma_fence lock held, we take the dma_fence lock
> > > > of 
> > > > another fence. If that other fence is a dma_fence_array, or a 
> > > > dma_fence_chain which in turn tries to lock a dma_fence_array
> > > > we hit 
> > > > a splat.
> > > 
> > > Yeah, I already thought that you constructed something like that.
> > > 
> > > You get the splat because what you do here is illegal, you can't
> > > mix 
> > > dma_fence_array and dma_fence_chain like this or you can end up
> > > in a 
> > > stack corruption.
> > 
> > Hmm. Ok, so what is the stack corruption, is it that the 
> > enable_signaling() will end up with endless recursion? If so,
> > wouldn't 
> > it be more usable we break that recursion chain and allow a more 
> > general use?
> 
> The problem is that this is not easily possible for dma_fence_array 
> containers. Just imagine that you drop the last reference to the 
> containing fences during dma_fence_array destruction if any of the 
> contained fences is another container you can easily run into
> recursion 
> and with that stack corruption.

Indeed, that would require some deeper surgery.

> 
> That's one of the major reasons I came up with the dma_fence_chain 
> container. This one you can chain any number of elements together 
> without running into any recursion.
> 
> > Also what are the mixing rules between these? Never use a 
> > dma-fence-chain as one of the array fences and never use a 
> > dma-fence-array as a dma-fence-chain fence?
> 
> You can't add any other container to a dma_fence_array, neither other
> dma_fence_array instances nor dma_fence_chain instances.
> 
> IIRC at least technically a dma_fence_chain can contain a 
> dma_fence_array if you absolutely need that, but Daniel, Jason and I 
> already had the same discussion a while back and came to the
> conclusion 
> to avoid that as well if possible.

Yes, this is actually the use-case. But what I can't easily guarantee
is that that dma_fence_chain isn't fed into a dma_fence_array somewhere
else. How do you typically avoid that?

Meanwhile I guess I need to take a different approach in the driver to
avoid this altogether.

/Thomas


> 
> Regards,
> Christian.
> 
> > 
> > /Thomas
> > 
> > 
> > 
> > 
> > > 
> > > Regards,
> > > Christian.
> > > 
> > > > 
> > > > But I'll update the commit message with a typical splat.
> > > > 
> > > > /Thomas
> > > 
> 



^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 14:35               ` [Intel-gfx] " Thomas Hellström
@ 2021-11-30 15:02                 ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-11-30 15:02 UTC (permalink / raw)
  To: Thomas Hellström, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

Am 30.11.21 um 15:35 schrieb Thomas Hellström:
> On Tue, 2021-11-30 at 14:26 +0100, Christian König wrote:
>> Am 30.11.21 um 13:56 schrieb Thomas Hellström:
>>> On 11/30/21 13:42, Christian König wrote:
>>>> Am 30.11.21 um 13:31 schrieb Thomas Hellström:
>>>>> [SNIP]
>>>>>> Other than that, I didn't investigate the nesting fails
>>>>>> enough to
>>>>>> say I can accurately review this. :)
>>>>> Basically the problem is that within enable_signaling() which
>>>>> is
>>>>> called with the dma_fence lock held, we take the dma_fence lock
>>>>> of
>>>>> another fence. If that other fence is a dma_fence_array, or a
>>>>> dma_fence_chain which in turn tries to lock a dma_fence_array
>>>>> we hit
>>>>> a splat.
>>>> Yeah, I already thought that you constructed something like that.
>>>>
>>>> You get the splat because what you do here is illegal, you can't
>>>> mix
>>>> dma_fence_array and dma_fence_chain like this or you can end up
>>>> in a
>>>> stack corruption.
>>> Hmm. Ok, so what is the stack corruption, is it that the
>>> enable_signaling() will end up with endless recursion? If so,
>>> wouldn't
>>> it be more usable we break that recursion chain and allow a more
>>> general use?
>> The problem is that this is not easily possible for dma_fence_array
>> containers. Just imagine that you drop the last reference to the
>> containing fences during dma_fence_array destruction if any of the
>> contained fences is another container you can easily run into
>> recursion
>> and with that stack corruption.
> Indeed, that would require some deeper surgery.
>
>> That's one of the major reasons I came up with the dma_fence_chain
>> container. This one you can chain any number of elements together
>> without running into any recursion.
>>
>>> Also what are the mixing rules between these? Never use a
>>> dma-fence-chain as one of the array fences and never use a
>>> dma-fence-array as a dma-fence-chain fence?
>> You can't add any other container to a dma_fence_array, neither other
>> dma_fence_array instances nor dma_fence_chain instances.
>>
>> IIRC at least technically a dma_fence_chain can contain a
>> dma_fence_array if you absolutely need that, but Daniel, Jason and I
>> already had the same discussion a while back and came to the
>> conclusion
>> to avoid that as well if possible.
> Yes, this is actually the use-case. But what I can't easily guarantee
> is that that dma_fence_chain isn't fed into a dma_fence_array somewhere
> else. How do you typically avoid that?
>
> Meanwhile I guess I need to take a different approach in the driver to
> avoid this altogether.

Jason and I came up with a deep dive iterator for his use case, but I 
think we don't want to use that any more after my dma_resv rework.

In other words, when you need to create a new dma_fence_array you 
flatten out the existing construct, which is at worst 
dma_fence_chain->dma_fence_array->dma_fence.
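[Editor's note: a userspace sketch of that flattening, with hypothetical types rather than the kernel dma_fence API. Assuming the mixing rules above — arrays hold only leaves, and a chain link may carry a leaf or an array — collecting the leaf fences needs no unbounded recursion, and a new flat array can be built from them instead of nesting containers.]

```c
/* Hypothetical container model, NOT the kernel dma_fence API. */
#include <stddef.h>

enum kind { LEAF, ARRAY, CHAIN };

struct fence {
	enum kind kind;
	struct fence **children;	/* ARRAY: children[0..n_children) */
	size_t n_children;
	struct fence *prev;		/* CHAIN: previous link */
	struct fence *contained;	/* CHAIN: fence carried by this link */
};

/* Write the leaf fences of the construct into out[]; return the count.
 * A new, flat array would then be built from out[] instead of nesting
 * the containers themselves. */
static size_t flatten(struct fence *f, struct fence **out)
{
	size_t n = 0;

	/* Walk chain links iteratively; non-chain input is a single step. */
	for (; f; f = (f->kind == CHAIN) ? f->prev : NULL) {
		struct fence *c = (f->kind == CHAIN) ? f->contained : f;

		if (!c)
			continue;
		if (c->kind == ARRAY) {
			/* per the mixing rules, arrays hold only leaves */
			for (size_t i = 0; i < c->n_children; i++)
				out[n++] = c->children[i];
		} else {
			out[n++] = c;
		}
	}
	return n;
}
```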

Regards,
Christian.

>
> /Thomas
>
>
>> Regards,
>> Christian.
>>
>>> /Thomas
>>>
>>>
>>>
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> But I'll update the commit message with a typical splat.
>>>>>
>>>>> /Thomas
>


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for Attempt to avoid dma-fence-[chain|array] lockdep splats
  2021-11-30 12:19 ` [Intel-gfx] " Thomas Hellström
                   ` (5 preceding siblings ...)
  (?)
@ 2021-11-30 17:47 ` Patchwork
  -1 siblings, 0 replies; 65+ messages in thread
From: Patchwork @ 2021-11-30 17:47 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

== Series Details ==

Series: Attempt to avoid dma-fence-[chain|array] lockdep splats
URL   : https://patchwork.freedesktop.org/series/97410/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10943_full -> Patchwork_21700_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (11 -> 11)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in Patchwork_21700_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_isolation@preservation-s3@vcs0:
    - shard-skl:          [PASS][1] -> [INCOMPLETE][2] ([i915#198]) +1 similar issue
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl9/igt@gem_ctx_isolation@preservation-s3@vcs0.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl6/igt@gem_ctx_isolation@preservation-s3@vcs0.html

  * igt@gem_exec_balancer@parallel-out-fence:
    - shard-iclb:         NOTRUN -> [SKIP][3] ([i915#4525])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb5/igt@gem_exec_balancer@parallel-out-fence.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-glk:          [PASS][4] -> [FAIL][5] ([i915#2846])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-glk2/igt@gem_exec_fair@basic-deadline.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-glk7/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-tglb:         [PASS][6] -> [FAIL][7] ([i915#2842])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-tglb2/igt@gem_exec_fair@basic-none-share@rcs0.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-tglb5/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs0:
    - shard-apl:          [PASS][8] -> [FAIL][9] ([i915#2842])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-apl3/igt@gem_exec_fair@basic-none@vcs0.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl8/igt@gem_exec_fair@basic-none@vcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-glk:          [PASS][10] -> [FAIL][11] ([i915#2842]) +1 similar issue
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-glk9/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-glk3/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_userptr_blits@input-checking:
    - shard-skl:          NOTRUN -> [DMESG-WARN][12] ([i915#3002])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl6/igt@gem_userptr_blits@input-checking.html

  * igt@gem_userptr_blits@vma-merge:
    - shard-skl:          NOTRUN -> [FAIL][13] ([i915#3318])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl6/igt@gem_userptr_blits@vma-merge.html

  * igt@i915_suspend@sysfs-reader:
    - shard-apl:          [PASS][14] -> [DMESG-WARN][15] ([i915#180]) +3 similar issues
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-apl6/igt@i915_suspend@sysfs-reader.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl3/igt@i915_suspend@sysfs-reader.html

  * igt@kms_async_flips@alternate-sync-async-flip:
    - shard-iclb:         [PASS][16] -> [FAIL][17] ([i915#2521])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-iclb8/igt@kms_async_flips@alternate-sync-async-flip.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb1/igt@kms_async_flips@alternate-sync-async-flip.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
    - shard-skl:          NOTRUN -> [FAIL][18] ([i915#3743]) +1 similar issue
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl5/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html

  * igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_rc_ccs_cc:
    - shard-skl:          NOTRUN -> [SKIP][19] ([fdo#109271] / [i915#3886]) +10 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl5/igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-c-crc-primary-rotation-180-y_tiled_gen12_mc_ccs:
    - shard-kbl:          NOTRUN -> [SKIP][20] ([fdo#109271] / [i915#3886]) +2 similar issues
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl2/igt@kms_ccs@pipe-c-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-d-bad-aux-stride-y_tiled_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][21] ([i915#3689])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-tglb3/igt@kms_ccs@pipe-d-bad-aux-stride-y_tiled_ccs.html
    - shard-iclb:         NOTRUN -> [SKIP][22] ([fdo#109278])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb5/igt@kms_ccs@pipe-d-bad-aux-stride-y_tiled_ccs.html

  * igt@kms_color@pipe-a-ctm-red-to-blue:
    - shard-skl:          NOTRUN -> [DMESG-WARN][23] ([i915#1982])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl6/igt@kms_color@pipe-a-ctm-red-to-blue.html

  * igt@kms_color_chamelium@pipe-a-ctm-negative:
    - shard-kbl:          NOTRUN -> [SKIP][24] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl2/igt@kms_color_chamelium@pipe-a-ctm-negative.html

  * igt@kms_color_chamelium@pipe-b-ctm-max:
    - shard-skl:          NOTRUN -> [SKIP][25] ([fdo#109271] / [fdo#111827]) +11 similar issues
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl4/igt@kms_color_chamelium@pipe-b-ctm-max.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
    - shard-kbl:          [PASS][26] -> [DMESG-WARN][27] ([i915#180]) +6 similar issues
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl2/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl7/igt@kms_cursor_crc@pipe-a-cursor-suspend.html

  * igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy:
    - shard-iclb:         NOTRUN -> [SKIP][28] ([fdo#109274] / [fdo#109278])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb5/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html
    - shard-tglb:         NOTRUN -> [SKIP][29] ([fdo#111825])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-tglb3/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html

  * igt@kms_flip@2x-busy-flip:
    - shard-kbl:          NOTRUN -> [SKIP][30] ([fdo#109271]) +34 similar issues
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl2/igt@kms_flip@2x-busy-flip.html

  * igt@kms_flip@wf_vblank-ts-check@c-edp1:
    - shard-skl:          NOTRUN -> [FAIL][31] ([i915#2122])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl5/igt@kms_flip@wf_vblank-ts-check@c-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs:
    - shard-skl:          NOTRUN -> [INCOMPLETE][32] ([i915#3699])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl5/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-blt:
    - shard-skl:          NOTRUN -> [SKIP][33] ([fdo#109271]) +144 similar issues
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-mmap-cpu:
    - shard-iclb:         NOTRUN -> [SKIP][34] ([fdo#109280]) +2 similar issues
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb5/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-mmap-cpu.html

  * igt@kms_hdr@bpc-switch:
    - shard-skl:          [PASS][35] -> [FAIL][36] ([i915#1188])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl10/igt@kms_hdr@bpc-switch.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl8/igt@kms_hdr@bpc-switch.html

  * igt@kms_pipe_b_c_ivb@disable-pipe-b-enable-pipe-c:
    - shard-iclb:         NOTRUN -> [SKIP][37] ([fdo#109289])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb5/igt@kms_pipe_b_c_ivb@disable-pipe-b-enable-pipe-c.html
    - shard-tglb:         NOTRUN -> [SKIP][38] ([fdo#109289])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-tglb3/igt@kms_pipe_b_c_ivb@disable-pipe-b-enable-pipe-c.html

  * igt@kms_plane_alpha_blend@pipe-a-constant-alpha-max:
    - shard-skl:          NOTRUN -> [FAIL][39] ([fdo#108145] / [i915#265]) +2 similar issues
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl4/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-max.html

  * igt@kms_plane_alpha_blend@pipe-b-coverage-7efc:
    - shard-skl:          [PASS][40] -> [FAIL][41] ([fdo#108145] / [i915#265])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl1/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl6/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-basic:
    - shard-kbl:          NOTRUN -> [FAIL][42] ([fdo#108145] / [i915#265])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl2/igt@kms_plane_alpha_blend@pipe-c-alpha-basic.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-2:
    - shard-skl:          NOTRUN -> [SKIP][43] ([fdo#109271] / [i915#658]) +3 similar issues
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl4/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-2.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2:
    - shard-kbl:          NOTRUN -> [SKIP][44] ([fdo#109271] / [i915#658])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl2/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2.html

  * igt@kms_psr@psr2_sprite_mmap_gtt:
    - shard-iclb:         [PASS][45] -> [SKIP][46] ([fdo#109441]) +2 similar issues
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-iclb2/igt@kms_psr@psr2_sprite_mmap_gtt.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb4/igt@kms_psr@psr2_sprite_mmap_gtt.html

  * igt@perf@polling-parameterized:
    - shard-tglb:         [PASS][47] -> [FAIL][48] ([i915#1542])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-tglb5/igt@perf@polling-parameterized.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-tglb6/igt@perf@polling-parameterized.html

  * igt@sysfs_clients@fair-0:
    - shard-skl:          NOTRUN -> [SKIP][49] ([fdo#109271] / [i915#2994]) +1 similar issue
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl8/igt@sysfs_clients@fair-0.html

  
#### Possible fixes ####

  * igt@gem_eio@unwedge-stress:
    - {shard-rkl}:        ([TIMEOUT][50], [TIMEOUT][51]) ([i915#3063]) -> [PASS][52]
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@gem_eio@unwedge-stress.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-4/igt@gem_eio@unwedge-stress.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-iclb:         [FAIL][53] ([i915#2842]) -> [PASS][54]
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-iclb1/igt@gem_exec_fair@basic-none-share@rcs0.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb5/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-none@vecs0:
    - shard-apl:          [FAIL][55] ([i915#2842]) -> [PASS][56]
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-apl3/igt@gem_exec_fair@basic-none@vecs0.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl8/igt@gem_exec_fair@basic-none@vecs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-tglb:         [FAIL][57] ([i915#2842]) -> [PASS][58]
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-tglb5/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-tglb1/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@i915_pm_backlight@fade_with_suspend:
    - {shard-rkl}:        [SKIP][59] ([i915#3012]) -> [PASS][60]
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@i915_pm_backlight@fade_with_suspend.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@i915_pm_backlight@fade_with_suspend.html

  * igt@i915_pm_rpm@pm-tiling:
    - {shard-rkl}:        [SKIP][61] ([fdo#109308]) -> [PASS][62] +1 similar issue
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@i915_pm_rpm@pm-tiling.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@i915_pm_rpm@pm-tiling.html

  * igt@i915_suspend@forcewake:
    - shard-kbl:          [DMESG-WARN][63] ([i915#180]) -> [PASS][64]
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl1/igt@i915_suspend@forcewake.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl2/igt@i915_suspend@forcewake.html

  * igt@kms_async_flips@alternate-sync-async-flip:
    - shard-skl:          [FAIL][65] ([i915#2521]) -> [PASS][66]
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl8/igt@kms_async_flips@alternate-sync-async-flip.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl9/igt@kms_async_flips@alternate-sync-async-flip.html

  * igt@kms_atomic@plane-overlay-legacy:
    - {shard-rkl}:        ([SKIP][67], [SKIP][68]) ([i915#1845]) -> [PASS][69] +1 similar issue
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-4/igt@kms_atomic@plane-overlay-legacy.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_atomic@plane-overlay-legacy.html
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_atomic@plane-overlay-legacy.html

  * igt@kms_color@pipe-b-ctm-0-75:
    - {shard-rkl}:        [SKIP][70] ([i915#1149] / [i915#1849] / [i915#4070]) -> [PASS][71] +1 similar issue
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_color@pipe-b-ctm-0-75.html
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_color@pipe-b-ctm-0-75.html

  * igt@kms_cursor_crc@pipe-a-cursor-256x85-sliding:
    - {shard-rkl}:        [SKIP][72] ([fdo#112022] / [i915#4070]) -> [PASS][73] +2 similar issues
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_cursor_crc@pipe-a-cursor-256x85-sliding.html
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_cursor_crc@pipe-a-cursor-256x85-sliding.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
    - shard-skl:          [FAIL][74] ([i915#2346]) -> [PASS][75]
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl4/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl5/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@flip-vs-cursor-legacy:
    - {shard-rkl}:        [SKIP][76] ([fdo#111825] / [i915#4070]) -> [PASS][77] +2 similar issues
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html

  * igt@kms_draw_crc@draw-method-xrgb2101010-mmap-wc-ytiled:
    - {shard-rkl}:        [SKIP][78] ([fdo#111314]) -> [PASS][79] +4 similar issues
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-wc-ytiled.html
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-wc-ytiled.html

  * igt@kms_flip@flip-vs-absolute-wf_vblank-interruptible@a-edp1:
    - shard-skl:          [FAIL][80] ([i915#2122]) -> [PASS][81]
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl5/igt@kms_flip@flip-vs-absolute-wf_vblank-interruptible@a-edp1.html
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl9/igt@kms_flip@flip-vs-absolute-wf_vblank-interruptible@a-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank@b-dp1:
    - shard-apl:          [FAIL][82] ([i915#79]) -> [PASS][83] +1 similar issue
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-apl2/igt@kms_flip@flip-vs-expired-vblank@b-dp1.html
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl6/igt@kms_flip@flip-vs-expired-vblank@b-dp1.html

  * igt@kms_flip@flip-vs-expired-vblank@c-edp1:
    - shard-skl:          [FAIL][84] ([i915#79]) -> [PASS][85]
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl6/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl4/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile:
    - shard-iclb:         [SKIP][86] ([i915#3701]) -> [PASS][87]
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-iclb2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb4/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-render:
    - {shard-rkl}:        ([SKIP][88], [SKIP][89]) ([i915#1849] / [i915#4098]) -> [PASS][90] +3 similar issues
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-4/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-render.html
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-render.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-render.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-render:
    - {shard-rkl}:        [SKIP][91] ([i915#1849]) -> [PASS][92] +15 similar issues
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-render.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-indfb-draw-render.html

  * igt@kms_hdr@bpc-switch-dpms:
    - shard-skl:          [FAIL][93] ([i915#1188]) -> [PASS][94]
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl4/igt@kms_hdr@bpc-switch-dpms.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl5/igt@kms_hdr@bpc-switch-dpms.html

  * igt@kms_invalid_mode@uint-max-clock:
    - {shard-rkl}:        [SKIP][95] ([i915#4278]) -> [PASS][96]
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_invalid_mode@uint-max-clock.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_invalid_mode@uint-max-clock.html

  * igt@kms_plane@pixel-format-source-clamping@pipe-b-planes:
    - {shard-rkl}:        [SKIP][97] ([i915#3558]) -> [PASS][98] +1 similar issue
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_plane@pixel-format-source-clamping@pipe-b-planes.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_plane@pixel-format-source-clamping@pipe-b-planes.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb:
    - {shard-rkl}:        [SKIP][99] ([i915#1849] / [i915#4070]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
    - shard-skl:          [FAIL][101] ([fdo#108145] / [i915#265]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl4/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl5/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html

  * igt@kms_plane_multiple@atomic-pipe-a-tiling-x:
    - {shard-rkl}:        [SKIP][103] ([i915#3558] / [i915#4070]) -> [PASS][104]
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_plane_multiple@atomic-pipe-a-tiling-x.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_plane_multiple@atomic-pipe-a-tiling-x.html

  * igt@kms_psr@primary_mmap_gtt:
    - {shard-rkl}:        [SKIP][105] ([i915#1072]) -> [PASS][106] +1 similar issue
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_psr@primary_mmap_gtt.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_psr@primary_mmap_gtt.html

  * igt@kms_vblank@pipe-a-ts-continuation-idle:
    - {shard-rkl}:        [SKIP][107] ([i915#1845]) -> [PASS][108] +8 similar issues
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@kms_vblank@pipe-a-ts-continuation-idle.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@kms_vblank@pipe-a-ts-continuation-idle.html

  * igt@perf@polling-small-buf:
    - {shard-rkl}:        [FAIL][109] ([i915#1722]) -> [PASS][110]
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@perf@polling-small-buf.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@perf@polling-small-buf.html

  * igt@prime_vgem@basic-fence-flip:
    - {shard-rkl}:        [SKIP][111] ([i915#3708]) -> [PASS][112]
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-rkl-1/igt@prime_vgem@basic-fence-flip.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-rkl-6/igt@prime_vgem@basic-fence-flip.html

  
#### Warnings ####

  * igt@i915_pm_dc@dc9-dpms:
    - shard-iclb:         [FAIL][113] ([i915#4275]) -> [SKIP][114] ([i915#4281])
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-iclb2/igt@i915_pm_dc@dc9-dpms.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb3/igt@i915_pm_dc@dc9-dpms.html

  * igt@i915_pm_rc6_residency@rc6-idle:
    - shard-iclb:         [WARN][115] ([i915#1804] / [i915#2684]) -> [WARN][116] ([i915#2684])
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-iclb3/igt@i915_pm_rc6_residency@rc6-idle.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb8/igt@i915_pm_rc6_residency@rc6-idle.html

  * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3:
    - shard-iclb:         [SKIP][117] ([i915#2920]) -> [SKIP][118] ([i915#658]) +2 similar issues
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-iclb2/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb3/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-3:
    - shard-iclb:         [SKIP][119] ([i915#658]) -> [SKIP][120] ([i915#2920]) +2 similar issues
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-iclb8/igt@kms_psr2_sf@plane-move-sf-dmg-area-3.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-iclb2/igt@kms_psr2_sf@plane-move-sf-dmg-area-3.html

  * igt@runner@aborted:
    - shard-kbl:          ([FAIL][121], [FAIL][122], [FAIL][123], [FAIL][124], [FAIL][125], [FAIL][126], [FAIL][127], [FAIL][128]) ([i915#180] / [i915#2426] / [i915#3002] / [i915#3363] / [i915#4312]) -> ([FAIL][129], [FAIL][130], [FAIL][131], [FAIL][132], [FAIL][133], [FAIL][134], [FAIL][135], [FAIL][136], [FAIL][137], [FAIL][138]) ([i915#1436] / [i915#180] / [i915#1814] / [i915#2426] / [i915#3363] / [i915#4312])
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl4/igt@runner@aborted.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl6/igt@runner@aborted.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl1/igt@runner@aborted.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl1/igt@runner@aborted.html
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl2/igt@runner@aborted.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl1/igt@runner@aborted.html
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl3/igt@runner@aborted.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-kbl1/igt@runner@aborted.html
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl7/igt@runner@aborted.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl6/igt@runner@aborted.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl6/igt@runner@aborted.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl7/igt@runner@aborted.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl4/igt@runner@aborted.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl4/igt@runner@aborted.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl6/igt@runner@aborted.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl6/igt@runner@aborted.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl1/igt@runner@aborted.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-kbl1/igt@runner@aborted.html
    - shard-apl:          ([FAIL][139], [FAIL][140], [FAIL][141]) ([i915#2426] / [i915#3002] / [i915#3363] / [i915#4312]) -> ([FAIL][142], [FAIL][143], [FAIL][144], [FAIL][145], [FAIL][146], [FAIL][147], [FAIL][148]) ([fdo#109271] / [i915#180] / [i915#2426] / [i915#3002] / [i915#3363] / [i915#4312])
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-apl4/igt@runner@aborted.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-apl2/igt@runner@aborted.html
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-apl2/igt@runner@aborted.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl1/igt@runner@aborted.html
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl8/igt@runner@aborted.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl6/igt@runner@aborted.html
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl6/igt@runner@aborted.html
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl3/igt@runner@aborted.html
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl6/igt@runner@aborted.html
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-apl2/igt@runner@aborted.html
    - shard-skl:          ([FAIL][149], [FAIL][150], [FAIL][151], [FAIL][152], [FAIL][153]) ([i915#1436] / [i915#2029] / [i915#2426] / [i915#3002] / [i915#3363] / [i915#4312]) -> ([FAIL][154], [FAIL][155], [FAIL][156], [FAIL][157]) ([i915#1814] / [i915#2029] / [i915#2426] / [i915#3002] / [i915#3363] / [i915#4312])
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl5/igt@runner@aborted.html
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl1/igt@runner@aborted.html
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl10/igt@runner@aborted.html
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl6/igt@runner@aborted.html
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10943/shard-skl1/igt@runner@aborted.html
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl4/igt@runner@aborted.html
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl6/igt@runner@aborted.html
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl6/igt@runner@aborted.html
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/shard-skl6/igt@runner@aborted.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
  [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
  [fdo#109279]: https://bugs.freedesktop.org/show_bug.cgi?id=109279
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109291]: https://bugs.freedesktop.org/show_bug.cgi?id=109291
  [fdo#109308]: https://bugs.freedesktop.org/show_bug.cgi?id=109308
  [fdo#109312]: https://bugs.freedesktop.org/show_bug.cgi?id=109312
  [fdo#109313]: https://bugs.freedesktop.org/show_b

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21700/index.html


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Intel-gfx] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 15:02                 ` [Intel-gfx] " Christian König
@ 2021-11-30 18:12                   ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 18:12 UTC (permalink / raw)
  To: Christian König, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

On Tue, 2021-11-30 at 16:02 +0100, Christian König wrote:
> On 30.11.21 at 15:35, Thomas Hellström wrote:
> > On Tue, 2021-11-30 at 14:26 +0100, Christian König wrote:
> > > On 30.11.21 at 13:56, Thomas Hellström wrote:
> > > > On 11/30/21 13:42, Christian König wrote:
> > > > > On 30.11.21 at 13:31, Thomas Hellström wrote:
> > > > > > [SNIP]
> > > > > > > Other than that, I didn't investigate the nesting fails
> > > > > > > enough to
> > > > > > > say I can accurately review this. :)
> > > > > > Basically the problem is that within enable_signaling(),
> > > > > > which is called with the dma_fence lock held, we take the
> > > > > > dma_fence lock of another fence. If that other fence is a
> > > > > > dma_fence_array, or a dma_fence_chain which in turn tries
> > > > > > to lock a dma_fence_array, we hit a splat.
> > > > > Yeah, I already thought that you constructed something like
> > > > > that.
> > > > >
> > > > > You get the splat because what you do here is illegal: you
> > > > > can't mix dma_fence_array and dma_fence_chain like this, or
> > > > > you can end up with stack corruption.
> > > > Hmm. OK, so what is the stack corruption? Is it that
> > > > enable_signaling() will end up in endless recursion? If so,
> > > > wouldn't it be more usable if we broke that recursion chain
> > > > and allowed more general use?
> > > The problem is that this is not easily possible for
> > > dma_fence_array containers. Just imagine that you drop the last
> > > reference to the contained fences during dma_fence_array
> > > destruction: if any of the contained fences is another
> > > container, you can easily run into recursion, and with that
> > > stack corruption.
> > Indeed, that would require some deeper surgery.
> > 
> > > That's one of the major reasons I came up with the
> > > dma_fence_chain container. With this one you can chain any
> > > number of elements together without running into any recursion.
> > > 
> > > > Also, what are the mixing rules between these? Never use a
> > > > dma-fence-chain as one of the array fences, and never use a
> > > > dma-fence-array as a dma-fence-chain fence?
> > > You can't add any other container to a dma_fence_array: neither
> > > other dma_fence_array instances nor dma_fence_chain instances.
> > >
> > > IIRC at least technically a dma_fence_chain can contain a
> > > dma_fence_array if you absolutely need that, but Daniel, Jason
> > > and I already had the same discussion a while back and came to
> > > the conclusion to avoid that as well if possible.
> > Yes, this is actually the use-case. But what I can't easily
> > guarantee is that the dma_fence_chain isn't fed into a
> > dma_fence_array somewhere else. How do you typically avoid that?
> >
> > Meanwhile I guess I need to take a different approach in the
> > driver to avoid this altogether.
> 
> Jason and I came up with a deep-dive iterator for his use case, but
> I think we don't want to use that any more after my dma_resv rework.
>
> In other words, when you need to create a new dma_fence_array you
> flatten out the existing construct, which is at worst
> dma_fence_chain->dma_fence_array->dma_fence.

OK, is there any cross-driver contract here? For example, does every
driver using a dma_fence_array need to check for dma_fence_chain and
flatten as above?

/Thomas


> 
> Regards,
> Christian.
> 
> > 
> > /Thomas
> > 
> > 
> > > Regards,
> > > Christian.
> > > 
> > > > /Thomas
> > > > 
> > > > 
> > > > 
> > > > 
> > > > > Regards,
> > > > > Christian.
> > > > > 
> > > > > > But I'll update the commit message with a typical splat.
> > > > > > 
> > > > > > /Thomas
> > 
> 




* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
@ 2021-11-30 18:12                   ` Thomas Hellström
  0 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 18:12 UTC (permalink / raw)
  To: Christian König, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

On Tue, 2021-11-30 at 16:02 +0100, Christian König wrote:
> Am 30.11.21 um 15:35 schrieb Thomas Hellström:
> > On Tue, 2021-11-30 at 14:26 +0100, Christian König wrote:
> > > Am 30.11.21 um 13:56 schrieb Thomas Hellström:
> > > > On 11/30/21 13:42, Christian König wrote:
> > > > > Am 30.11.21 um 13:31 schrieb Thomas Hellström:
> > > > > > [SNIP]
> > > > > > > Other than that, I didn't investigate the nesting fails
> > > > > > > enough to
> > > > > > > say I can accurately review this. :)
> > > > > > Basically the problem is that within enable_signaling()
> > > > > > which
> > > > > > is
> > > > > > called with the dma_fence lock held, we take the dma_fence
> > > > > > lock
> > > > > > of
> > > > > > another fence. If that other fence is a dma_fence_array, or
> > > > > > a
> > > > > > dma_fence_chain which in turn tries to lock a
> > > > > > dma_fence_array
> > > > > > we hit
> > > > > > a splat.
> > > > > Yeah, I already thought that you constructed something like
> > > > > that.
> > > > > 
> > > > > You get the splat because what you do here is illegal, you
> > > > > can't
> > > > > mix
> > > > > dma_fence_array and dma_fence_chain like this or you can end
> > > > > up
> > > > > in a
> > > > > stack corruption.
> > > > Hmm. Ok, so what is the stack corruption, is it that the
> > > > enable_signaling() will end up with endless recursion? If so,
> > > > wouldn't
> > > > it be more usable we break that recursion chain and allow a
> > > > more
> > > > general use?
> > > The problem is that this is not easily possible for
> > > dma_fence_array
> > > containers. Just imagine that you drop the last reference to the
> > > containing fences during dma_fence_array destruction if any of
> > > the
> > > contained fences is another container you can easily run into
> > > recursion
> > > and with that stack corruption.
> > Indeed, that would require some deeper surgery.
> > 
> > > That's one of the major reasons I came up with the
> > > dma_fence_chain
> > > container. This one you can chain any number of elements together
> > > without running into any recursion.
> > > 
> > > > Also what are the mixing rules between these? Never use a
> > > > dma-fence-chain as one of the array fences and never use a
> > > > dma-fence-array as a dma-fence-chain fence?
> > > You can't add any other container to a dma_fence_array, neither
> > > other
> > > dma_fence_array instances nor dma_fence_chain instances.
> > > 
> > > IIRC at least technically a dma_fence_chain can contain a
> > > dma_fence_array if you absolutely need that, but Daniel, Jason
> > > and I
> > > already had the same discussion a while back and came to the
> > > conclusion
> > > to avoid that as well if possible.
> > Yes, this is actually the use-case. But what I can't easily
> > guarantee
> > is that that dma_fence_chain isn't fed into a dma_fence_array
> > somewhere
> > else. How do you typically avoid that?
> > 
> > Meanwhile I guess I need to take a different approach in the driver
> > to
> > avoid this altogether.
> 
> Jason and I came up with a deep dive iterator for his use case, but I
> think we don't want to use that any more after my dma_resv rework.
> 
> In other words when you need to create a new dma_fence_array you
> flatten 
> out the existing construct which is at worst case 
> dma_fence_chain->dma_fence_array->dma_fence.

Ok, is there any cross-driver contract here, like every driver using a
dma_fence_array needs to check for dma_fence_chain and flatten like
above?

/Thomas


> 
> Regards,
> Christian.
> 
> > 
> > /Thomas
> > 
> > 
> > > Regards,
> > > Christian.
> > > 
> > > > /Thomas
> > > > 
> > > > 
> > > > 
> > > > 
> > > > > Regards,
> > > > > Christian.
> > > > > 
> > > > > > But I'll update the commit message with a typical splat.
> > > > > > 
> > > > > > /Thomas
> > 
> 



^ permalink raw reply	[flat|nested] 65+ messages in thread
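[Editorial note: the flattening Christian describes above, with the worst case being dma_fence_chain->dma_fence_array->dma_fence, can be sketched in plain C. The types and names below (`struct fence`, `LEAF`, `ARRAY`, `CHAIN`, `flatten()`) are stand-ins invented for this sketch, not the real dma-fence API; the point is only that a bounded nesting depth lets one flat collection pass replace unbounded recursion.]

```c
/*
 * Illustrative model only: these are invented stand-in types, not the
 * kernel's dma-fence structures. It shows the flattening idea: because
 * the worst-case construct is chain -> array -> leaf, collecting the
 * leaves needs only a bounded walk, never unbounded recursion.
 */
#include <stddef.h>

enum kind { LEAF, ARRAY, CHAIN };

struct fence {
	enum kind kind;
	struct fence **fences;   /* ARRAY: contained fences (leaves only) */
	size_t num_fences;
	struct fence *contained; /* CHAIN link: the fence at this link */
	struct fence *prev;      /* CHAIN link: previous link, or NULL */
};

/* Append all leaf fences reachable from f to out[], return new count. */
static size_t flatten(struct fence *f, struct fence **out, size_t n)
{
	switch (f->kind) {
	case LEAF:
		out[n++] = f;
		break;
	case ARRAY:
		/* Arrays may only contain plain fences per the contract
		 * discussed above, so no further descent is needed. */
		for (size_t i = 0; i < f->num_fences; i++)
			out[n++] = f->fences[i];
		break;
	case CHAIN:
		/* Chain links are walked iteratively; each contained
		 * fence is at most an array of leaves. */
		for (struct fence *l = f; l; l = l->prev)
			if (l->contained)
				n = flatten(l->contained, out, n);
		break;
	}
	return n;
}
```

A new flat array built this way can then back a fresh container with no nested containers inside it.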

* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 18:12                   ` Thomas Hellström
@ 2021-11-30 19:27                     ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-11-30 19:27 UTC (permalink / raw)
  To: Christian König, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld


On 11/30/21 19:12, Thomas Hellström wrote:
> On Tue, 2021-11-30 at 16:02 +0100, Christian König wrote:
>> Am 30.11.21 um 15:35 schrieb Thomas Hellström:
>>> On Tue, 2021-11-30 at 14:26 +0100, Christian König wrote:
>>>> Am 30.11.21 um 13:56 schrieb Thomas Hellström:
>>>>> On 11/30/21 13:42, Christian König wrote:
>>>>>> Am 30.11.21 um 13:31 schrieb Thomas Hellström:
>>>>>>> [SNIP]
>>>>>>>> Other than that, I didn't investigate the nesting fails
>>>>>>>> enough to
>>>>>>>> say I can accurately review this. :)
>>>>>>> Basically the problem is that within enable_signaling()
>>>>>>> which
>>>>>>> is
>>>>>>> called with the dma_fence lock held, we take the dma_fence
>>>>>>> lock
>>>>>>> of
>>>>>>> another fence. If that other fence is a dma_fence_array, or
>>>>>>> a
>>>>>>> dma_fence_chain which in turn tries to lock a
>>>>>>> dma_fence_array
>>>>>>> we hit
>>>>>>> a splat.
>>>>>> Yeah, I already thought that you constructed something like
>>>>>> that.
>>>>>>
>>>>>> You get the splat because what you do here is illegal, you
>>>>>> can't
>>>>>> mix
>>>>>> dma_fence_array and dma_fence_chain like this or you can end
>>>>>> up
>>>>>> in a
>>>>>> stack corruption.
>>>>> Hmm. Ok, so what is the stack corruption, is it that the
>>>>> enable_signaling() will end up with endless recursion? If so,
>>>>> wouldn't
>>>>> it be more usable we break that recursion chain and allow a
>>>>> more
>>>>> general use?
>>>> The problem is that this is not easily possible for
>>>> dma_fence_array
>>>> containers. Just imagine that you drop the last reference to the
>>>> containing fences during dma_fence_array destruction if any of
>>>> the
>>>> contained fences is another container you can easily run into
>>>> recursion
>>>> and with that stack corruption.
>>> Indeed, that would require some deeper surgery.
>>>
>>>> That's one of the major reasons I came up with the
>>>> dma_fence_chain
>>>> container. This one you can chain any number of elements together
>>>> without running into any recursion.
>>>>
>>>>> Also what are the mixing rules between these? Never use a
>>>>> dma-fence-chain as one of the array fences and never use a
>>>>> dma-fence-array as a dma-fence-chain fence?
>>>> You can't add any other container to a dma_fence_array, neither
>>>> other
>>>> dma_fence_array instances nor dma_fence_chain instances.
>>>>
>>>> IIRC at least technically a dma_fence_chain can contain a
>>>> dma_fence_array if you absolutely need that, but Daniel, Jason
>>>> and I
>>>> already had the same discussion a while back and came to the
>>>> conclusion
>>>> to avoid that as well if possible.
>>> Yes, this is actually the use-case. But what I can't easily
>>> guarantee
>>> is that that dma_fence_chain isn't fed into a dma_fence_array
>>> somewhere
>>> else. How do you typically avoid that?
>>>
>>> Meanwhile I guess I need to take a different approach in the driver
>>> to
>>> avoid this altogether.
>> Jason and I came up with a deep dive iterator for his use case, but I
>> think we don't want to use that any more after my dma_resv rework.
>>
>> In other words when you need to create a new dma_fence_array you
>> flatten
>> out the existing construct which is at worst case
>> dma_fence_chain->dma_fence_array->dma_fence.
> Ok, Are there any cross-driver contract here, Like every driver using a
> dma_fence_array need to check for dma_fence_chain and flatten like
> above?
>
> /Thomas

Oh, and a follow-up question:

If there were a way to break the recursion on final put() (using the 
same basic approach that patch 2 in this series uses to break recursion 
in enable_signaling()), so that none of these containers required any 
special treatment, would it be worth pursuing? I guess it might be 
possible by having the callbacks drop the references rather than the 
loop in the final put, plus a couple of changes in the code iterating 
over the fence pointers.

/Thomas

>
>> Regards,
>> Christian.
>>
>>> /Thomas
>>>
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> /Thomas
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Regards,
>>>>>> Christian.
>>>>>>
>>>>>>> But I'll update the commit message with a typical splat.
>>>>>>>
>>>>>>> /Thomas

^ permalink raw reply	[flat|nested] 65+ messages in thread
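[Editorial note: one possible shape of the idea in the follow-up question above, releasing references from one flat drain loop instead of recursing on final put(), is modeled below. This is a self-contained toy with invented types (`struct node`, `put()`), not kernel code; it only illustrates that the drain-loop pattern keeps stack depth constant for arbitrarily deep nesting.]

```c
/*
 * Toy model, not kernel code: a refcounted container whose release
 * does not recurse into its children. A child whose refcount drops to
 * zero is pushed onto a pending list that the one active loop drains,
 * so stack depth stays constant however deep the nesting goes.
 */
#include <stdlib.h>

struct node {
	int refcount;
	struct node **children;
	size_t n_children;
	struct node *next_free; /* links the deferred-free list */
};

static size_t freed; /* number of nodes released, for inspection */

static void put(struct node *n)
{
	struct node *pending;

	if (--n->refcount > 0)
		return;

	n->next_free = NULL;
	pending = n;

	/* Drain iteratively instead of recursing into children. */
	while (pending) {
		struct node *cur = pending;

		pending = cur->next_free;
		for (size_t i = 0; i < cur->n_children; i++) {
			struct node *c = cur->children[i];

			if (--c->refcount == 0) {
				c->next_free = pending;
				pending = c;
			}
		}
		free(cur->children);
		free(cur);
		freed++;
	}
}
```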


* Re: [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-11-30 19:27                     ` [Intel-gfx] " Thomas Hellström
@ 2021-12-01  7:05                       ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-12-01  7:05 UTC (permalink / raw)
  To: Thomas Hellström, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

Am 30.11.21 um 20:27 schrieb Thomas Hellström:
>
> On 11/30/21 19:12, Thomas Hellström wrote:
>> On Tue, 2021-11-30 at 16:02 +0100, Christian König wrote:
>>> Am 30.11.21 um 15:35 schrieb Thomas Hellström:
>>>> On Tue, 2021-11-30 at 14:26 +0100, Christian König wrote:
>>>>> Am 30.11.21 um 13:56 schrieb Thomas Hellström:
>>>>>> On 11/30/21 13:42, Christian König wrote:
>>>>>>> Am 30.11.21 um 13:31 schrieb Thomas Hellström:
>>>>>>>> [SNIP]
>>>>>>>>> Other than that, I didn't investigate the nesting fails
>>>>>>>>> enough to
>>>>>>>>> say I can accurately review this. :)
>>>>>>>> Basically the problem is that within enable_signaling()
>>>>>>>> which
>>>>>>>> is
>>>>>>>> called with the dma_fence lock held, we take the dma_fence
>>>>>>>> lock
>>>>>>>> of
>>>>>>>> another fence. If that other fence is a dma_fence_array, or
>>>>>>>> a
>>>>>>>> dma_fence_chain which in turn tries to lock a
>>>>>>>> dma_fence_array
>>>>>>>> we hit
>>>>>>>> a splat.
>>>>>>> Yeah, I already thought that you constructed something like
>>>>>>> that.
>>>>>>>
>>>>>>> You get the splat because what you do here is illegal, you
>>>>>>> can't
>>>>>>> mix
>>>>>>> dma_fence_array and dma_fence_chain like this or you can end
>>>>>>> up
>>>>>>> in a
>>>>>>> stack corruption.
>>>>>> Hmm. Ok, so what is the stack corruption, is it that the
>>>>>> enable_signaling() will end up with endless recursion? If so,
>>>>>> wouldn't
>>>>>> it be more usable we break that recursion chain and allow a
>>>>>> more
>>>>>> general use?
>>>>> The problem is that this is not easily possible for
>>>>> dma_fence_array
>>>>> containers. Just imagine that you drop the last reference to the
>>>>> containing fences during dma_fence_array destruction if any of
>>>>> the
>>>>> contained fences is another container you can easily run into
>>>>> recursion
>>>>> and with that stack corruption.
>>>> Indeed, that would require some deeper surgery.
>>>>
>>>>> That's one of the major reasons I came up with the
>>>>> dma_fence_chain
>>>>> container. This one you can chain any number of elements together
>>>>> without running into any recursion.
>>>>>
>>>>>> Also what are the mixing rules between these? Never use a
>>>>>> dma-fence-chain as one of the array fences and never use a
>>>>>> dma-fence-array as a dma-fence-chain fence?
>>>>> You can't add any other container to a dma_fence_array, neither
>>>>> other
>>>>> dma_fence_array instances nor dma_fence_chain instances.
>>>>>
>>>>> IIRC at least technically a dma_fence_chain can contain a
>>>>> dma_fence_array if you absolutely need that, but Daniel, Jason
>>>>> and I
>>>>> already had the same discussion a while back and came to the
>>>>> conclusion
>>>>> to avoid that as well if possible.
>>>> Yes, this is actually the use-case. But what I can't easily
>>>> guarantee
>>>> is that that dma_fence_chain isn't fed into a dma_fence_array
>>>> somewhere
>>>> else. How do you typically avoid that?
>>>>
>>>> Meanwhile I guess I need to take a different approach in the driver
>>>> to
>>>> avoid this altogether.
>>> Jason and I came up with a deep dive iterator for his use case, but I
>>> think we don't want to use that any more after my dma_resv rework.
>>>
>>> In other words when you need to create a new dma_fence_array you
>>> flatten
>>> out the existing construct which is at worst case
>>> dma_fence_chain->dma_fence_array->dma_fence.
>> Ok, Are there any cross-driver contract here, Like every driver using a
>> dma_fence_array need to check for dma_fence_chain and flatten like
>> above?

So far we have only discussed that on the mailing list but haven't 
written any documentation for it.

>>
>> /Thomas
>
> Oh, and a follow up question:
>
> If there was a way to break the recursion on final put() (using the 
> same basic approach as patch 2 in this series uses to break recursion 
> in enable_signaling()), so that none of these containers did require 
> any special treatment, would it be worth pursuing? I guess it might be 
> possible by having the callbacks drop the references rather than the 
> loop in the final put. + a couple of changes in code iterating over 
> the fence pointers.

That won't really help, you would just move the recursion from the 
final put into the callback.

What could be possible is to use a work item for any possible 
operation, e.g. enabling, signaling and destruction. But in the last 
discussion everybody agreed that it is better to just flatten out the array.

Christian.

>
>
> /Thomas
>
>>
>>> Regards,
>>> Christian.
>>>
>>>> /Thomas
>>>>
>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>>>> /Thomas
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Regards,
>>>>>>> Christian.
>>>>>>>
>>>>>>>> But I'll update the commit message with a typical splat.
>>>>>>>>
>>>>>>>> /Thomas


^ permalink raw reply	[flat|nested] 65+ messages in thread
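[Editorial note: the work-item idea mentioned above can be illustrated with a toy dispatcher. The names below (`queue_work()`, `struct work`) are invented for this sketch and are not the kernel's workqueue or irq_work API; the point is that queued operations run from one flat loop, so a callback that triggers another operation extends the queue instead of the call stack.]

```c
/*
 * Toy dispatcher, not kernel code: operations that could otherwise
 * recurse (enabling, signaling, destruction) are queued, and only the
 * outermost caller drains the queue. Single-threaded model.
 */
#include <stddef.h>

#define QUEUE_LEN 64

struct work {
	void (*fn)(void *);
	void *arg;
};

static struct work queue[QUEUE_LEN];
static size_t head, tail; /* monotonic; index with % QUEUE_LEN */
static int draining;

/* Queue an operation; unless a drain loop is already running further
 * up the stack, drain the queue from here. */
static void queue_work(void (*fn)(void *), void *arg)
{
	queue[tail % QUEUE_LEN] = (struct work){ fn, arg };
	tail++;

	if (draining)
		return; /* the active drain loop will pick this up */

	draining = 1;
	while (head != tail) {
		struct work w = queue[head % QUEUE_LEN];

		head++;
		w.fn(w.arg);
	}
	draining = 0;
}
```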


* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-01  7:05                       ` [Intel-gfx] " Christian König
@ 2021-12-01  8:23                         ` Thomas Hellström (Intel)
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström (Intel) @ 2021-12-01  8:23 UTC (permalink / raw)
  To: Christian König, Thomas Hellström, Maarten Lankhorst,
	intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld


On 12/1/21 08:05, Christian König wrote:
> Am 30.11.21 um 20:27 schrieb Thomas Hellström:
>>
>> On 11/30/21 19:12, Thomas Hellström wrote:
>>> On Tue, 2021-11-30 at 16:02 +0100, Christian König wrote:
>>>> Am 30.11.21 um 15:35 schrieb Thomas Hellström:
>>>>> On Tue, 2021-11-30 at 14:26 +0100, Christian König wrote:
>>>>>> Am 30.11.21 um 13:56 schrieb Thomas Hellström:
>>>>>>> On 11/30/21 13:42, Christian König wrote:
>>>>>>>> Am 30.11.21 um 13:31 schrieb Thomas Hellström:
>>>>>>>>> [SNIP]
>>>>>>>>>> Other than that, I didn't investigate the nesting fails
>>>>>>>>>> enough to
>>>>>>>>>> say I can accurately review this. :)
>>>>>>>>> Basically the problem is that within enable_signaling()
>>>>>>>>> which
>>>>>>>>> is
>>>>>>>>> called with the dma_fence lock held, we take the dma_fence
>>>>>>>>> lock
>>>>>>>>> of
>>>>>>>>> another fence. If that other fence is a dma_fence_array, or
>>>>>>>>> a
>>>>>>>>> dma_fence_chain which in turn tries to lock a
>>>>>>>>> dma_fence_array
>>>>>>>>> we hit
>>>>>>>>> a splat.
>>>>>>>> Yeah, I already thought that you constructed something like
>>>>>>>> that.
>>>>>>>>
>>>>>>>> You get the splat because what you do here is illegal, you
>>>>>>>> can't
>>>>>>>> mix
>>>>>>>> dma_fence_array and dma_fence_chain like this or you can end
>>>>>>>> up
>>>>>>>> in a
>>>>>>>> stack corruption.
>>>>>>> Hmm. Ok, so what is the stack corruption, is it that the
>>>>>>> enable_signaling() will end up with endless recursion? If so,
>>>>>>> wouldn't
>>>>>>> it be more usable we break that recursion chain and allow a
>>>>>>> more
>>>>>>> general use?
>>>>>> The problem is that this is not easily possible for
>>>>>> dma_fence_array
>>>>>> containers. Just imagine that you drop the last reference to the
>>>>>> containing fences during dma_fence_array destruction if any of
>>>>>> the
>>>>>> contained fences is another container you can easily run into
>>>>>> recursion
>>>>>> and with that stack corruption.
>>>>> Indeed, that would require some deeper surgery.
>>>>>
>>>>>> That's one of the major reasons I came up with the
>>>>>> dma_fence_chain
>>>>>> container. This one you can chain any number of elements together
>>>>>> without running into any recursion.
>>>>>>
>>>>>>> Also what are the mixing rules between these? Never use a
>>>>>>> dma-fence-chain as one of the array fences and never use a
>>>>>>> dma-fence-array as a dma-fence-chain fence?
>>>>>> You can't add any other container to a dma_fence_array, neither
>>>>>> other
>>>>>> dma_fence_array instances nor dma_fence_chain instances.
>>>>>>
>>>>>> IIRC at least technically a dma_fence_chain can contain a
>>>>>> dma_fence_array if you absolutely need that, but Daniel, Jason
>>>>>> and I
>>>>>> already had the same discussion a while back and came to the
>>>>>> conclusion
>>>>>> to avoid that as well if possible.
>>>>> Yes, this is actually the use-case. But what I can't easily
>>>>> guarantee
>>>>> is that that dma_fence_chain isn't fed into a dma_fence_array
>>>>> somewhere
>>>>> else. How do you typically avoid that?
>>>>>
>>>>> Meanwhile I guess I need to take a different approach in the driver
>>>>> to
>>>>> avoid this altogether.
>>>> Jason and I came up with a deep dive iterator for his use case, but I
>>>> think we don't want to use that any more after my dma_resv rework.
>>>>
>>>> In other words when you need to create a new dma_fence_array you
>>>> flatten
>>>> out the existing construct which is at worst case
>>>> dma_fence_chain->dma_fence_array->dma_fence.
>>> Ok, Are there any cross-driver contract here, Like every driver using a
>>> dma_fence_array need to check for dma_fence_chain and flatten like
>>> above?
>
> So far we only discussed that on the mailing list but haven't made any 
> documentation for that.

OK, one other cross-driver pitfall I see is if someone accidentally 
joins two fence chains together by unknowingly creating a fence chain 
using another fence chain as the @fence argument?

The third cross-driver pitfall IMHO is the locking dependency these 
containers add. Other drivers (read: at least i915) may have defined 
slightly different locking orders, and that should also be addressed if 
needed, but that requires a cross-driver agreement on what the locking 
orders really are. Patch 1 actually addresses this, while keeping the 
container lockdep warnings for deep recursions, so at least I think that 
could serve as a discussion starter.

>
>
>>>
>>> /Thomas
>>
>> Oh, and a follow up question:
>>
>> If there was a way to break the recursion on final put() (using the 
>> same basic approach as patch 2 in this series uses to break recursion 
>> in enable_signaling()), so that none of these containers did require 
>> any special treatment, would it be worth pursuing? I guess it might 
>> be possible by having the callbacks drop the references rather than 
>> the loop in the final put. + a couple of changes in code iterating 
>> over the fence pointers.
>
> That won't really help, you just move the recursion from the final put 
> into the callback.

How would we recurse from the callback? The introduced fence_put() of 
the individual fence pointers doesn't recurse anymore (at most one 
level), and any callback recursion is broken by the irq_work?

I figure the bulk of the work would be to adjust the code that iterates 
over the individual fence pointers to recognize that they are 
RCU-protected.

Thanks,

/Thomas



^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-01  8:23                         ` [Intel-gfx] " Thomas Hellström (Intel)
@ 2021-12-01  8:36                           ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-12-01  8:36 UTC (permalink / raw)
  To: Thomas Hellström (Intel),
	Thomas Hellström, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

Am 01.12.21 um 09:23 schrieb Thomas Hellström (Intel):
>  [SNIP]
>>>>> Jason and I came up with a deep dive iterator for his use case, but I
>>>>> think we don't want to use that any more after my dma_resv rework.
>>>>>
>>>>> In other words when you need to create a new dma_fence_array you
>>>>> flatten
>>>>> out the existing construct which is at worst case
>>>>> dma_fence_chain->dma_fence_array->dma_fence.
>>>> Ok, Are there any cross-driver contract here, Like every driver 
>>>> using a
>>>> dma_fence_array need to check for dma_fence_chain and flatten like
>>>> above?
>>
>> So far we only discussed that on the mailing list but haven't made 
>> any documentation for that.
>
> OK, one other cross-driver pitfall I see is if someone accidentally 
> joins two fence chains together by creating a fence chain unknowingly 
> using another fence chain as the @fence argument?

That would indeed be illegal and we should probably add a WARN_ON() for 
that.
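
A sketch of what such a check could look like, modeled as self-contained 
userspace C rather than the real dma_fence_chain API (all names here are 
illustrative): the chain-link constructor simply refuses a @fence that is 
itself a chain link, which is what joining two timelines would require.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: a chain link must not take another chain as its contained
 * @fence, or two independent timelines would be fused into one. */
struct fence {
	bool is_chain;
};

struct chain_link {
	struct fence base;	/* a chain link is itself a fence */
	struct fence *prev;	/* previous link of this timeline */
	struct fence *fence;	/* contained fence; must not be a chain */
};

static bool chain_init(struct chain_link *link, struct fence *prev,
		       struct fence *fence)
{
	if (fence && fence->is_chain)
		return false;	/* the kernel version would WARN_ON() here */
	link->base.is_chain = true;
	link->prev = prev;
	link->fence = fence;
	return true;
}
```

In the real code the test would presumably be a 
`WARN_ON(to_dma_fence_chain(fence))` style check in the init path.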

>
> The third cross-driver pitfall IMHO is the locking dependency these 
> containers add. Other drivers (read at least i915) may have defined 
> slightly different locking orders and that should also be addressed if 
> needed, but that requires a cross driver agreement what the locking 
> orders really are. Patch 1 actually addresses this, while keeping the 
> container lockdep warnings for deep recursions, so at least I think 
> that could serve as a discussion starter.

No, drivers should never make any assumptions on that.

E.g. when you need to take a lock from a callback you must guarantee 
that you never have that lock taken when you call any of the dma_fence 
functions. Your patch breaks the lockdep annotation for that.

What we could do is to avoid all this by not calling the callback with 
the lock held in the first place.

>>
>>>>
>>>> /Thomas
>>>
>>> Oh, and a follow up question:
>>>
>>> If there was a way to break the recursion on final put() (using the 
>>> same basic approach as patch 2 in this series uses to break 
>>> recursion in enable_signaling()), so that none of these containers 
>>> did require any special treatment, would it be worth pursuing? I 
>>> guess it might be possible by having the callbacks drop the 
>>> references rather than the loop in the final put. + a couple of 
>>> changes in code iterating over the fence pointers.
>>
>> That won't really help, you just move the recursion from the final 
>> put into the callback.
>
> How do we recurse from the callback? The introduced fence_put() of 
> individual fence pointers
> doesn't recurse anymore (at most 1 level), and any callback recursion 
> is broken by the irq_work?

Yeah, but then you would need to take another lock to avoid racing with 
dma_fence_array_signaled().

>
> I figure the big amount of work would be to adjust code that iterates 
> over the individual fence pointers to recognize that they are rcu 
> protected.

Could be that we could solve this with RCU, but that sounds like a lot 
of churn for no gain at all.

In other words even with the problems solved I think it would be a 
really bad idea to allow chaining of dma_fence_array objects.

Christian.

>
>
> Thanks,
>
> /Thomas
>
>


^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-01  8:36                           ` [Intel-gfx] " Christian König
@ 2021-12-01 10:15                             ` Thomas Hellström (Intel)
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström (Intel) @ 2021-12-01 10:15 UTC (permalink / raw)
  To: Christian König, Thomas Hellström, Maarten Lankhorst,
	intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld


On 12/1/21 09:36, Christian König wrote:
> Am 01.12.21 um 09:23 schrieb Thomas Hellström (Intel):
>>  [SNIP]
>>>>>> Jason and I came up with a deep dive iterator for his use case, 
>>>>>> but I
>>>>>> think we don't want to use that any more after my dma_resv rework.
>>>>>>
>>>>>> In other words when you need to create a new dma_fence_array you
>>>>>> flatten
>>>>>> out the existing construct which is at worst case
>>>>>> dma_fence_chain->dma_fence_array->dma_fence.
>>>>> Ok, Are there any cross-driver contract here, Like every driver 
>>>>> using a
>>>>> dma_fence_array need to check for dma_fence_chain and flatten like
>>>>> above?
>>>
>>> So far we only discussed that on the mailing list but haven't made 
>>> any documentation for that.
>>
>> OK, one other cross-driver pitfall I see is if someone accidentally 
>> joins two fence chains together by creating a fence chain unknowingly 
>> using another fence chain as the @fence argument?
>
> That would indeed be illegal and we should probably add a WARN_ON() 
> for that.
>
>>
>> The third cross-driver pitfall IMHO is the locking dependency these 
>> containers add. Other drivers (read at least i915) may have defined 
>> slightly different locking orders and that should also be addressed 
>> if needed, but that requires a cross driver agreement what the 
>> locking orders really are. Patch 1 actually addresses this, while 
>> keeping the container lockdep warnings for deep recursions, so at 
>> least I think that could serve as a discussion starter.
>
> No, drivers should never make any assumptions on that.

Yes, that i915 assumption of taking the lock of the last signaled fence 
first goes back a while. We should look at fixing that up, and 
document any (possibly forbidden) assumptions about fence locking 
orders to avoid it happening again, if there is no common cross-driver 
locking order that can be agreed on.

>
> E.g. when you need to take a lock from a callback you must guarantee 
> that you never have that lock taken when you call any of the dma_fence 
> functions. Your patch breaks the lockdep annotation for that.

I'm pretty sure that could be fixed in a satisfactory way if needed.

>
> What we could do is to avoid all this by not calling the callback with 
> the lock held in the first place.

If that's possible that might be a good idea, pls also see below.

>
>>>
>>>>>
>>>>> /Thomas
>>>>
>>>> Oh, and a follow up question:
>>>>
>>>> If there was a way to break the recursion on final put() (using the 
>>>> same basic approach as patch 2 in this series uses to break 
>>>> recursion in enable_signaling()), so that none of these containers 
>>>> did require any special treatment, would it be worth pursuing? I 
>>>> guess it might be possible by having the callbacks drop the 
>>>> references rather than the loop in the final put. + a couple of 
>>>> changes in code iterating over the fence pointers.
>>>
>>> That won't really help, you just move the recursion from the final 
>>> put into the callback.
>>
>> How do we recurse from the callback? The introduced fence_put() of 
>> individual fence pointers
>> doesn't recurse anymore (at most 1 level), and any callback recursion 
>> is broken by the irq_work?
>
> Yeah, but then you would need to take another lock to avoid racing 
> with dma_fence_array_signaled().
>
>>
>> I figure the big amount of work would be to adjust code that iterates 
>> over the individual fence pointers to recognize that they are rcu 
>> protected.
>
> Could be that we could solve this with RCU, but that sounds like a lot 
> of churn for no gain at all.
>
> In other words even with the problems solved I think it would be a 
> really bad idea to allow chaining of dma_fence_array objects.

Yes, that was really the question: is it worth pursuing this? I'm not 
really suggesting we should allow this as an intentional feature. I'm 
worried, however, that if we allow these containers to start floating 
around cross-driver (or even internally) disguised as ordinary 
dma_fences, they would require a lot of driver special-casing, or else 
completely unexpected WARN_ON()s and lockdep splats would start to turn 
up, scaring people off from using them. And that would be a breeding 
ground for hairy driver-private constructs.

/Thomas


>
> Christian.
>
>>
>>
>> Thanks,
>>
>> /Thomas
>>
>>

^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-01 10:15                             ` [Intel-gfx] " Thomas Hellström (Intel)
@ 2021-12-01 10:32                               ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-12-01 10:32 UTC (permalink / raw)
  To: Thomas Hellström (Intel),
	Thomas Hellström, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

Am 01.12.21 um 11:15 schrieb Thomas Hellström (Intel):
> [SNIP]
>>
>> What we could do is to avoid all this by not calling the callback 
>> with the lock held in the first place.
>
> If that's possible that might be a good idea, pls also see below.

The problem with that is 
dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we 
could avoid using that or at least allow it to drop the lock then we 
could call the callback without holding it.

Somebody would need to audit the drivers and see if holding the lock is 
really necessary anywhere.

>>
>>>>
>>>>>>
>>>>>> /Thomas
>>>>>
>>>>> Oh, and a follow up question:
>>>>>
>>>>> If there was a way to break the recursion on final put() (using 
>>>>> the same basic approach as patch 2 in this series uses to break 
>>>>> recursion in enable_signaling()), so that none of these containers 
>>>>> did require any special treatment, would it be worth pursuing? I 
>>>>> guess it might be possible by having the callbacks drop the 
>>>>> references rather than the loop in the final put. + a couple of 
>>>>> changes in code iterating over the fence pointers.
>>>>
>>>> That won't really help, you just move the recursion from the final 
>>>> put into the callback.
>>>
>>> How do we recurse from the callback? The introduced fence_put() of 
>>> individual fence pointers
>>> doesn't recurse anymore (at most 1 level), and any callback 
>>> recursion is broken by the irq_work?
>>
>> Yeah, but then you would need to take another lock to avoid racing 
>> with dma_fence_array_signaled().
>>
>>>
>>> I figure the big amount of work would be to adjust code that 
>>> iterates over the individual fence pointers to recognize that they 
>>> are rcu protected.
>>
>> Could be that we could solve this with RCU, but that sounds like a 
>> lot of churn for no gain at all.
>>
>> In other words even with the problems solved I think it would be a 
>> really bad idea to allow chaining of dma_fence_array objects.
>
> Yes, that was really the question, Is it worth pursuing this? I'm not 
> really suggesting we should allow this as an intentional feature. I'm 
> worried, however, that if we allow these containers to start floating 
> around cross-driver (or even internally) disguised as ordinary 
> dma_fences, they would require a lot of driver special casing, or else 
> completely unexpected WARN_ON()s and lockdep splats would start to turn 
> up, scaring people off from using them. And that would be a breeding 
> ground for hairy driver-private constructs.

Well the question is why we would want to do it?

If it's to avoid inter-driver lock dependencies by not calling the 
callback with the spinlock held, then yes please. We had tons of 
problems with that, resulting in irq_work and work_item delegation all 
over the place.

If it's to allow nesting of dma_fence_array instances, then it's most 
likely a really bad idea even if we fix all the locking order problems.

Christian.

>
> /Thomas
>
>
>>
>> Christian.
>>
>>>
>>>
>>> Thanks,
>>>
>>> /Thomas
>>>
>>>


^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-01 10:32                               ` [Intel-gfx] " Christian König
@ 2021-12-01 11:04                                 ` Thomas Hellström (Intel)
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström (Intel) @ 2021-12-01 11:04 UTC (permalink / raw)
  To: Christian König, Thomas Hellström, Maarten Lankhorst,
	intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld


On 12/1/21 11:32, Christian König wrote:
> Am 01.12.21 um 11:15 schrieb Thomas Hellström (Intel):
>> [SNIP]
>>>
>>> What we could do is to avoid all this by not calling the callback 
>>> with the lock held in the first place.
>>
>> If that's possible that might be a good idea, pls also see below.
>
> The problem with that is 
> dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we 
> could avoid using that or at least allow it to drop the lock then we 
> could call the callback without holding it.
>
> Somebody would need to audit the drivers and see if holding the lock 
> is really necessary anywhere.
>
>>>
>>>>>
>>>>>>>
>>>>>>> /Thomas
>>>>>>
>>>>>> Oh, and a follow up question:
>>>>>>
>>>>>> If there was a way to break the recursion on final put() (using 
>>>>>> the same basic approach as patch 2 in this series uses to break 
>>>>>> recursion in enable_signaling()), so that none of these 
>>>>>> containers did require any special treatment, would it be worth 
>>>>>> pursuing? I guess it might be possible by having the callbacks 
>>>>>> drop the references rather than the loop in the final put. + a 
>>>>>> couple of changes in code iterating over the fence pointers.
>>>>>
>>>>> That won't really help, you just move the recursion from the final 
>>>>> put into the callback.
>>>>
>>>> How do we recurse from the callback? The introduced fence_put() of 
>>>> individual fence pointers
>>>> doesn't recurse anymore (at most 1 level), and any callback 
>>>> recursion is broken by the irq_work?
>>>
>>> Yeah, but then you would need to take another lock to avoid racing 
>>> with dma_fence_array_signaled().
>>>
>>>>
>>>> I figure the big amount of work would be to adjust code that 
>>>> iterates over the individual fence pointers to recognize that they 
>>>> are rcu protected.
>>>
>>> Could be that we could solve this with RCU, but that sounds like a 
>>> lot of churn for no gain at all.
>>>
>>> In other words even with the problems solved I think it would be a 
>>> really bad idea to allow chaining of dma_fence_array objects.
>>
>> Yes, that was really the question, Is it worth pursuing this? I'm not 
>> really suggesting we should allow this as an intentional feature. I'm 
>> worried, however, that if we allow these containers to start floating 
>> around cross-driver (or even internally) disguised as ordinary 
>> dma_fences, they would require a lot of driver special casing, or 
>> else completely unexpected WARN_ON()s and lockdep splats would start 
>> to turn up, scaring people off from using them. And that would be a 
>> breeding ground for hairy driver-private constructs.
>
> Well the question is why we would want to do it?
>
> If it's to avoid inter driver lock dependencies by avoiding to call 
> the callback with the spinlock held, then yes please. We had tons of 
> problems with that, resulting in irq_work and work_item delegation all 
> over the place.

Yes, that sounds like something desirable, but in these containers, 
what's causing the lock dependencies is the enable_signaling() callback 
that is typically called locked.


>
> If it's to allow nesting of dma_fence_array instances, then it's most 
> likely a really bad idea even if we fix all the locking order problems.

Well I think my use-case where I hit a dead end may illustrate what 
worries me here:

1) We use a dma-fence-array to coalesce all dependencies for TTM object 
migration.
2) We use a dma-fence-chain to order the resulting dma_fence into a 
timeline because the TTM resource manager code requires that.

Initially seemingly harmless to me.

But after a sequence evict->alloc->clear, the dma-fence-chain feeds into 
the dma-fence-array for the clearing operation. Code still works fine, 
and no deep recursion, no warnings. But if I were to add another driver 
to the system that instead feeds a dma-fence-array into a 
dma-fence-chain, this would give me a lockdep splat.

So then if somebody were to come up with the splendid idea of using a 
dma-fence-chain to initially coalesce fences, I'd hit the same problem 
or risk illegally joining two dma-fence-chains together.

To fix this, I would need to look at the incoming fences and iterate 
over any dma-fence-array or dma-fence-chain that is fed into the 
dma-fence-array to flatten out the input. In fact all dma-fence-array 
users would need to do that, and even dma-fence-chain users would need 
to watch out not to join chains together or accidentally add an array 
that perhaps came as a disguised dma_fence from another driver.
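
That flattening step can be sketched in self-contained userspace C 
(illustrative types only, not the kernel structs): walk any chain/array 
mix fed in as input and collect only the plain leaf fences, so the new 
container never nests another container.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the flattening rule: a new container is built only from
 * plain fences, never from other chain/array containers. The real code
 * would additionally dedupe and keep only the newest fence per context. */
enum fence_kind { FENCE_PLAIN, FENCE_ARRAY, FENCE_CHAIN };

struct fence {
	enum fence_kind kind;
	struct fence **fences;		/* children of an ARRAY */
	size_t num_fences;
	struct fence *prev;		/* previous link of a CHAIN */
	struct fence *contained;	/* fence carried by a CHAIN link */
};

/* Append the plain leaves of @f to @out[n..]; returns the new count.
 * Recursion depth is bounded by the nesting depth of the input, which
 * per the rule above is at worst chain->array->fence. */
static size_t flatten(struct fence *f, struct fence **out, size_t n)
{
	switch (f->kind) {
	case FENCE_PLAIN:
		out[n++] = f;
		return n;
	case FENCE_ARRAY:
		for (size_t i = 0; i < f->num_fences; i++)
			n = flatten(f->fences[i], out, n);
		return n;
	case FENCE_CHAIN:
		for (struct fence *link = f; link; link = link->prev)
			n = flatten(link->contained, out, n);
		return n;
	}
	return n;
}
```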

So the purpose, to me, would be to allow these containers as input to 
each other without a lot of in-driver special-casing, be it by breaking 
recursion or by built-in flattening, to avoid

a) hitting issues in the future or with existing interoperating drivers;
b) driver-private containers that also might break interoperability. 
(For example, the i915's currently driver-private dma_fence_work avoids 
all these problems, but we're attempting to address issues in common 
code rather than re-inventing stuff internally.)

/Thomas


>
> Christian.
>
>>
>> /Thomas
>>
>>
>>>
>>> Christian.
>>>
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> /Thomas
>>>>
>>>>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Intel-gfx] [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
@ 2021-12-01 11:04                                 ` Thomas Hellström (Intel)
  0 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström (Intel) @ 2021-12-01 11:04 UTC (permalink / raw)
  To: Christian König, Thomas Hellström, Maarten Lankhorst,
	intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld


On 12/1/21 11:32, Christian König wrote:
> Am 01.12.21 um 11:15 schrieb Thomas Hellström (Intel):
>> [SNIP]
>>>
>>> What we could do is to avoid all this by not calling the callback 
>>> with the lock held in the first place.
>>
>> If that's possible that might be a good idea, pls also see below.
>
> The problem with that is 
> dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we 
> could avoid using that or at least allow it to drop the lock then we 
> could call the callback without holding it.
>
> Somebody would need to audit the drivers and see if holding the lock 
> is really necessary anywhere.
>
>>>
>>>>>
>>>>>>>
>>>>>>> /Thomas
>>>>>>
>>>>>> Oh, and a follow up question:
>>>>>>
>>>>>> If there was a way to break the recursion on final put() (using 
>>>>>> the same basic approach as patch 2 in this series uses to break 
>>>>>> recursion in enable_signaling()), so that none of these 
>>>>>> containers did require any special treatment, would it be worth 
>>>>>> pursuing? I guess it might be possible by having the callbacks 
>>>>>> drop the references rather than the loop in the final put. + a 
>>>>>> couple of changes in code iterating over the fence pointers.
>>>>>
>>>>> That won't really help, you just move the recursion from the final 
>>>>> put into the callback.
>>>>
>>>> How do we recurse from the callback? The introduced fence_put() of 
>>>> individual fence pointers
>>>> doesn't recurse anymore (at most 1 level), and any callback 
>>>> recursion is broken by the irq_work?
>>>
>>> Yeah, but then you would need to take another lock to avoid racing 
>>> with dma_fence_array_signaled().
>>>
>>>>
>>>> I figure the big amount of work would be to adjust code that 
>>>> iterates over the individual fence pointers to recognize that they 
>>>> are rcu protected.
>>>
>>> Could be that we could solve this with RCU, but that sounds like a 
>>> lot of churn for no gain at all.
>>>
>>> In other words even with the problems solved I think it would be a 
>>> really bad idea to allow chaining of dma_fence_array objects.
>>
>> Yes, that was really the question, Is it worth pursuing this? I'm not 
>> really suggesting we should allow this as an intentional feature. I'm 
>> worried, however, that if we allow these containers to start floating 
>> around cross-driver (or even internally) disguised as ordinary 
>> dma_fences, they would require a lot of driver special casing, or 
>> else completely unexpected WARN_ON()s and lockdep splats would start 
>> to turn up, scaring people off from using them. And that would be a 
>> breeding ground for hairy driver-private constructs.
>
> Well the question is why we would want to do it?
>
> If it's to avoid inter driver lock dependencies by avoiding to call 
> the callback with the spinlock held, then yes please. We had tons of 
> problems with that, resulting in irq_work and work_item delegation all 
> over the place.

Yes, that sounds like something desirable, but in these containers, 
what's causing the lock dependencies is the enable_signaling() callback 
that is typically called locked.


>
> If it's to allow nesting of dma_fence_array instances, then it's most 
> likely a really bad idea even if we fix all the locking order problems.

Well I think my use-case where I hit a dead end may illustrate what 
worries me here:

1) We use a dma-fence-array to coalesce all dependencies for ttm object 
migration.
2) We use a dma-fence-chain to order the resulting dma_fence into a 
timeline because the TTM resource manager code requires that.

Initially seemingly harmless to me.

But after a sequence evict->alloc->clear, the dma-fence-chain feeds into 
the dma-fence-array for the clearing operation. Code still works fine, 
and no deep recursion, no warnings. But if I were to add another driver 
to the system that instead feeds a dma-fence-array into a 
dma-fence-chain, this would give me a lockdep splat.

So then if somebody were to come up with the splendid idea of using a 
dma-fence-chain to initially coalesce fences, I'd hit the same problem 
or risk illegally joining two dma-fence-chains together.

To fix this, I would need to look at the incoming fences and iterate 
over any dma-fence-array or dma-fence-chain that is fed into the 
dma-fence-array to flatten out the input. In fact all dma-fence-array 
users would need to do that, and even dma-fence-chain users would need 
to watch out for joining chains together or accidentally adding an array 
that perhaps came as a disguised dma-fence from another driver.

So the purpose to me would be to allow these containers as input to 
each other without a lot of in-driver special-casing, be it by breaking 
recursion or by built-in flattening, to avoid

a) Hitting issues in the future or with existing interoperating drivers.
b) Driver-private containers that also might break interoperability. 
(For example the currently driver-private i915 dma_fence_work avoids all 
these problems, but we're attempting to address issues in common code 
rather than re-inventing stuff internally).

/Thomas


>
> Christian.
>
>>
>> /Thomas
>>
>>
>>>
>>> Christian.
>>>
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> /Thomas
>>>>
>>>>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-01 11:04                                 ` [Intel-gfx] " Thomas Hellström (Intel)
@ 2021-12-01 11:25                                   ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-12-01 11:25 UTC (permalink / raw)
  To: Thomas Hellström (Intel),
	Thomas Hellström, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

Am 01.12.21 um 12:04 schrieb Thomas Hellström (Intel):
>
> On 12/1/21 11:32, Christian König wrote:
>> Am 01.12.21 um 11:15 schrieb Thomas Hellström (Intel):
>>> [SNIP]
>>>>
>>>> What we could do is to avoid all this by not calling the callback 
>>>> with the lock held in the first place.
>>>
>>> If that's possible that might be a good idea, pls also see below.
>>
>> The problem with that is 
>> dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we 
>> could avoid using that or at least allow it to drop the lock then we 
>> could call the callback without holding it.
>>
>> Somebody would need to audit the drivers and see if holding the lock 
>> is really necessary anywhere.
>>
>>>>
>>>>>>
>>>>>>>>
>>>>>>>> /Thomas
>>>>>>>
>>>>>>> Oh, and a follow up question:
>>>>>>>
>>>>>>> If there was a way to break the recursion on final put() (using 
>>>>>>> the same basic approach as patch 2 in this series uses to break 
>>>>>>> recursion in enable_signaling()), so that none of these 
>>>>>>> containers did require any special treatment, would it be worth 
>>>>>>> pursuing? I guess it might be possible by having the callbacks 
>>>>>>> drop the references rather than the loop in the final put. + a 
>>>>>>> couple of changes in code iterating over the fence pointers.
>>>>>>
>>>>>> That won't really help, you just move the recursion from the 
>>>>>> final put into the callback.
>>>>>
>>>>> How do we recurse from the callback? The introduced fence_put() of 
>>>>> individual fence pointers
>>>>> doesn't recurse anymore (at most 1 level), and any callback 
>>>>> recursion is broken by the irq_work?
>>>>
>>>> Yeah, but then you would need to take another lock to avoid racing 
>>>> with dma_fence_array_signaled().
>>>>
>>>>>
>>>>> I figure the big amount of work would be to adjust code that 
>>>>> iterates over the individual fence pointers to recognize that they 
>>>>> are rcu protected.
>>>>
>>>> Could be that we could solve this with RCU, but that sounds like a 
>>>> lot of churn for no gain at all.
>>>>
>>>> In other words even with the problems solved I think it would be a 
>>>> really bad idea to allow chaining of dma_fence_array objects.
>>>
>>> Yes, that was really the question, Is it worth pursuing this? I'm 
>>> not really suggesting we should allow this as an intentional 
>>> feature. I'm worried, however, that if we allow these containers to 
>>> start floating around cross-driver (or even internally) disguised as 
>>> ordinary dma_fences, they would require a lot of driver special 
>>> casing, or else completely unexpected WARN_ON()s and lockdep splats 
>>> would start to turn up, scaring people off from using them. And that 
>>> would be a breeding ground for hairy driver-private constructs.
>>
>> Well the question is why we would want to do it?
>>
>> If it's to avoid inter driver lock dependencies by avoiding to call 
>> the callback with the spinlock held, then yes please. We had tons of 
>> problems with that, resulting in irq_work and work_item delegation 
>> all over the place.
>
> Yes, that sounds like something desirable, but in these containers, 
> what's causing the lock dependencies is the enable_signaling() 
> callback that is typically called locked.
>
>
>>
>> If it's to allow nesting of dma_fence_array instances, then it's most 
>> likely a really bad idea even if we fix all the locking order problems.
>
> Well I think my use-case where I hit a dead end may illustrate what 
> worries me here:
>
> 1) We use a dma-fence-array to coalesce all dependencies for ttm 
> object migration.
> 2) We use a dma-fence-chain to order the resulting dma_fence into a 
> timeline because the TTM resource manager code requires that.
>
> Initially seemingly harmless to me.
>
> But after a sequence evict->alloc->clear, the dma-fence-chain feeds 
> into the dma-fence-array for the clearing operation. Code still works 
> fine, and no deep recursion, no warnings. But if I were to add another 
> driver to the system that instead feeds a dma-fence-array into a 
> dma-fence-chain, this would give me a lockdep splat.
>
> So then if somebody were to come up with the splendid idea of using a 
> dma-fence-chain to initially coalesce fences, I'd hit the same problem 
> or risk illegally joining two dma-fence-chains together.
>
> To fix this, I would need to look at the incoming fences and iterate 
> over any dma-fence-array or dma-fence-chain that is fed into the 
> dma-fence-array to flatten out the input. In fact all dma-fence-array 
> users would need to do that, and even dma-fence-chain users would 
> need to watch out for joining chains together or accidentally adding 
> an array that perhaps came as a disguised dma-fence from another driver.
>
> So the purpose to me would be to allow these containers as input to 
> each other without a lot of in-driver special-casing, be it by breaking 
> recursion or by built-in flattening, to avoid
>
> a) Hitting issues in the future or with existing interoperating drivers.
> b) Avoid driver-private containers that also might break the 
> interoperability. (For example the i915 currently driver-private 
> dma_fence_work avoids all these problems, but we're attempting to 
> address issues in common code rather than re-inventing stuff internally).

I don't think that a dma_fence_array or dma_fence_chain is the right 
thing to begin with in those use cases.

When you want to coalesce the dependencies for a job you could either 
use an xarray like Daniel did for the scheduler or some hashtable like 
we use in amdgpu. But I don't see the need for exposing the dma_fence 
interface for those.

And why do you use dma_fence_chain to generate a timeline for TTM? That 
should come naturally because all the moves must be ordered.

Regards,
Christian.




^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Intel-gfx] [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-01 11:25                                   ` [Intel-gfx] " Christian König
@ 2021-12-01 12:16                                     ` Thomas Hellström (Intel)
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström (Intel) @ 2021-12-01 12:16 UTC (permalink / raw)
  To: Christian König, Thomas Hellström, Maarten Lankhorst,
	intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld


On 12/1/21 12:25, Christian König wrote:
> Am 01.12.21 um 12:04 schrieb Thomas Hellström (Intel):
>>
>> On 12/1/21 11:32, Christian König wrote:
>>> Am 01.12.21 um 11:15 schrieb Thomas Hellström (Intel):
>>>> [SNIP]
>>>>>
>>>>> What we could do is to avoid all this by not calling the callback 
>>>>> with the lock held in the first place.
>>>>
>>>> If that's possible that might be a good idea, pls also see below.
>>>
>>> The problem with that is 
>>> dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we 
>>> could avoid using that or at least allow it to drop the lock then we 
>>> could call the callback without holding it.
>>>
>>> Somebody would need to audit the drivers and see if holding the lock 
>>> is really necessary anywhere.
>>>
>>>>>
>>>>>>>
>>>>>>>>>
>>>>>>>>> /Thomas
>>>>>>>>
>>>>>>>> Oh, and a follow up question:
>>>>>>>>
>>>>>>>> If there was a way to break the recursion on final put() (using 
>>>>>>>> the same basic approach as patch 2 in this series uses to break 
>>>>>>>> recursion in enable_signaling()), so that none of these 
>>>>>>>> containers did require any special treatment, would it be worth 
>>>>>>>> pursuing? I guess it might be possible by having the callbacks 
>>>>>>>> drop the references rather than the loop in the final put. + a 
>>>>>>>> couple of changes in code iterating over the fence pointers.
>>>>>>>
>>>>>>> That won't really help, you just move the recursion from the 
>>>>>>> final put into the callback.
>>>>>>
>>>>>> How do we recurse from the callback? The introduced fence_put() 
>>>>>> of individual fence pointers
>>>>>> doesn't recurse anymore (at most 1 level), and any callback 
>>>>>> recursion is broken by the irq_work?
>>>>>
>>>>> Yeah, but then you would need to take another lock to avoid racing 
>>>>> with dma_fence_array_signaled().
>>>>>
>>>>>>
>>>>>> I figure the big amount of work would be to adjust code that 
>>>>>> iterates over the individual fence pointers to recognize that 
>>>>>> they are rcu protected.
>>>>>
>>>>> Could be that we could solve this with RCU, but that sounds like a 
>>>>> lot of churn for no gain at all.
>>>>>
>>>>> In other words even with the problems solved I think it would be a 
>>>>> really bad idea to allow chaining of dma_fence_array objects.
>>>>
>>>> Yes, that was really the question, Is it worth pursuing this? I'm 
>>>> not really suggesting we should allow this as an intentional 
>>>> feature. I'm worried, however, that if we allow these containers to 
>>>> start floating around cross-driver (or even internally) disguised 
>>>> as ordinary dma_fences, they would require a lot of driver special 
>>>> casing, or else completely unexpected WARN_ON()s and lockdep splats 
>>>> would start to turn up, scaring people off from using them. And 
>>>> that would be a breeding ground for hairy driver-private constructs.
>>>
>>> Well the question is why we would want to do it?
>>>
>>> If it's to avoid inter driver lock dependencies by avoiding to call 
>>> the callback with the spinlock held, then yes please. We had tons of 
>>> problems with that, resulting in irq_work and work_item delegation 
>>> all over the place.
>>
>> Yes, that sounds like something desirable, but in these containers, 
>> what's causing the lock dependencies is the enable_signaling() 
>> callback that is typically called locked.
>>
>>
>>>
>>> If it's to allow nesting of dma_fence_array instances, then it's 
>>> most likely a really bad idea even if we fix all the locking order 
>>> problems.
>>
>> Well I think my use-case where I hit a dead end may illustrate what 
>> worries me here:
>>
>> 1) We use a dma-fence-array to coalesce all dependencies for ttm 
>> object migration.
>> 2) We use a dma-fence-chain to order the resulting dma_fence into a 
>> timeline because the TTM resource manager code requires that.
>>
>> Initially seemingly harmless to me.
>>
>> But after a sequence evict->alloc->clear, the dma-fence-chain feeds 
>> into the dma-fence-array for the clearing operation. Code still works 
>> fine, and no deep recursion, no warnings. But if I were to add 
>> another driver to the system that instead feeds a dma-fence-array 
>> into a dma-fence-chain, this would give me a lockdep splat.
>>
>> So then if somebody were to come up with the splendid idea of using a 
>> dma-fence-chain to initially coalesce fences, I'd hit the same 
>> problem or risk illegally joining two dma-fence-chains together.
>>
>> To fix this, I would need to look at the incoming fences and iterate 
>> over any dma-fence-array or dma-fence-chain that is fed into the 
>> dma-fence-array to flatten out the input. In fact all dma-fence-array 
>> users would need to do that, and even dma-fence-chain users would 
>> need to watch out for joining chains together or accidentally adding 
>> an array that perhaps came as a disguised dma-fence from another driver.
>>
>> So the purpose to me would be to allow these containers as input to 
>> each other without a lot of in-driver special-casing, be it by 
>> breaking recursion or by built-in flattening, to avoid
>>
>> a) Hitting issues in the future or with existing interoperating drivers.
>> b) Avoid driver-private containers that also might break the 
>> interoperability. (For example the i915 currently driver-private 
>> dma_fence_work avoids all these problems, but we're attempting to 
>> address issues in common code rather than re-inventing stuff 
>> internally).
>
> I don't think that a dma_fence_array or dma_fence_chain is the right 
> thing to begin with in those use cases.
>
> When you want to coalesce the dependencies for a job you could either 
> use an xarray like Daniel did for the scheduler or some hashtable like 
> we use in amdgpu. But I don't see the need for exposing the dma_fence 
> interface for those.

This is because the interface to our migration code takes just a single 
dma-fence as dependency. That is of course something we need to look at 
mitigating, but see below.

>
> And why do you use dma_fence_chain to generate a timeline for TTM? 
> That should come naturally because all the moves must be ordered.

Oh, in this case because we're looking at adding stuff at the end of 
migration (like coalescing object shared fences and / or async unbind 
fences), which may not complete in order.

But that's not really the point; the point was that an (at least to me) 
seemingly harmless usage pattern, be it real or fictitious, ends up 
giving you severe internal or cross-driver headaches.

/Thomas


>
> Regards,
> Christian.
>
>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-01 12:16                                     ` Thomas Hellström (Intel)
@ 2021-12-03 13:08                                       ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-12-03 13:08 UTC (permalink / raw)
  To: Thomas Hellström (Intel),
	Thomas Hellström, Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
>
> On 12/1/21 12:25, Christian König wrote:
>> Am 01.12.21 um 12:04 schrieb Thomas Hellström (Intel):
>>>
>>> On 12/1/21 11:32, Christian König wrote:
>>>> Am 01.12.21 um 11:15 schrieb Thomas Hellström (Intel):
>>>>> [SNIP]
>>>>>>
>>>>>> What we could do is to avoid all this by not calling the callback 
>>>>>> with the lock held in the first place.
>>>>>
>>>>> If that's possible that might be a good idea, pls also see below.
>>>>
>>>> The problem with that is 
>>>> dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If 
>>>> we could avoid using that or at least allow it to drop the lock 
>>>> then we could call the callback without holding it.
>>>>
>>>> Somebody would need to audit the drivers and see if holding the 
>>>> lock is really necessary anywhere.
>>>>
>>>>>>
>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> /Thomas
>>>>>>>>>
>>>>>>>>> Oh, and a follow up question:
>>>>>>>>>
>>>>>>>>> If there was a way to break the recursion on final put() 
>>>>>>>>> (using the same basic approach as patch 2 in this series uses 
>>>>>>>>> to break recursion in enable_signaling()), so that none of 
>>>>>>>>> these containers did require any special treatment, would it 
>>>>>>>>> be worth pursuing? I guess it might be possible by having the 
>>>>>>>>> callbacks drop the references rather than the loop in the 
>>>>>>>>> final put. + a couple of changes in code iterating over the 
>>>>>>>>> fence pointers.
>>>>>>>>
>>>>>>>> That won't really help, you just move the recursion from the 
>>>>>>>> final put into the callback.
>>>>>>>
>>>>>>> How do we recurse from the callback? The introduced fence_put() 
>>>>>>> of individual fence pointers
>>>>>>> doesn't recurse anymore (at most 1 level), and any callback 
>>>>>>> recursion is broken by the irq_work?
>>>>>>
>>>>>> Yeah, but then you would need to take another lock to avoid 
>>>>>> racing with dma_fence_array_signaled().
>>>>>>
>>>>>>>
>>>>>>> I figure the big amount of work would be to adjust code that 
>>>>>>> iterates over the individual fence pointers to recognize that 
>>>>>>> they are rcu protected.
>>>>>>
>>>>>> Could be that we could solve this with RCU, but that sounds like 
>>>>>> a lot of churn for no gain at all.
>>>>>>
>>>>>> In other words even with the problems solved I think it would be 
>>>>>> a really bad idea to allow chaining of dma_fence_array objects.
>>>>>
>>>>> Yes, that was really the question, Is it worth pursuing this? I'm 
>>>>> not really suggesting we should allow this as an intentional 
>>>>> feature. I'm worried, however, that if we allow these containers 
>>>>> to start floating around cross-driver (or even internally) 
>>>>> disguised as ordinary dma_fences, they would require a lot of 
>>>>> driver special casing, or else completely unexpected WARN_ON()s and 
>>>>> lockdep splats would start to turn up, scaring people off from 
>>>>> using them. And that would be a breeding ground for hairy 
>>>>> driver-private constructs.
>>>>
>>>> Well the question is why we would want to do it?
>>>>
>>>> If it's to avoid inter driver lock dependencies by avoiding to call 
>>>> the callback with the spinlock held, then yes please. We had tons 
>>>> of problems with that, resulting in irq_work and work_item 
>>>> delegation all over the place.
>>>
>>> Yes, that sounds like something desirable, but in these containers, 
>>> what's causing the lock dependencies is the enable_signaling() 
>>> callback that is typically called locked.
>>>
>>>
>>>>
>>>> If it's to allow nesting of dma_fence_array instances, then it's 
>>>> most likely a really bad idea even if we fix all the locking order 
>>>> problems.
>>>
>>> Well I think my use-case where I hit a dead end may illustrate what 
>>> worries me here:
>>>
>>> 1) We use a dma-fence-array to coalesce all dependencies for ttm 
>>> object migration.
>>> 2) We use a dma-fence-chain to order the resulting dma_fence into a 
>>> timeline because the TTM resource manager code requires that.
>>>
>>> Initially seemingly harmless to me.
>>>
>>> But after a sequence evict->alloc->clear, the dma-fence-chain feeds 
>>> into the dma-fence-array for the clearing operation. Code still 
>>> works fine, and no deep recursion, no warnings. But if I were to add 
>>> another driver to the system that instead feeds a dma-fence-array 
>>> into a dma-fence-chain, this would give me a lockdep splat.
>>>
>>> So then if somebody were to come up with the splendid idea of using 
>>> a dma-fence-chain to initially coalesce fences, I'd hit the same 
>>> problem or risk illegally joining two dma-fence-chains together.
>>>
>>> To fix this, I would need to look at the incoming fences and iterate 
>>> over any dma-fence-array or dma-fence-chain that is fed into the 
>>> dma-fence-array to flatten out the input. In fact all 
>>> dma-fence-array users would need to do that, and even 
>>> dma-fence-chain users would need to watch out for not joining chains 
>>> together or accidentally adding an array that perhaps came as a 
>>> disguised dma-fence from another driver.
>>>
>>> So the purpose to me would be to allow these containers as input to 
>>> each other without a lot of in-driver special-casing, be it by 
>>> breaking recursion or built-in flattening, to avoid
>>>
>>> a) Hitting issues in the future or with existing interoperating 
>>> drivers.
>>> b) Avoid driver-private containers that also might break the 
>>> interoperability. (For example the i915's currently driver-private 
>>> dma_fence_work avoids all these problems, but we're attempting to 
>>> address issues in common code rather than re-inventing stuff 
>>> internally).
>>
>> I don't think that a dma_fence_array or dma_fence_chain is the right 
>> thing to begin with in those use cases.
>>
>> When you want to coalesce the dependencies for a job you could either 
>> use an xarray like Daniel did for the scheduler or some hashtable 
>> like we use in amdgpu. But I don't see the need for exposing the 
>> dma_fence interface for those.
>
> This is because the interface to our migration code takes just a 
> single dma-fence as dependency. Now this is of course something we 
> need to look at to mitigate this, but see below.

Yeah, that's actually fine.

>>
>> And why do you use dma_fence_chain to generate a timeline for TTM? 
>> That should come naturally because all the moves must be ordered.
>
> Oh, in this case because we're looking at adding stuff at the end of 
> migration (like coalescing object shared fences and / or async unbind 
> fences), which may not complete in order.

Well that's ok as well. My question is why does this single dma_fence 
then show up in the dma_fence_chain representing the whole migration?

That somehow doesn't seem to make sense because each individual step of 
the migration needs to wait for those dependencies as well even when it 
runs in parallel.

> But that's not really the point, the point was that an (at least to 
> me) seemingly harmless usage pattern, be it real or fictitious, ends up 
> giving you severe internal- or cross-driver headaches.

Yeah, we probably should document that better. But in general I don't 
see much reason to allow mixing containers. The dma_fence_array and 
dma_fence_chain objects have some distinct use cases, and using them 
to build up larger dependency structures sounds really questionable.

Christian.

>
> /Thomas
>
>
>>
>> Regards,
>> Christian.
>>
>>


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-03 13:08                                       ` [Intel-gfx] " Christian König
@ 2021-12-03 14:18                                         ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-12-03 14:18 UTC (permalink / raw)
  To: Christian König, Thomas Hellström (Intel),
	Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
> Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
> > 
> > On 12/1/21 12:25, Christian König wrote:
> > > Am 01.12.21 um 12:04 schrieb Thomas Hellström (Intel):
> > > > 
> > > > On 12/1/21 11:32, Christian König wrote:
> > > > > Am 01.12.21 um 11:15 schrieb Thomas Hellström (Intel):
> > > > > > [SNIP]
> > > > > > > 
> > > > > > > What we could do is to avoid all this by not calling the
> > > > > > > callback 
> > > > > > > with the lock held in the first place.
> > > > > > 
> > > > > > If that's possible that might be a good idea, pls also see
> > > > > > below.
> > > > > 
> > > > > The problem with that is 
> > > > > dma_fence_signal_locked()/dma_fence_signal_timestamp_locked()
> > > > > . If 
> > > > > we could avoid using that or at least allow it to drop the
> > > > > lock 
> > > > > then we could call the callback without holding it.
> > > > > 
> > > > > Somebody would need to audit the drivers and see if holding
> > > > > the 
> > > > > lock is really necessary anywhere.
> > > > > 
> > > > > > > 
> > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > /Thomas
> > > > > > > > > > 
> > > > > > > > > > Oh, and a follow up question:
> > > > > > > > > > 
> > > > > > > > > > If there was a way to break the recursion on final
> > > > > > > > > > put() 
> > > > > > > > > > (using the same basic approach as patch 2 in this
> > > > > > > > > > series uses 
> > > > > > > > > > to break recursion in enable_signaling()), so that
> > > > > > > > > > none of 
> > > > > > > > > > these containers did require any special treatment,
> > > > > > > > > > would it 
> > > > > > > > > > be worth pursuing? I guess it might be possible by
> > > > > > > > > > having the 
> > > > > > > > > > callbacks drop the references rather than the loop
> > > > > > > > > > in the 
> > > > > > > > > > final put. + a couple of changes in code iterating
> > > > > > > > > > over the 
> > > > > > > > > > fence pointers.
> > > > > > > > > 
> > > > > > > > > That won't really help, you just move the recursion
> > > > > > > > > from the 
> > > > > > > > > final put into the callback.
> > > > > > > > 
> > > > > > > > How do we recurse from the callback? The introduced
> > > > > > > > fence_put() 
> > > > > > > > of individual fence pointers
> > > > > > > > doesn't recurse anymore (at most 1 level), and any
> > > > > > > > callback 
> > > > > > > > recursion is broken by the irq_work?
> > > > > > > 
> > > > > > > Yeah, but then you would need to take another lock to
> > > > > > > avoid 
> > > > > > > racing with dma_fence_array_signaled().
> > > > > > > 
> > > > > > > > 
> > > > > > > > I figure the big amount of work would be to adjust code
> > > > > > > > that 
> > > > > > > > iterates over the individual fence pointers to
> > > > > > > > recognize that 
> > > > > > > > they are rcu protected.
> > > > > > > 
> > > > > > > Could be that we could solve this with RCU, but that
> > > > > > > sounds like 
> > > > > > > a lot of churn for no gain at all.
> > > > > > > 
> > > > > > > In other words even with the problems solved I think it
> > > > > > > would be 
> > > > > > > a really bad idea to allow chaining of dma_fence_array
> > > > > > > objects.
> > > > > > 
> > > > > > Yes, that was really the question, Is it worth pursuing
> > > > > > this? I'm 
> > > > > > not really suggesting we should allow this as an
> > > > > > intentional 
> > > > > > feature. I'm worried, however, that if we allow these
> > > > > > containers 
> > > > > > to start floating around cross-driver (or even internally) 
> > > > > > disguised as ordinary dma_fences, they would require a lot
> > > > > > of 
> > > > > > driver special casing, or else completely unexpected
> > > > > > WARN_ON()s and 
> > > > > > lockdep splats would start to turn up, scaring people off
> > > > > > from 
> > > > > > using them. And that would be a breeding ground for hairy 
> > > > > > driver-private constructs.
> > > > > 
> > > > > Well the question is why we would want to do it?
> > > > > 
> > > > > If it's to avoid inter driver lock dependencies by avoiding
> > > > > to call 
> > > > > the callback with the spinlock held, then yes please. We had
> > > > > tons 
> > > > > of problems with that, resulting in irq_work and work_item 
> > > > > delegation all over the place.
> > > > 
> > > > Yes, that sounds like something desirable, but in these
> > > > containers, 
> > > > what's causing the lock dependencies is the enable_signaling() 
> > > > callback that is typically called locked.
> > > > 
> > > > 
> > > > > 
> > > > > If it's to allow nesting of dma_fence_array instances, then
> > > > > it's 
> > > > > most likely a really bad idea even if we fix all the locking
> > > > > order 
> > > > > problems.
> > > > 
> > > > Well I think my use-case where I hit a dead end may illustrate
> > > > what 
> > > > worries me here:
> > > > 
> > > > 1) We use a dma-fence-array to coalesce all dependencies for
> > > > ttm 
> > > > object migration.
> > > > 2) We use a dma-fence-chain to order the resulting dma_fence
> > > > into a 
> > > > timeline because the TTM resource manager code requires that.
> > > > 
> > > > Initially seemingly harmless to me.
> > > > 
> > > > But after a sequence evict->alloc->clear, the dma-fence-chain
> > > > feeds 
> > > > into the dma-fence-array for the clearing operation. Code still
> > > > works fine, and no deep recursion, no warnings. But if I were
> > > > to add 
> > > > another driver to the system that instead feeds a dma-fence-
> > > > array 
> > > > into a dma-fence-chain, this would give me a lockdep splat.
> > > > 
> > > > So then if somebody were to come up with the splendid idea of
> > > > using 
> > > > a dma-fence-chain to initially coalesce fences, I'd hit the
> > > > same 
> > > > problem or risk illegally joining two dma-fence-chains together.
> > > > 
> > > > To fix this, I would need to look at the incoming fences and
> > > > iterate 
> > > > over any dma-fence-array or dma-fence-chain that is fed into
> > > > the 
> > > > dma-fence-array to flatten out the input. In fact all 
> > > > dma-fence-array users would need to do that, and even 
> > > > dma-fence-chain users would need to watch out for not joining
> > > > chains together or accidentally adding an array that perhaps
> > > > came as a disguised dma-fence from another driver.
> > > > 
> > > > So the purpose to me would be to allow these containers as
> > > > input to 
> > > > each other without a lot of in-driver special-casing, be it by 
> > > > breaking recursion or built-in flattening, to avoid
> > > > 
> > > > a) Hitting issues in the future or with existing interoperating
> > > > drivers.
> > > > b) Avoid driver-private containers that also might break the 
> > > > interoperability. (For example the i915's currently driver-private
> > > > dma_fence_work avoids all these problems, but we're attempting
> > > > to 
> > > > address issues in common code rather than re-inventing stuff 
> > > > internally).
> > > 
> > > I don't think that a dma_fence_array or dma_fence_chain is the
> > > right 
> > > thing to begin with in those use cases.
> > > 
> > > When you want to coalesce the dependencies for a job you could
> > > either 
> > > use an xarray like Daniel did for the scheduler or some hashtable
> > > like we use in amdgpu. But I don't see the need for exposing the 
> > > dma_fence interface for those.
> > 
> > This is because the interface to our migration code takes just a 
> > single dma-fence as dependency. Now this is of course something we 
> > need to look at to mitigate this, but see below.
> 
> Yeah, that's actually fine.
> 
> > > 
> > > And why do you use dma_fence_chain to generate a timeline for
> > > TTM? 
> > > That should come naturally because all the moves must be ordered.
> > 
> > Oh, in this case because we're looking at adding stuff at the end
> > of 
> > migration (like coalescing object shared fences and / or async
> > unbind 
> > fences), which may not complete in order.
> 
> Well that's ok as well. My question is why does this single dma_fence
> then show up in the dma_fence_chain representing the whole
> migration?

What we'd like to happen during eviction is that we

1) Await any exclusive or moving fences, then schedule the migration
blit. The blit manages its own GPU PTEs. Results in a single fence.
2) Schedule unbind of any GPU VMAs, possibly resulting in multiple
fences.
3) Most, but not all, of the remaining resv shared fences will have been
finished in 2). We can't easily tell which, so we have a couple of
shared fences left.
4) Add all fences resulting from 1), 2) and 3) into the per-memory-type
dma-fence-chain.
5) Hand the resulting dma-fence-chain, representing the end of
migration, over to TTM's resource manager.

Now this means we have a dma-fence-chain disguised as a dma-fence out
in the wild, and it could in theory reappear as a 3) fence for another
migration unless a very careful audit is done, or as an input to the
dma-fence-array used for that single dependency.
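
Roughly, steps 1)-5) above would map onto the fence APIs like this
(pseudocode sketch, not compilable as-is; dma_fence_array_create() and
dma_fence_chain_init() are the real APIs, every other name is made up
for illustration):

```c
/* Pseudocode sketch of the eviction flow above. */
struct dma_fence *evict_bo(struct bo *bo, struct mem_type *mem)
{
	struct dma_fence *deps[MAX_DEPS];
	unsigned int n = 0;

	/* 1) The blit awaits exclusive/moving fences itself; one fence out. */
	deps[n++] = schedule_migration_blit(bo);

	/* 2) Unbind all GPU VMAs; possibly several fences. */
	n += schedule_vma_unbinds(bo, &deps[n]);

	/* 3) Collect the leftover resv shared fences we can't prove done. */
	n += collect_remaining_shared_fences(bo, &deps[n]);

	/* 4) Coalesce 1)-3) and link the result into the per-memory-type
	 *    timeline. */
	struct dma_fence_array *array =
		dma_fence_array_create(n, deps, mem->fence_context,
				       mem->seqno, false);
	dma_fence_chain_init(mem->chain_node, mem->last_fence,
			     &array->base, ++mem->chain_seqno);

	/* 5) This chain fence now represents "migration done" for TTM. */
	return &mem->chain_node->base;
}
```

Step 4) is where the chain-disguised-as-fence can leak back in: nothing
stops a previous chain fence from reappearing among deps[].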

> 
> That somehow doesn't seem to make sense because each individual step
> of 
> the migration needs to wait for those dependencies as well even when
> it 
> runs in parallel.
> 
> > But that's not really the point, the point was that an (at least to
> > me) seemingly harmless usage pattern, be it real or fictious, ends
> > up 
> > giving you severe internal- or cross-driver headaches.
> 
> Yeah, we probably should document that better. But in general I don't
> see much reason to allow mixing containers. The dma_fence_array and 
> dma_fence_chain objects have some distinct use cases, and using
> them 
> to build up larger dependency structures sounds really questionable.

Yes, I tend to agree to some extent here. Perhaps add warnings when
adding a chain or array as an input to an array and when accidentally
joining chains, and provide helpers for flattening if needed.

/Thomas


> 
> Christian.
> 
> > 
> > /Thomas
> > 
> > 
> > > 
> > > Regards,
> > > Christian.
> > > 
> > > 
> 



^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Intel-gfx] [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
@ 2021-12-03 14:18                                         ` Thomas Hellström
  0 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-12-03 14:18 UTC (permalink / raw)
  To: Christian König, Thomas Hellström (Intel),
	Maarten Lankhorst, intel-gfx, dri-devel
  Cc: linaro-mm-sig, matthew.auld

On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
> Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
> > 
> > On 12/1/21 12:25, Christian König wrote:
> > > Am 01.12.21 um 12:04 schrieb Thomas Hellström (Intel):
> > > > 
> > > > On 12/1/21 11:32, Christian König wrote:
> > > > > Am 01.12.21 um 11:15 schrieb Thomas Hellström (Intel):
> > > > > > [SNIP]
> > > > > > > 
> > > > > > > What we could do is to avoid all this by not calling the
> > > > > > > callback 
> > > > > > > with the lock held in the first place.
> > > > > > 
> > > > > > If that's possible that might be a good idea, pls also see
> > > > > > below.
> > > > > 
> > > > > The problem with that is 
> > > > > dma_fence_signal_locked()/dma_fence_signal_timestamp_locked()
> > > > > . If 
> > > > > we could avoid using that or at least allow it to drop the
> > > > > lock 
> > > > > then we could call the callback without holding it.
> > > > > 
> > > > > Somebody would need to audit the drivers and see if holding
> > > > > the 
> > > > > lock is really necessary anywhere.
> > > > > 
> > > > > > > 
> > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > /Thomas
> > > > > > > > > > 
> > > > > > > > > > Oh, and a follow up question:
> > > > > > > > > > 
> > > > > > > > > > If there was a way to break the recursion on final
> > > > > > > > > > put() 
> > > > > > > > > > (using the same basic approach as patch 2 in this
> > > > > > > > > > series uses 
> > > > > > > > > > to break recursion in enable_signaling()), so that
> > > > > > > > > > none of 
> > > > > > > > > > these containers did require any special treatment,
> > > > > > > > > > would it 
> > > > > > > > > > be worth pursuing? I guess it might be possible by
> > > > > > > > > > having the 
> > > > > > > > > > callbacks drop the references rather than the loop
> > > > > > > > > > in the 
> > > > > > > > > > final put. + a couple of changes in code iterating
> > > > > > > > > > over the 
> > > > > > > > > > fence pointers.
> > > > > > > > > 
> > > > > > > > > That won't really help, you just move the recursion
> > > > > > > > > from the 
> > > > > > > > > final put into the callback.
> > > > > > > > 
> > > > > > > > How do we recurse from the callback? The introduced
> > > > > > > > fence_put() 
> > > > > > > > of individual fence pointers
> > > > > > > > doesn't recurse anymore (at most 1 level), and any
> > > > > > > > callback 
> > > > > > > > recursion is broken by the irq_work?
> > > > > > > 
> > > > > > > Yeah, but then you would need to take another lock to
> > > > > > > avoid 
> > > > > > > racing with dma_fence_array_signaled().
> > > > > > > 
> > > > > > > > 
> > > > > > > > I figure the big amount of work would be to adjust code
> > > > > > > > that 
> > > > > > > > iterates over the individual fence pointers to
> > > > > > > > recognize that 
> > > > > > > > they are rcu protected.
> > > > > > > 
> > > > > > > Could be that we could solve this with RCU, but that
> > > > > > > sounds like 
> > > > > > > a lot of churn for no gain at all.
> > > > > > > 
> > > > > > > In other words even with the problems solved I think it
> > > > > > > would be 
> > > > > > > a really bad idea to allow chaining of dma_fence_array
> > > > > > > objects.
> > > > > > 
> > > > > > Yes, that was really the question, Is it worth pursuing
> > > > > > this? I'm 
> > > > > > not really suggesting we should allow this as an
> > > > > > intentional 
> > > > > > feature. I'm worried, however, that if we allow these
> > > > > > containers 
> > > > > > to start floating around cross-driver (or even internally) 
> > > > > > disguised as ordinary dma_fences, they would require a lot
> > > > > > of 
> > > > > > driver special casing, or else completely unexpeced
> > > > > > WARN_ON()s and 
> > > > > > lockdep splats would start to turn up, scaring people off
> > > > > > from 
> > > > > > using them. And that would be a breeding ground for hairy 
> > > > > > driver-private constructs.
> > > > > 
> > > > > Well the question is why we would want to do it?
> > > > > 
> > > > > If it's to avoid inter driver lock dependencies by avoiding
> > > > > to call 
> > > > > the callback with the spinlock held, then yes please. We had
> > > > > tons 
> > > > > of problems with that, resulting in irq_work and work_item 
> > > > > delegation all over the place.
> > > > 
> > > > Yes, that sounds like something desirable, but in these
> > > > containers, 
> > > > what's causing the lock dependencies is the enable_signaling() 
> > > > callback that is typically called locked.
> > > > 
> > > > 
> > > > > 
> > > > > If it's to allow nesting of dma_fence_array instances, then
> > > > > it's 
> > > > > most likely a really bad idea even if we fix all the locking
> > > > > order 
> > > > > problems.
> > > > 
> > > > Well I think my use-case where I hit a dead end may illustrate
> > > > what 
> > > > worries me here:
> > > > 
> > > > 1) We use a dma-fence-array to coalesce all dependencies for
> > > > ttm 
> > > > object migration.
> > > > 2) We use a dma-fence-chain to order the resulting dm_fence
> > > > into a 
> > > > timeline because the TTM resource manager code requires that.
> > > > 
> > > > Initially seemingly harmless to me.
> > > > 
> > > > But after a sequence evict->alloc->clear, the dma-fence-chain
> > > > feeds 
> > > > into the dma-fence-array for the clearing operation. Code still
> > > > works fine, and no deep recursion, no warnings. But if I were
> > > > to add 
> > > > another driver to the system that instead feeds a dma-fence-
> > > > array 
> > > > into a dma-fence-chain, this would give me a lockdep splat.
> > > > 
> > > > So then if somebody were to come up with the splendid idea of
> > > > using 
> > > > a dma-fence-chain to initially coalesce fences, I'd hit the
> > > > same 
> > > > problem or risk illegaly joining two dma-fence-chains together.
> > > > 
> > > > To fix this, I would need to look at the incoming fences and
> > > > iterate 
> > > > over any dma-fence-array or dma-fence-chain that is fed into
> > > > the 
> > > > dma-fence-array to flatten out the input. In fact all 
> > > > dma-fence-array users would need to do that, and even 
> > > > dma-fence-chain users watching out for not joining chains
> > > > together 
> > > > or accidently add an array that perhaps came as a disguised 
> > > > dma-fence from antother driver.
> > > > 
> > > > So the purpose to me would be to allow these containers as
> > > > input to 
> > > > eachother without a lot of in-driver special-casing, be it by 
> > > > breaking recursion on built-in flattening to avoid
> > > > 
> > > > a) Hitting issues in the future or with existing interoperating
> > > > drivers.
> > > > b) Avoid driver-private containers that also might break
> > > > interoperability. (For example the i915 driver's currently
> > > > private
> > > > dma_fence_work avoids all these problems, but we're attempting
> > > > to
> > > > address issues in common code rather than re-inventing stuff
> > > > internally).
> > > 
> > > I don't think that a dma_fence_array or dma_fence_chain is the
> > > right 
> > > thing to begin with in those use cases.
> > > 
> > > When you want to coalesce the dependencies for a job you could
> > > either 
> > > use an xarray like Daniel did for the scheduler or some hashtable
> > > like we use in amdgpu. But I don't see the need for exposing the 
> > > dma_fence interface for those.
> > 
> > This is because the interface to our migration code takes just a 
> > single dma-fence as dependency. Now this is of course something we 
> > need to look at to mitigate this, but see below.
> 
> Yeah, that's actually fine.
> 
> > > 
> > > And why do you use dma_fence_chain to generate a timeline for
> > > TTM? 
> > > That should come naturally because all the moves must be ordered.
> > 
> > Oh, in this case because we're looking at adding stuff at the end
> > of 
> > migration (like coalescing object shared fences and / or async
> > unbind 
> > fences), which may not complete in order.
> 
> Well that's ok as well. My question is why does this single dma_fence
> then shows up in the dma_fence_chain representing the whole
> migration?

What we'd like to happen during eviction is that we

1) await any exclusive- or moving fences, then schedule the migration
blit. The blit manages its own GPU ptes. Results in a single fence.
2) Schedule unbind of any gpu vmas, resulting possibly in multiple
fences.
3) Most but not all of the remaining resv shared fences will have been
finished in 2) We can't easily tell which so we have a couple of shared
fences left.
4) Add all fences resulting from 1) 2) and 3) into the per-memory-type
dma-fence-chain. 
5) hand the resulting dma-fence-chain representing the end of migration
over to ttm's resource manager. 

Now this means we have a dma-fence-chain disguised as a dma-fence out
in the wild, and it could in theory reappear as a 3) fence for another
migration unless a very careful audit is done, or as an input to the
dma-fence-array used for that single dependency.

> 
> That somehow doesn't seem to make sense because each individual step
> of 
> the migration needs to wait for those dependencies as well even when
> it 
> runs in parallel.
> 
> > But that's not really the point, the point was that an (at least to
> > me) seemingly harmless usage pattern, be it real or fictitious, ends
> > up 
> > giving you severe internal- or cross-driver headaches.
> 
> Yeah, we probably should document that better. But in general I don't
> see much reason to allow mixing containers. The dma_fence_array and 
> dma_fence_chain objects have some distinct use cases, and using
> them 
> to build up larger dependency structures sounds really questionable.

Yes, I tend to agree to some extent here. Perhaps add warnings when
adding a chain or array as an input to an array and when accidentally
joining chains, and provide helpers for flattening if needed.

/Thomas


> 
> Christian.
> 
> > 
> > /Thomas
> > 
> > 
> > > 
> > > Regards,
> > > Christian.
> > > 
> > > 
> 



^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-03 14:18                                         ` [Intel-gfx] " Thomas Hellström
@ 2021-12-03 14:26                                           ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-12-03 14:26 UTC (permalink / raw)
  To: Thomas Hellström, Thomas Hellström (Intel),
	Maarten Lankhorst, intel-gfx, dri-devel, Daniel Vetter
  Cc: linaro-mm-sig, matthew.auld

[Adding Daniel here as well]

Am 03.12.21 um 15:18 schrieb Thomas Hellström:
> [SNIP]
>> Well that's ok as well. My question is why does this single dma_fence
>> then shows up in the dma_fence_chain representing the whole
>> migration?
> What we'd like to happen during eviction is that we
>
> 1) await any exclusive- or moving fences, then schedule the migration
> blit. The blit manages its own GPU ptes. Results in a single fence.
> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
> fences.
> 3) Most but not all of the remaining resv shared fences will have been
> finished in 2) We can't easily tell which so we have a couple of shared
> fences left.

Stop, wait a second here. We are going a bit in circles.

Before you migrate a buffer, you *MUST* wait for all shared fences to 
complete. This is documented mandatory DMA-buf behavior.

Daniel and I have discussed that quite extensively in the last few months.

So how is it that you do the blit before all shared fences are 
completed?

> 4) Add all fences resulting from 1) 2) and 3) into the per-memory-type
> dma-fence-chain.
> 5) hand the resulting dma-fence-chain representing the end of migration
> over to ttm's resource manager.
>
> Now this means we have a dma-fence-chain disguised as a dma-fence out
> in the wild, and it could in theory reappear as a 3) fence for another
> migration unless a very careful audit is done, or as an input to the
> dma-fence-array used for that single dependency.
>
>> That somehow doesn't seem to make sense because each individual step
>> of
>> the migration needs to wait for those dependencies as well even when
>> it
>> runs in parallel.
>>
>>> But that's not really the point, the point was that an (at least to
>>> me) seemingly harmless usage pattern, be it real or fictitious, ends
>>> up
>>> giving you severe internal- or cross-driver headaches.
>> Yeah, we probably should document that better. But in general I don't
>> see much reason to allow mixing containers. The dma_fence_array and
>> dma_fence_chain objects have some distinct use cases, and using
>> them
>> to build up larger dependency structures sounds really questionable.
> Yes, I tend to agree to some extent here. Perhaps add warnings when
> adding a chain or array as an input to an array and when accidentally
> joining chains, and provide helpers for flattening if needed.

Yeah, that's probably a really good idea. Going to put it on my todo list.

Thanks,
Christian.

>
> /Thomas
>
>
>> Christian.
>>
>>> /Thomas
>>>
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>
>


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-03 14:26                                           ` [Intel-gfx] " Christian König
@ 2021-12-03 14:50                                             ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-12-03 14:50 UTC (permalink / raw)
  To: Christian König, Thomas Hellström (Intel),
	Maarten Lankhorst, intel-gfx, dri-devel, Daniel Vetter
  Cc: linaro-mm-sig, matthew.auld


On 12/3/21 15:26, Christian König wrote:
> [Adding Daniel here as well]
>
> Am 03.12.21 um 15:18 schrieb Thomas Hellström:
>> [SNIP]
>>> Well that's ok as well. My question is why does this single dma_fence
>>> then shows up in the dma_fence_chain representing the whole
>>> migration?
>> What we'd like to happen during eviction is that we
>>
>> 1) await any exclusive- or moving fences, then schedule the migration
>> blit. The blit manages its own GPU ptes. Results in a single fence.
>> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
>> fences.
>> 3) Most but not all of the remaining resv shared fences will have been
>> finished in 2) We can't easily tell which so we have a couple of shared
>> fences left.
>
> Stop, wait a second here. We are going a bit in circles.
>
> Before you migrate a buffer, you *MUST* wait for all shared fences to 
> complete. This is documented mandatory DMA-buf behavior.
>
Daniel and I have discussed that quite extensively in the last few months.
>
> So how is it that you do the blit before all shared fences are 
> completed?

Well we don't currently but wanted to... (I haven't consulted Daniel in 
the matter, tbh).

I was under the impression that all writes would add an exclusive fence 
to the dma_resv. If that's not the case or this is otherwise against the 
mandatory DMA-buf behavior, we can certainly keep that part as is and 
that would eliminate 3).

/Thomas


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-03 14:50                                             ` [Intel-gfx] " Thomas Hellström
@ 2021-12-03 15:00                                               ` Christian König
  -1 siblings, 0 replies; 65+ messages in thread
From: Christian König @ 2021-12-03 15:00 UTC (permalink / raw)
  To: Thomas Hellström, Thomas Hellström (Intel),
	Maarten Lankhorst, intel-gfx, dri-devel, Daniel Vetter
  Cc: linaro-mm-sig, matthew.auld

Am 03.12.21 um 15:50 schrieb Thomas Hellström:
>
> On 12/3/21 15:26, Christian König wrote:
>> [Adding Daniel here as well]
>>
>> Am 03.12.21 um 15:18 schrieb Thomas Hellström:
>>> [SNIP]
>>>> Well that's ok as well. My question is why does this single dma_fence
>>>> then shows up in the dma_fence_chain representing the whole
>>>> migration?
>>> What we'd like to happen during eviction is that we
>>>
>>> 1) await any exclusive- or moving fences, then schedule the migration
>>> blit. The blit manages its own GPU ptes. Results in a single fence.
>>> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
>>> fences.
>>> 3) Most but not all of the remaining resv shared fences will have been
>>> finished in 2) We can't easily tell which so we have a couple of shared
>>> fences left.
>>
>> Stop, wait a second here. We are going a bit in circles.
>>
>> Before you migrate a buffer, you *MUST* wait for all shared fences to 
>> complete. This is documented mandatory DMA-buf behavior.
>>
>> Daniel and I have discussed that quite extensively in the last few 
>> months.
>>
>> So how is it that you do the blit before all shared fences are 
>> completed?
>
> Well we don't currently but wanted to... (I haven't consulted Daniel 
> in the matter, tbh).
>
> I was under the impression that all writes would add an exclusive 
> fence to the dma_resv.

Yes that's correct. I'm working on having more than one write fence, 
but that is currently under review.

> If that's not the case or this is otherwise against the mandatory 
> DMA-buf behavior, we can certainly keep that part as is and that 
> would eliminate 3).

Ah, now that somewhat starts to make sense.

So your blit only waits for the writes to finish before starting the 
blit. Yes that's legal as long as you don't change the original content 
with the blit.

But don't you then need to wait for both reads and writes before you 
unmap the VMAs?

Anyway the good news is your problem totally goes away with the DMA-resv 
rework I've already sent out. Basically it is now possible to have more 
than one fence in the DMA-resv object for migrations and all existing 
fences are kept around until they are finished.

Regards,
Christian.

>
> /Thomas
>


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-03 15:00                                               ` [Intel-gfx] " Christian König
@ 2021-12-03 15:13                                                 ` Thomas Hellström (Intel)
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström (Intel) @ 2021-12-03 15:13 UTC (permalink / raw)
  To: Christian König, Thomas Hellström, Maarten Lankhorst,
	intel-gfx, dri-devel, Daniel Vetter
  Cc: linaro-mm-sig, matthew.auld


On 12/3/21 16:00, Christian König wrote:
> Am 03.12.21 um 15:50 schrieb Thomas Hellström:
>>
>> On 12/3/21 15:26, Christian König wrote:
>>> [Adding Daniel here as well]
>>>
>>> Am 03.12.21 um 15:18 schrieb Thomas Hellström:
>>>> [SNIP]
>>>>> Well that's ok as well. My question is why does this single dma_fence
>>>>> then shows up in the dma_fence_chain representing the whole
>>>>> migration?
>>>> What we'd like to happen during eviction is that we
>>>>
>>>> 1) await any exclusive- or moving fences, then schedule the migration
>>>> blit. The blit manages its own GPU ptes. Results in a single fence.
>>>> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
>>>> fences.
>>>> 3) Most but not all of the remaining resv shared fences will have been
>>>> finished in 2) We can't easily tell which so we have a couple of 
>>>> shared
>>>> fences left.
>>>
>>> Stop, wait a second here. We are going a bit in circles.
>>>
>>> Before you migrate a buffer, you *MUST* wait for all shared fences 
>>> to complete. This is documented mandatory DMA-buf behavior.
>>>
>>> Daniel and I have discussed that quite extensively in the last few 
>>> months.
>>>
>>> So how is it that you do the blit before all shared fences 
>>> are completed?
>>
>> Well we don't currently but wanted to... (I haven't consulted Daniel 
>> in the matter, tbh).
>>
>> I was under the impression that all writes would add an exclusive 
>> fence to the dma_resv.
>
> Yes that's correct. I'm working on having more than one write fence, 
> but that is currently under review.
>
>> If that's not the case or this is otherwise against the mandatory 
>> DMA-buf behavior, we can certainly keep that part as is and that 
>> would eliminate 3).
>
> Ah, now that somewhat starts to make sense.
>
> So your blit only waits for the writes to finish before starting the 
> blit. Yes that's legal as long as you don't change the original 
> content with the blit.
>
> But don't you then need to wait for both reads and writes before you 
> unmap the VMAs?

Yes, but that's planned to be done all async, and those unbind jobs are 
scheduled simultaneously with the blit, and the blit itself manages its 
own page-table-entries, so no need to unbind any blit vmas.

>
> Anyway the good news is your problem totally goes away with the 
> DMA-resv rework I've already sent out. Basically it is now possible to 
> have more than one fence in the DMA-resv object for migrations and all 
> existing fences are kept around until they are finished.

Sounds good.

Thanks,

Thomas


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Intel-gfx] [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-03 14:18                                         ` [Intel-gfx] " Thomas Hellström
@ 2021-12-07 18:08                                           ` Daniel Vetter
  -1 siblings, 0 replies; 65+ messages in thread
From: Daniel Vetter @ 2021-12-07 18:08 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: Thomas Hellström (Intel),
	dri-devel, linaro-mm-sig, matthew.auld, intel-gfx,
	Christian König

Once more an entire week behind on mails, but this looked interesting
enough.

On Fri, Dec 03, 2021 at 03:18:01PM +0100, Thomas Hellström wrote:
> On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
> > Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
> > > 
> > > On 12/1/21 12:25, Christian König wrote:
> > > > And why do you use dma_fence_chain to generate a timeline for
> > > > TTM? 
> > > > That should come naturally because all the moves must be ordered.
> > > 
> > > Oh, in this case because we're looking at adding stuff at the end
> > > of 
> > > migration (like coalescing object shared fences and / or async
> > > unbind 
> > > fences), which may not complete in order.
> > 
> > Well that's ok as well. My question is why does this single dma_fence
> > then shows up in the dma_fence_chain representing the whole
> > migration?
> 
> What we'd like to happen during eviction is that we
> 
> 1) await any exclusive- or moving fences, then schedule the migration
> blit. The blit manages its own GPU ptes. Results in a single fence.
> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
> fences.

This sounds like over-optimizing for nothing. We only really care about
pipelining moves on dgpu, and on dgpu we only care about modern userspace
(because even gl moves in that direction).

And modern means that usually even write access is only setting a read
fence, because in vk/compute we only set write fences for objects which
need implicit sync, and _only_ when actually needed.

So ignoring read fences for moves "because it's only reads" is actually
busted.

I think for buffer moves we should document and enforce (in review) the
rule that you have to wait for all fences, otherwise boom. Same really
like before freeing backing storage. Otherwise there's just too many gaps
and surprises.

And yes with Christian's rework of dma_resv this will change, and we'll
allow multiple write fences (because that's what amdgpu encoded into their
uapi). Still means that you cannot move a buffer without waiting for read
fences (or kernel fences or anything really).

The other thing is this entire spinlock recursion topic for dma_fence, and
I'm deeply unhappy about the truckload of tricks i915 plays and hence in
favour of avoiding recursion in this area as much as possible.

If we really can't avoid it then irq_work to get a new clean context gets
the job done. Making this messy and more work is imo a feature; lock nesting of
same level locks is just not a good&robust engineering idea.

/me back to being completely buried

I do hope I can find some more time to review a few more of Christian's
patches this week though :-/

Cheers, Daniel

> 3) Most but not all of the remaining resv shared fences will have been
> finished in 2) We can't easily tell which so we have a couple of shared
> fences left.
> 4) Add all fences resulting from 1) 2) and 3) into the per-memory-type
> dma-fence-chain. 
> 5) hand the resulting dma-fence-chain representing the end of migration
> over to ttm's resource manager. 
> 
> Now this means we have a dma-fence-chain disguised as a dma-fence out
> in the wild, and it could in theory reappear as a 3) fence for another
> migration unless a very careful audit is done, or as an input to the
> dma-fence-array used for that single dependency.
> 
> > 
> > That somehow doesn't seem to make sense because each individual step
> > of 
> > the migration needs to wait for those dependencies as well even when
> > it 
> > runs in parallel.
> > 
> > > But that's not really the point, the point was that an (at least to
> > > me) seemingly harmless usage pattern, be it real or fictitious, ends
> > > up 
> > > giving you severe internal- or cross-driver headaches.
> > 
> > Yeah, we probably should document that better. But in general I don't
> > see much reason to allow mixing containers. The dma_fence_array and 
> > dma_fence_chain objects have some distinct use cases, and using
> > them 
> > to build up larger dependency structures sounds really questionable.
> 
> Yes, I tend to agree to some extent here. Perhaps add warnings when
> adding a chain or array as an input to an array and when accidentally
> joining chains, and provide helpers for flattening if needed.
> 
> /Thomas
> 
> 
> > 
> > Christian.
> > 
> > > 
> > > /Thomas
> > > 
> > > 
> > > > 
> > > > Regards,
> > > > Christian.
> > > > 
> > > > 
> > 
> 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Intel-gfx] [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
@ 2021-12-07 18:08                                           ` Daniel Vetter
  0 siblings, 0 replies; 65+ messages in thread
From: Daniel Vetter @ 2021-12-07 18:08 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: dri-devel, linaro-mm-sig, matthew.auld, intel-gfx, Christian König

Once more an entire week behind on mails, but this looked interesting
enough.

On Fri, Dec 03, 2021 at 03:18:01PM +0100, Thomas Hellström wrote:
> On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
> > Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
> > > 
> > > On 12/1/21 12:25, Christian König wrote:
> > > > And why do you use dma_fence_chain to generate a timeline for
> > > > TTM? 
> > > > That should come naturally because all the moves must be ordered.
> > > 
> > > Oh, in this case because we're looking at adding stuff at the end
> > > of 
> > > migration (like coalescing object shared fences and / or async
> > > unbind 
> > > fences), which may not complete in order.
> > 
> > Well that's ok as well. My question is why does this single dma_fence
> > then shows up in the dma_fence_chain representing the whole
> > migration?
> 
> What we'd like to happen during eviction is that we
> 
> 1) await any exclusive- or moving fences, then schedule the migration
> blit. The blit manages its own GPU ptes. Results in a single fence.
> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
> fences.

This sounds like over-optimizing for nothing. We only really care about
pipeling moves on dgpu, and on dgpu we only care about modern userspace
(because even gl moves in that direction).

And modern means that usually even write access is only setting a read
fence, because in vk/compute we only set write fences for objects which
need implicit sync, and _only_ when actually needed.

So ignoring read fences for moves "because it's only reads" is actually
busted.

I think for buffer moves we should document and enforce (in review) the
rule that you have to wait for all fences, otherwise boom. Same really
like before freeing backing storage. Otherwise there's just too many gaps
and surprises.

And yes with Christian's rework of dma_resv this will change, and we'll
allow multiple write fences (because that's what amdgpu encoded into their
uapi). Still means that you cannot move a buffer without waiting for read
fences (or kernel fences or anything really).

The other thing is this entire spinlock recursion topic for dma_fence, and
I'm deeply unhappy about the truckload of tricks i915 plays and hence in
favour of avoiding recursion in this area as much as possible.

If we really can't avoid it then irq_work to get a new clean context gets
the job done. Making this messy and more work is imo a feature: lock
nesting of same-level locks is just not a good & robust engineering idea.

/me back to being completely buried

I do hope I can find some more time to review a few more of Christian's
patches this week though :-/

Cheers, Daniel

> 3) Most but not all of the remaining resv shared fences will have been
> finished in 2) We can't easily tell which so we have a couple of shared
> fences left.
> 4) Add all fences resulting from 1) 2) and 3) into the per-memory-type
> dma-fence-chain. 
> 5) hand the resulting dma-fence-chain representing the end of migration
> over to ttm's resource manager. 
> 
> Now this means we have a dma-fence-chain disguised as a dma-fence out
> in the wild, and it could in theory reappear as a 3) fence for another
> migration unless a very careful audit is done, or as an input to the
> dma-fence-array used for that single dependency.
> 
> > 
> > That somehow doesn't seem to make sense because each individual step
> > of 
> > the migration needs to wait for those dependencies as well even when
> > it 
> > runs in parallel.
> > 
> > > But that's not really the point, the point was that an (at least to
> > > me) seemingly harmless usage pattern, be it real or fictitious, ends
> > > up 
> > > giving you severe internal- or cross-driver headaches.
> > 
> > Yeah, we probably should document that better. But in general I don't
> > see much reason to allow mixing containers. The dma_fence_array and 
> > dma_fence_chain objects have some distinct use cases, and using
> > them 
> > to build up larger dependency structures sounds really questionable.
> 
> Yes, I tend to agree to some extent here. Perhaps add warnings when
> adding a chain or array as an input to an array and when accidentally
> joining chains, and provide helpers for flattening if needed.
> 
> /Thomas
> 
> 
> > 
> > Christian.
> > 
> > > 
> > > /Thomas
> > > 
> > > 
> > > > 
> > > > Regards,
> > > > Christian.
> > > > 
> > > > 
> > 
> 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Intel-gfx] [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-07 18:08                                           ` Daniel Vetter
@ 2021-12-07 20:46                                             ` Thomas Hellström
  -1 siblings, 0 replies; 65+ messages in thread
From: Thomas Hellström @ 2021-12-07 20:46 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Thomas Hellström (Intel),
	dri-devel, linaro-mm-sig, matthew.auld, intel-gfx,
	Christian König


On 12/7/21 19:08, Daniel Vetter wrote:
> Once more an entire week behind on mails, but this looked interesting
> enough.
>
> On Fri, Dec 03, 2021 at 03:18:01PM +0100, Thomas Hellström wrote:
>> On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
>>> Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
>>>> On 12/1/21 12:25, Christian König wrote:
>>>>> And why do you use dma_fence_chain to generate a timeline for
>>>>> TTM?
>>>>> That should come naturally because all the moves must be ordered.
>>>> Oh, in this case because we're looking at adding stuff at the end
>>>> of
>>>> migration (like coalescing object shared fences and / or async
>>>> unbind
>>>> fences), which may not complete in order.
>>> Well that's ok as well. My question is why does this single dma_fence
>>> then shows up in the dma_fence_chain representing the whole
>>> migration?
>> What we'd like to happen during eviction is that we
>>
>> 1) await any exclusive- or moving fences, then schedule the migration
>> blit. The blit manages its own GPU ptes. Results in a single fence.
>> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
>> fences.
> This sounds like over-optimizing for nothing. We only really care about
> pipelining moves on dgpu, and on dgpu we only care about modern userspace
> (because even gl moves in that direction).
Hmm. It's not totally clear what you mean by over-optimizing for 
nothing: is it the fact that we want to start the blit before all shared 
fences have signaled, or the fact that we're doing async unbinding to 
avoid a synchronization point that stops us from fully pipelining evictions?
> And modern means that usually even write access is only setting a read
> fence, because in vk/compute we only set write fences for object which
> need implicit sync, and _only_ when actually needed.
>
> So ignoring read fences for movings "because it's only reads" is actually
> busted.

I'm fine with also awaiting shared fences before we start the blit, as 
mentioned later in the thread, but that is just a matter of when we 
coalesce the shared fences. So since the difference in complexity is 
minimal, what's viewed as optimizing for nothing can conversely be 
viewed as unnecessarily waiting for nothing, blocking the migration 
context timeline from progressing with unrelated blits. (Unless there 
are correctness issues of course, see below).

But not setting a write fence after a write seems to conflict with dma-buf 
rules as also discussed later in the thread. Perhaps some clarity is 
needed here. How would a writer or reader that implicitly *wants* to 
wait for previous writers go about doing that?

Note that what we're doing is not "moving" in the sense that we're 
giving up or modifying the old storage, but rather starting a blit 
assuming that the contents of the old storage are stable, or the writer 
doesn't care.

>
> I think for buffer moves we should document and enforce (in review) the
> rule that you have to wait for all fences, otherwise boom. Same really
> like before freeing backing storage. Otherwise there's just too many gaps
> and surprises.
>
> And yes with Christian's rework of dma_resv this will change, and we'll
> allow multiple write fences (because that's what amdgpu encoded into their
> uapi). Still means that you cannot move a buffer without waiting for read
> fences (or kernel fences or anything really).

Sounds like some agreement is needed here what rules we actually should 
obey. As mentioned above I'm fine with either.

>
> The other thing is this entire spinlock recursion topic for dma_fence, and
> I'm deeply unhappy about the truckload of tricks i915 plays and hence in
> favour of avoiding recursion in this area as much as possible.

TBH I think the corresponding i915 container manages to avoid both the 
deep recursive calls and lock nesting simply by early enable_signaling() 
and by not storing the fence pointers of the array fences, which to me 
appears to be a simple and clean approach. No tricks there.

>
> If we really can't avoid it then irq_work to get a new clean context gets
> the job done. Making this messy and work is imo a feature, lock nesting of
> same level locks is just not a good&robust engineering idea.

For the dma-fence-chain and dma-fence-array there are four possibilities 
moving forward:

1) Keeping the current same-level locking nesting order of 
container-first containee later. This is fully annotated, but fragile 
and blows up if users attempt to nest containers in different orders.

2) Establishing a reverse-signaling locking order. Not annotatable. 
Blows up on signal-on-any.

3) Early enable-signaling, no lock nesting, low latency but possibly 
unnecessary enable_signaling calls.

4) irq_work in enable_signaling(). High latency.

The thread finally agreed the solution would be to keep 1), add early 
warnings for the pitfalls and, if possible, provide helpers to flatten to 
avoid container recursion.

/Thomas


>
> /me back to being completely buried
>
> I do hope I can find some more time to review a few more of Christian's
> patches this week though :-/
>
> Cheers, Daniel
>
>> 3) Most but not all of the remaining resv shared fences will have been
>> finished in 2) We can't easily tell which so we have a couple of shared
>> fences left.
>> 4) Add all fences resulting from 1) 2) and 3) into the per-memory-type
>> dma-fence-chain.
>> 5) hand the resulting dma-fence-chain representing the end of migration
>> over to ttm's resource manager.
>>
>> Now this means we have a dma-fence-chain disguised as a dma-fence out
>> in the wild, and it could in theory reappear as a 3) fence for another
>> migration unless a very careful audit is done, or as an input to the
>> dma-fence-array used for that single dependency.
>>
>>> That somehow doesn't seem to make sense because each individual step
>>> of
>>> the migration needs to wait for those dependencies as well even when
>>> it
>>> runs in parallel.
>>>
>>>> But that's not really the point, the point was that an (at least to
>>>> me) seemingly harmless usage pattern, be it real or fictitious, ends
>>>> up
>>>> giving you severe internal- or cross-driver headaches.
>>> Yeah, we probably should document that better. But in general I don't
>>> see much reason to allow mixing containers. The dma_fence_array and
>>> dma_fence_chain objects have some distinct use cases, and using
>>> them
>>> to build up larger dependency structures sounds really questionable.
>> Yes, I tend to agree to some extent here. Perhaps add warnings when
>> adding a chain or array as an input to an array and when accidentally
>> joining chains, and provide helpers for flattening if needed.
>>
>> /Thomas
>>
>>
>>> Christian.
>>>
>>>> /Thomas
>>>>
>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>>>
>>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Intel-gfx] [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
  2021-12-07 20:46                                             ` Thomas Hellström
@ 2021-12-20  9:37                                               ` Daniel Vetter
  -1 siblings, 0 replies; 65+ messages in thread
From: Daniel Vetter @ 2021-12-20  9:37 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: Thomas Hellström (Intel),
	dri-devel, linaro-mm-sig, matthew.auld, intel-gfx,
	Christian König

On Tue, Dec 07, 2021 at 09:46:47PM +0100, Thomas Hellström wrote:
> 
> On 12/7/21 19:08, Daniel Vetter wrote:
> > Once more an entire week behind on mails, but this looked interesting
> > enough.
> > 
> > On Fri, Dec 03, 2021 at 03:18:01PM +0100, Thomas Hellström wrote:
> > > On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
> > > > Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
> > > > > On 12/1/21 12:25, Christian König wrote:
> > > > > > And why do you use dma_fence_chain to generate a timeline for
> > > > > > TTM?
> > > > > > That should come naturally because all the moves must be ordered.
> > > > > Oh, in this case because we're looking at adding stuff at the end
> > > > > of
> > > > > migration (like coalescing object shared fences and / or async
> > > > > unbind
> > > > > fences), which may not complete in order.
> > > > Well that's ok as well. My question is why does this single dma_fence
> > > > then shows up in the dma_fence_chain representing the whole
> > > > migration?
> > > What we'd like to happen during eviction is that we
> > > 
> > > 1) await any exclusive- or moving fences, then schedule the migration
> > > blit. The blit manages its own GPU ptes. Results in a single fence.
> > > 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
> > > fences.
> > This sounds like over-optimizing for nothing. We only really care about
> > pipelining moves on dgpu, and on dgpu we only care about modern userspace
> > (because even gl moves in that direction).
> Hmm. It's not totally clear what you mean with over-optimizing for nothing,
> is it the fact that we want to start the blit before all shared fences have
> signaled or the fact that we're doing async unbinding to avoid a
> synchronization point that stops us from fully pipelining evictions?

Yup. Not least because that breaks Vulkan, so you really can't do these
optimizations :-)

In general I meant that unless you really, really understand everything
all the time (which frankly no one does), then trying to be clever just
isn't worth it. We have access pending in the dma_resv, so we wait for
it: dumb, simple, no surprises.

> > And modern means that usually even write access is only setting a read
> > fence, because in vk/compute we only set write fences for object which
> > need implicit sync, and _only_ when actually needed.
> > 
> > So ignoring read fences for movings "because it's only reads" is actually
> > busted.
> 
> I'm fine with awaiting also shared fences before we start the blit, as
> mentioned also later in the thread, but that is just a matter of when we
> coalesce the shared fences. So since difference in complexity is minimal,
> what's viewed as optimizing for nothing can also be conversely be viewed as
> unnecessarily waiting for nothing, blocking the migration context timeline
> from progressing with unrelated blits. (Unless there are correctness issues
> of course, see below).
> 
> But not setting a write fence after write seems to conflict with dma-buf
> rules as also discussed later in the thread. Perhaps some clarity is needed
> here. How would a writer or reader that implicitly *wants* to wait for
> previous writers go about doing that?
> 
> Note that what we're doing is not "moving" in the sense that we're giving up
> or modifying the old storage but rather start a blit assuming that the
> contents of the old storage is stable, or the writer doesn't care.

Yeah, that's not how dma-buf works, which is what Christian is trying
to rectify with his huge refactoring/doc series to give a bit clearer
meaning to what a fence in a dma_resv means.

> > I think for buffer moves we should document and enforce (in review) the
> > rule that you have to wait for all fences, otherwise boom. Same really
> > like before freeing backing storage. Otherwise there's just too many gaps
> > and surprises.
> > 
> > And yes with Christian's rework of dma_resv this will change, and we'll
> > allow multiple write fences (because that's what amdgpu encoded into their
> > uapi). Still means that you cannot move a buffer without waiting for read
> > fences (or kernel fences or anything really).
> 
> Sounds like some agreement is needed here what rules we actually should
> obey. As mentioned above I'm fine with either.

I think it would be good to comment on the doc patch in Christian's series
for that. But essentially read/write don't mean actual read/write to
memory, but only read/write access in terms of implicit sync. Buffers
which do not partake in implicit sync (driver internal stuff) or access
which is not implicitly synced (anything vk does) do _not_ need to set a
write fence. They will (except amdgpu, until they fix their CS uapi)
_only_ set a read fence.

Christian and I had a multi-month discussion on this, so it's a bit
tricky.

> > The other thing is this entire spinlock recursion topic for dma_fence, and
> > I'm deeply unhappy about the truckload of tricks i915 plays and hence in
> > favour of avoiding recursion in this area as much as possible.
> 
> TBH I think the i915 corresponding container manages to avoid both the deep
> recursive calls and lock nesting simply by early enable_signaling() and not
> storing the fence pointers of the array fences, which to me appears to be a
> simple and clean approach. No tricks there.
> 
> > 
> > If we really can't avoid it then irq_work to get a new clean context gets
> > the job done. Making this messy and work is imo a feature, lock nesting of
> > same level locks is just not a good&robust engineering idea.
> 
> For the dma-fence-chain and dma-fence-array there are four possibilities
> moving forward:
> 
> 1) Keeping the current same-level locking nesting order of container-first
> containee later. This is fully annotated, but fragile and blows up if users
> attempt to nest containers in different orders.
> 
> 2) Establishing a reverse-signaling locking order. Not annotatable. blows up
> on signal-on-any.
> 
> 3) Early enable-signaling, no lock nesting, low latency but possibly
> unnecessary enable_signaling calls.
> 
> 4) irq_work in enable_signaling(). High latency.
> 
> The thread finally agreed the solution would be to keep 1), add early
> warnings for the pitfalls and if possible provide helpers to flatten to
> avoid container recursion.

Hm ok seems ok. It's definitely an area where we don't have great
solutions :-/
-Daniel

> 
> /Thomas
> 
> 
> > 
> > /me back to being completely buried
> > 
> > I do hope I can find some more time to review a few more of Christian's
> > patches this week though :-/
> > 
> > Cheers, Daniel
> > 
> > > 3) Most but not all of the remaining resv shared fences will have been
> > > finished in 2) We can't easily tell which so we have a couple of shared
> > > fences left.
> > > 4) Add all fences resulting from 1) 2) and 3) into the per-memory-type
> > > dma-fence-chain.
> > > 5) hand the resulting dma-fence-chain representing the end of migration
> > > over to ttm's resource manager.
> > > 
> > > Now this means we have a dma-fence-chain disguised as a dma-fence out
> > > in the wild, and it could in theory reappear as a 3) fence for another
> > > migration unless a very careful audit is done, or as an input to the
> > > dma-fence-array used for that single dependency.
> > > 
> > > > That somehow doesn't seem to make sense because each individual step
> > > > of
> > > > the migration needs to wait for those dependencies as well even when
> > > > it
> > > > runs in parallel.
> > > > 
> > > > > But that's not really the point, the point was that an (at least to
> > > > > me) seemingly harmless usage pattern, be it real or fictitious, ends
> > > > > up
> > > > > giving you severe internal- or cross-driver headaches.
> > > > Yeah, we probably should document that better. But in general I don't
> > > > see much reason to allow mixing containers. The dma_fence_array and
> > > > dma_fence_chain objects have some distinct use cases, and using
> > > > them
> > > > to build up larger dependency structures sounds really questionable.
> > > Yes, I tend to agree to some extent here. Perhaps add warnings when
> > > adding a chain or array as an input to an array and when accidentally
> > > joining chains, and provide helpers for flattening if needed.
> > > 
> > > /Thomas
> > > 
> > > 
> > > > Christian.
> > > > 
> > > > > /Thomas
> > > > > 
> > > > > 
> > > > > > Regards,
> > > > > > Christian.
> > > > > > 
> > > > > > 
> > > 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [Intel-gfx] [Linaro-mm-sig] [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes
@ 2021-12-20  9:37                                               ` Daniel Vetter
  0 siblings, 0 replies; 65+ messages in thread
From: Daniel Vetter @ 2021-12-20  9:37 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: dri-devel, linaro-mm-sig, matthew.auld, intel-gfx, Christian König

On Tue, Dec 07, 2021 at 09:46:47PM +0100, Thomas Hellström wrote:
> 
> On 12/7/21 19:08, Daniel Vetter wrote:
> > Once more an entire week behind on mails, but this looked interesting
> > enough.
> > 
> > On Fri, Dec 03, 2021 at 03:18:01PM +0100, Thomas Hellström wrote:
> > > On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
> > > > Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
> > > > > On 12/1/21 12:25, Christian König wrote:
> > > > > > And why do you use dma_fence_chain to generate a timeline for
> > > > > > TTM?
> > > > > > That should come naturally because all the moves must be ordered.
> > > > > Oh, in this case because we're looking at adding stuff at the end
> > > > > of
> > > > > migration (like coalescing object shared fences and / or async
> > > > > unbind
> > > > > fences), which may not complete in order.
> > > > Well that's ok as well. My question is why does this single dma_fence
> > > > then shows up in the dma_fence_chain representing the whole
> > > > migration?
> > > What we'd like to happen during eviction is that we
> > > 
> > > 1) await any exclusive- or moving fences, then schedule the migration
> > > blit. The blit manages its own GPU ptes. Results in a single fence.
> > > 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
> > > fences.
> > This sounds like over-optimizing for nothing. We only really care about
> > pipeling moves on dgpu, and on dgpu we only care about modern userspace
> > (because even gl moves in that direction).
> Hmm. It's not totally clear what you mean with over-optimizing for nothing,
> is it the fact that we want to start the blit before all shared fences have
> signaled or the fact that we're doing async unbinding to avoid a
> synchronization point that stops us from fully pipelining evictions?

Yup. Least because that breaks vulkan, so you really can't do this
optimizations :-)

In general I meant that unless you really, really understand everything
all the time (which frankly no one does), then trying to be clever just
isn't worth it. We have access pending in the dma_resv, we wait for it is
dumb, simple, no surprises.

> > And modern means that usually even write access is only setting a read
> > fence, because in vk/compute we only set write fences for object which
> > need implicit sync, and _only_ when actually needed.
> > 
> > So ignoring read fences for movings "because it's only reads" is actually
> > busted.
> 
> I'm fine with awaiting also shared fences before we start the blit, as
> mentioned also later in the thread, but that is just a matter of when we
> coalesce the shared fences. So since difference in complexity is minimal,
> what's viewed as optimizing for nothing can also be conversely be viewed as
> unneccesarily waiting for nothing, blocking the migration context timeline
> from progressing with unrelated blits. (Unless there are correctness issues
> of course, see below).
> 
> But not setting a write fence after write seems to conflict with dma-buf
> rules as also discussed later in the thread. Perhaps some clarity is needed
> here. How would a writer or reader that implicitly *wants* to wait for
> previous writers go about doing that?
> 
> Note that what we're doing is not "moving" in the sense that we're giving up
> or modifying the old storage but rather start a blit assuming that the
> contents of the old storage is stable, or the writer doesn't care.

Yeah that's not how dma-buf works, and which is what Christian is trying
to rectify with his huge refactoring/doc series to give a bit clearer
meaning to what a fence in a dma_resv means.

> > I think for buffer moves we should document and enforce (in review) the
> > rule that you have to wait for all fences, otherwise boom. Same really
> > like before freeing backing storage. Otherwise there's just too many gaps
> > and surprises.
> > 
> > And yes with Christian's rework of dma_resv this will change, and we'll
> > allow multiple write fences (because that's what amdgpu encoded into their
> > uapi). Still means that you cannot move a buffer without waiting for read
> > fences (or kernel fences or anything really).
> 
> Sounds like some agreement is needed here what rules we actually should
> obey. As mentioned above I'm fine with either.

I think it would be good to comment on the doc patch in Christian's series
for that. But essentially read/write don't mean actual read/write to
memory, but only read/write access in terms of implicit sync. Buffers
which do not partake in implicit sync (driver internal stuff) or access
which is not implicitly synced (anything vk does) do _not_ need to set a
write fence. They will (except amdgpu, until they fix their CS uapi)
_only_ set a read fence.

Christian and I had a multi-month discussion on this, so it's a bit
tricky.
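
To make that distinction concrete, here is an illustrative kernel-style
sketch (not from the series; the helper name is hypothetical and error
handling is trimmed) of how the two implicit-sync access classes map onto
the pre-refactor dma_resv API:

/*
 * Illustrative sketch only: an implicitly synced write publishes the
 * exclusive fence; reads, and accesses that do not partake in implicit
 * sync at all, add at most a shared fence.  resv must be held.
 */
static int publish_implicit_sync_fence(struct dma_resv *resv,
				       struct dma_fence *fence,
				       bool implicit_write)
{
	int ret;

	dma_resv_assert_held(resv);

	if (implicit_write) {
		dma_resv_add_excl_fence(resv, fence);
		return 0;
	}

	ret = dma_resv_reserve_shared(resv, 1);
	if (ret)
		return ret;

	dma_resv_add_shared_fence(resv, fence);
	return 0;
}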

> > The other thing is this entire spinlock recursion topic for dma_fence, and
> > I'm deeply unhappy about the truckload of tricks i915 plays and hence in
> > favour of avoiding recursion in this area as much as possible.
> 
> TBH I think the corresponding i915 container manages to avoid both the deep
> recursive calls and lock nesting simply by early enable_signaling() and not
> storing the fence pointers of the array fences, which to me appears to be a
> simple and clean approach. No tricks there.
> 
> > 
> > If we really can't avoid it then irq_work to get a new clean context gets
> > the job done. Making this messy and more work is imo a feature; lock nesting
> > of same-level locks is just not a good & robust engineering idea.
> 
> For the dma-fence-chain and dma-fence-array there are four possibilities
> moving forward:
> 
> 1) Keeping the current same-level lock-nesting order of container first,
> containee later. This is fully annotated, but fragile and blows up if users
> attempt to nest containers in different orders.
> 
> 2) Establishing a reverse-signaling locking order. Not annotatable; blows up
> on signal-on-any.
> 
> 3) Early enable-signaling, no lock nesting, low latency but possibly
> unnecessary enable_signaling calls.
> 
> 4) irq_work in enable_signaling(). High latency.
> 
> The thread finally agreed the solution would be to keep 1), add early
> warnings for the pitfalls and, if possible, provide helpers to flatten fences
> to avoid container recursion.
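
As a sketch of what such a flattening helper could look like (illustrative
only, not part of the patches; the function name is hypothetical and
reference counting and error handling are simplified), one could walk the
chain and collect the contained fences into a single flat array:

/*
 * Illustrative sketch only: flatten a dma_fence_chain into one flat
 * dma_fence_array so that containers never contain other containers.
 */
static struct dma_fence *flatten_fence_chain(struct dma_fence *chain)
{
	struct dma_fence **fences, *iter;
	unsigned int i = 0, count = 0;

	dma_fence_chain_for_each(iter, chain)
		count++;

	fences = kmalloc_array(count, sizeof(*fences), GFP_KERNEL);
	if (!fences)
		return NULL;

	dma_fence_chain_for_each(iter, chain) {
		struct dma_fence_chain *c = to_dma_fence_chain(iter);

		fences[i++] = dma_fence_get(c ? c->fence : iter);
	}

	/* signal_on_any == false: the array signals once all fences have. */
	return &dma_fence_array_create(count, fences,
				       dma_fence_context_alloc(1), 1,
				       false)->base;
}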

Hm ok seems ok. It's definitely an area where we don't have great
solutions :-/
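
For completeness, option 4) above could be sketched roughly as follows
(illustrative only; the struct and helper names are hypothetical, and
allocation-failure handling is simplified). The idea is to bounce the
nested enable_signaling of a contained fence to irq_work so that no two
same-level fence locks are ever held at the same time:

/*
 * Illustrative sketch only: defer enabling signaling on a contained
 * fence to a clean irq_work context where no container lock is held.
 */
struct fence_signal_work {
	struct irq_work work;
	struct dma_fence *fence;
};

static void fence_signal_work_func(struct irq_work *w)
{
	struct fence_signal_work *fsw =
		container_of(w, struct fence_signal_work, work);

	/* Runs in a clean context: no container lock is held here. */
	dma_fence_enable_sw_signaling(fsw->fence);
	dma_fence_put(fsw->fence);
	kfree(fsw);
}

static void defer_enable_signaling(struct dma_fence *fence)
{
	struct fence_signal_work *fsw = kzalloc(sizeof(*fsw), GFP_ATOMIC);

	if (!fsw)
		return;

	fsw->fence = dma_fence_get(fence);
	init_irq_work(&fsw->work, fence_signal_work_func);
	irq_work_queue(&fsw->work);
}

The extra allocation and deferral are exactly the latency cost the thread
attributes to this option.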
-Daniel

> 
> /Thomas
> 
> 
> > 
> > /me back to being completely buried
> > 
> > I do hope I can find some more time to review a few more of Christian's
> > patches this week though :-/
> > 
> > Cheers, Daniel
> > 
> > > 3) Most but not all of the remaining resv shared fences will have been
> > > finished in 2). We can't easily tell which, so we have a couple of shared
> > > fences left.
> > > 4) Add all fences resulting from 1), 2) and 3) into the per-memory-type
> > > dma-fence-chain.
> > > 5) Hand the resulting dma-fence-chain representing the end of migration
> > > over to ttm's resource manager.
> > > 
> > > Now this means we have a dma-fence-chain disguised as a dma-fence out
> > > in the wild, and it could in theory reappear as a 3) fence for another
> > > migration unless a very careful audit is done, or as an input to the
> > > dma-fence-array used for that single dependency.
> > > 
> > > > That somehow doesn't seem to make sense because each individual step
> > > > of the migration needs to wait for those dependencies as well, even
> > > > when it runs in parallel.
> > > > 
> > > > > But that's not really the point, the point was that an (at least to
> > > > > me) seemingly harmless usage pattern, be it real or fictitious, ends
> > > > > up giving you severe internal- or cross-driver headaches.
> > > > Yeah, we probably should document that better. But in general I don't
> > > > see much reason to allow mixing containers. The dma_fence_array and
> > > > dma_fence_chain objects have some distinct use cases, and using them
> > > > to build up larger dependency structures sounds really questionable.
> > > Yes, I tend to agree to some extent here. Perhaps add warnings when
> > > adding a chain or array as an input to an array and when accidentally
> > > joining chains, and provide helpers for flattening if needed.
> > > 
> > > /Thomas
> > > 
> > > 
> > > > Christian.
> > > > 
> > > > > /Thomas
> > > > > 
> > > > > 
> > > > > > Regards,
> > > > > > Christian.
> > > > > > 
> > > > > > 
> > > 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Thread overview: 65+ messages
2021-11-30 12:19 [RFC PATCH 0/2] Attempt to avoid dma-fence-[chain|array] lockdep splats Thomas Hellström
2021-11-30 12:19 ` [RFC PATCH 1/2] dma-fence: Avoid establishing a locking order between fence classes Thomas Hellström
2021-11-30 12:25   ` Maarten Lankhorst
2021-11-30 12:31     ` Thomas Hellström
2021-11-30 12:42       ` Christian König
2021-11-30 12:56         ` Thomas Hellström
2021-11-30 13:26           ` Christian König
2021-11-30 14:35             ` Thomas Hellström
2021-11-30 15:02               ` Christian König
2021-11-30 18:12                 ` Thomas Hellström
2021-11-30 19:27                   ` Thomas Hellström
2021-12-01  7:05                     ` Christian König
2021-12-01  8:23                       ` [Linaro-mm-sig] " Thomas Hellström (Intel)
2021-12-01  8:36                         ` Christian König
2021-12-01 10:15                           ` Thomas Hellström (Intel)
2021-12-01 10:32                             ` Christian König
2021-12-01 11:04                               ` Thomas Hellström (Intel)
2021-12-01 11:25                                 ` Christian König
2021-12-01 12:16                                   ` Thomas Hellström (Intel)
2021-12-03 13:08                                     ` Christian König
2021-12-03 14:18                                       ` Thomas Hellström
2021-12-03 14:26                                         ` Christian König
2021-12-03 14:50                                           ` Thomas Hellström
2021-12-03 15:00                                             ` Christian König
2021-12-03 15:13                                               ` Thomas Hellström (Intel)
2021-12-07 18:08                                         ` Daniel Vetter
2021-12-07 20:46                                           ` Thomas Hellström
2021-12-20  9:37                                             ` Daniel Vetter
2021-11-30 12:32   ` Thomas Hellström
2021-11-30 12:19 ` [RFC PATCH 2/2] dma-fence: Avoid excessive recursive fence locking from enable_signaling() callbacks Thomas Hellström
2021-11-30 12:36 ` [RFC PATCH 0/2] Attempt to avoid dma-fence-[chain|array] lockdep splats Christian König
2021-11-30 13:05 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for " Patchwork
2021-11-30 13:48 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-11-30 17:47 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
