* [RFC v2 0/5] Waitboost drm syncobj waits
@ 2023-02-10 13:06 ` Tvrtko Ursulin
  0 siblings, 0 replies; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-10 13:06 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

In i915 we have this concept of "wait boosting" where we give a priority boost
for instance to fences which are actively waited upon from userspace. This has
its pros and cons and can certainly be discussed at length. However, the fact is
some workloads really like it.

Problem is that with the arrival of drm syncobj and a new userspace waiting
entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
this mini series really (really) quickly to see if some discussion can be had.

It adds a concept of "wait count" to dma fence, which is incremented for every
explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
dma_fence_add_callback but from explicit/userspace wait paths).

Individual drivers can then inspect this via dma_fence_wait_count() and decide
to wait boost the waits on such fences.
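
For illustration, a driver could consume this from its
dma_fence_ops.enable_signaling hook along the following lines (sketch only;
the foo_* names are hypothetical placeholders, the actual i915 wiring is in
patch 3):

/* Sketch only - foo_* names are hypothetical driver helpers. */
static bool foo_fence_enable_signaling(struct dma_fence *fence)
{
	struct foo_fence *ff = container_of(fence, struct foo_fence, base);

	/* Someone is explicitly waiting on this fence - consider boosting. */
	if (dma_fence_wait_count(fence))
		foo_boost_gpu_clocks(ff->gpu);

	return foo_arm_signaling_irq(ff);
}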

Again, quickly put together and smoke tested only - no guarantees whatsoever and
I will rely on interested parties to test and report if it even works or how
well.

v2:
 * Small fixups based on CI feedback:
    * Handle decrement correctly for already signalled case while adding callback.
    * Remove i915 assert which was making sure struct i915_request does not grow.
 * Split out the i915 patch into three separate functional changes.

Tvrtko Ursulin (5):
  dma-fence: Track explicit waiters
  drm/syncobj: Mark syncobj waits as external waiters
  drm/i915: Waitboost external waits
  drm/i915: Mark waits as explicit
  drm/i915: Wait boost requests waited upon by others

 drivers/dma-buf/dma-fence.c               | 102 ++++++++++++++++------
 drivers/gpu/drm/drm_syncobj.c             |   6 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |   1 -
 drivers/gpu/drm/i915/i915_request.c       |  13 ++-
 include/linux/dma-fence.h                 |  14 +++
 5 files changed, 101 insertions(+), 35 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [RFC 1/5] dma-fence: Track explicit waiters
  2023-02-10 13:06 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-02-10 13:06   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-10 13:06 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Track how many callers are explicitly waiting on a fence to signal and
allow querying that via the new dma_fence_wait_count() API.

This provides infrastructure on top of which generic "waitboost" concepts
can be implemented by individual drivers. Wait-boosting is any reactive
activity, such as raising the GPU clocks, which happens while there are
active external waiters.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/dma-buf/dma-fence.c               | 102 ++++++++++++++++------
 drivers/gpu/drm/i915/gt/intel_engine_pm.c |   1 -
 include/linux/dma-fence.h                 |  14 +++
 3 files changed, 88 insertions(+), 29 deletions(-)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 0de0482cd36e..ed43290c0bdf 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -344,6 +344,24 @@ void __dma_fence_might_wait(void)
 }
 #endif
 
+static void incr_wait_count(struct dma_fence *fence, struct dma_fence_cb *cb)
+{
+	lockdep_assert_held(fence->lock);
+
+	__set_bit(DMA_FENCE_CB_FLAG_WAITCOUNT_BIT, &cb->flags);
+	fence->waitcount++;
+	WARN_ON_ONCE(!fence->waitcount);
+}
+
+static void decr_wait_count(struct dma_fence *fence, struct dma_fence_cb *cb)
+{
+	lockdep_assert_held(fence->lock);
+
+	if (__test_and_clear_bit(DMA_FENCE_CB_FLAG_WAITCOUNT_BIT, &cb->flags)) {
+		WARN_ON_ONCE(!fence->waitcount);
+		fence->waitcount--;
+	}
+}
 
 /**
  * dma_fence_signal_timestamp_locked - signal completion of a fence
@@ -384,6 +402,7 @@ int dma_fence_signal_timestamp_locked(struct dma_fence *fence,
 
 	list_for_each_entry_safe(cur, tmp, &cb_list, node) {
 		INIT_LIST_HEAD(&cur->node);
+		decr_wait_count(fence, cur);
 		cur->func(fence, cur);
 	}
 
@@ -612,35 +631,15 @@ void dma_fence_enable_sw_signaling(struct dma_fence *fence)
 	unsigned long flags;
 
 	spin_lock_irqsave(fence->lock, flags);
+	fence->waitcount++;
+	WARN_ON_ONCE(!fence->waitcount);
 	__dma_fence_enable_signaling(fence);
 	spin_unlock_irqrestore(fence->lock, flags);
 }
 EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
 
-/**
- * dma_fence_add_callback - add a callback to be called when the fence
- * is signaled
- * @fence: the fence to wait on
- * @cb: the callback to register
- * @func: the function to call
- *
- * Add a software callback to the fence. The caller should keep a reference to
- * the fence.
- *
- * @cb will be initialized by dma_fence_add_callback(), no initialization
- * by the caller is required. Any number of callbacks can be registered
- * to a fence, but a callback can only be registered to one fence at a time.
- *
- * If fence is already signaled, this function will return -ENOENT (and
- * *not* call the callback).
- *
- * Note that the callback can be called from an atomic context or irq context.
- *
- * Returns 0 in case of success, -ENOENT if the fence is already signaled
- * and -EINVAL in case of error.
- */
-int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
-			   dma_fence_func_t func)
+static int add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
+			dma_fence_func_t func, bool wait)
 {
 	unsigned long flags;
 	int ret = 0;
@@ -655,10 +654,15 @@ int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
 
 	spin_lock_irqsave(fence->lock, flags);
 
+	if (wait)
+		incr_wait_count(fence, cb);
+
 	if (__dma_fence_enable_signaling(fence)) {
 		cb->func = func;
 		list_add_tail(&cb->node, &fence->cb_list);
 	} else {
+		if (test_bit(DMA_FENCE_CB_FLAG_WAITCOUNT_BIT, &cb->flags))
+			decr_wait_count(fence, cb);
 		INIT_LIST_HEAD(&cb->node);
 		ret = -ENOENT;
 	}
@@ -667,8 +671,44 @@ int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
 
 	return ret;
 }
+
+/**
+ * dma_fence_add_callback - add a callback to be called when the fence
+ * is signaled
+ * @fence: the fence to wait on
+ * @cb: the callback to register
+ * @func: the function to call
+ *
+ * Add a software callback to the fence. The caller should keep a reference to
+ * the fence.
+ *
+ * @cb will be initialized by dma_fence_add_callback(), no initialization
+ * by the caller is required. Any number of callbacks can be registered
+ * to a fence, but a callback can only be registered to one fence at a time.
+ *
+ * If fence is already signaled, this function will return -ENOENT (and
+ * *not* call the callback).
+ *
+ * Note that the callback can be called from an atomic context or irq context.
+ *
+ * Returns 0 in case of success, -ENOENT if the fence is already signaled
+ * and -EINVAL in case of error.
+ */
+int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
+			   dma_fence_func_t func)
+{
+	return add_callback(fence, cb, func, false);
+}
 EXPORT_SYMBOL(dma_fence_add_callback);
 
+int dma_fence_add_wait_callback(struct dma_fence *fence,
+				struct dma_fence_cb *cb,
+				dma_fence_func_t func)
+{
+	return add_callback(fence, cb, func, true);
+}
+EXPORT_SYMBOL(dma_fence_add_wait_callback);
+
 /**
  * dma_fence_get_status - returns the status upon completion
  * @fence: the dma_fence to query
@@ -721,8 +761,10 @@ dma_fence_remove_callback(struct dma_fence *fence, struct dma_fence_cb *cb)
 	spin_lock_irqsave(fence->lock, flags);
 
 	ret = !list_empty(&cb->node);
-	if (ret)
+	if (ret) {
+		decr_wait_count(fence, cb);
 		list_del_init(&cb->node);
+	}
 
 	spin_unlock_irqrestore(fence->lock, flags);
 
@@ -780,6 +822,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 
 	cb.base.func = dma_fence_default_wait_cb;
 	cb.task = current;
+	incr_wait_count(fence, &cb.base);
 	list_add(&cb.base.node, &fence->cb_list);
 
 	while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
@@ -796,8 +839,10 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 			ret = -ERESTARTSYS;
 	}
 
-	if (!list_empty(&cb.base.node))
+	if (!list_empty(&cb.base.node)) {
+		decr_wait_count(fence, &cb.base);
 		list_del(&cb.base.node);
+	}
 	__set_current_state(TASK_RUNNING);
 
 out:
@@ -875,8 +920,8 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		struct dma_fence *fence = fences[i];
 
 		cb[i].task = current;
-		if (dma_fence_add_callback(fence, &cb[i].base,
-					   dma_fence_default_wait_cb)) {
+		if (dma_fence_add_wait_callback(fence, &cb[i].base,
+						dma_fence_default_wait_cb)) {
 			/* This fence is already signaled */
 			if (idx)
 				*idx = i;
@@ -957,6 +1002,7 @@ dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
 	fence->context = context;
 	fence->seqno = seqno;
 	fence->flags = 0UL;
+	fence->waitcount = 0;
 	fence->error = 0;
 
 	trace_dma_fence_init(fence);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index e971b153fda9..2693a0151a6b 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -218,7 +218,6 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
 		 * until the background request retirement running every
 		 * second or two).
 		 */
-		BUILD_BUG_ON(sizeof(rq->duration) > sizeof(rq->submitq));
 		dma_fence_add_callback(&rq->fence, &rq->duration.cb, duration);
 		rq->duration.emitted = ktime_get();
 	}
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 775cdc0b4f24..d0ed923e4545 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -92,6 +92,7 @@ struct dma_fence {
 	u64 seqno;
 	unsigned long flags;
 	struct kref refcount;
+	unsigned int waitcount;
 	int error;
 };
 
@@ -116,6 +117,11 @@ typedef void (*dma_fence_func_t)(struct dma_fence *fence,
 struct dma_fence_cb {
 	struct list_head node;
 	dma_fence_func_t func;
+	unsigned long flags;
+};
+
+enum dma_fence_cb_flag_bits {
+	DMA_FENCE_CB_FLAG_WAITCOUNT_BIT,
 };
 
 /**
@@ -377,6 +383,9 @@ signed long dma_fence_default_wait(struct dma_fence *fence,
 int dma_fence_add_callback(struct dma_fence *fence,
 			   struct dma_fence_cb *cb,
 			   dma_fence_func_t func);
+int dma_fence_add_wait_callback(struct dma_fence *fence,
+				struct dma_fence_cb *cb,
+				dma_fence_func_t func);
 bool dma_fence_remove_callback(struct dma_fence *fence,
 			       struct dma_fence_cb *cb);
 void dma_fence_enable_sw_signaling(struct dma_fence *fence);
@@ -528,6 +537,11 @@ static inline int dma_fence_get_status_locked(struct dma_fence *fence)
 
 int dma_fence_get_status(struct dma_fence *fence);
 
+static inline unsigned int dma_fence_wait_count(struct dma_fence *fence)
+{
+	return fence->waitcount;
+}
+
 /**
  * dma_fence_set_error - flag an error condition on the fence
  * @fence: the dma_fence
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [RFC 2/5] drm/syncobj: Mark syncobj waits as external waiters
  2023-02-10 13:06 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-02-10 13:06   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-10 13:06 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Use the previously added dma-fence tracking of explicit waiters.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/drm_syncobj.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 0c2be8360525..776b90774a64 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -1065,9 +1065,9 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
 			if ((flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE) ||
 			    dma_fence_is_signaled(fence) ||
 			    (!entries[i].fence_cb.func &&
-			     dma_fence_add_callback(fence,
-						    &entries[i].fence_cb,
-						    syncobj_wait_fence_func))) {
+			     dma_fence_add_wait_callback(fence,
+							 &entries[i].fence_cb,
+							 syncobj_wait_fence_func))) {
 				/* The fence has been signaled */
 				if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL) {
 					signaled_count++;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [RFC 3/5] drm/i915: Waitboost external waits
  2023-02-10 13:06 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-02-10 13:06   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-10 13:06 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Userspace waits coming via the drm_syncobj route have so far been
bypassing the waitboost mechanism.

Use the previously added dma-fence wait tracking API and apply the
same waitboosting logic which applies to other entry points.

This should fix the performance regressions experienced by clvk and
similar userspace which relies on drm_syncobj.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_request.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 7503dcb9043b..8989f62a7fba 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -94,7 +94,12 @@ static bool i915_fence_signaled(struct dma_fence *fence)
 
 static bool i915_fence_enable_signaling(struct dma_fence *fence)
 {
-	return i915_request_enable_breadcrumb(to_request(fence));
+	struct i915_request *rq = to_request(fence);
+
+	if (dma_fence_wait_count(&rq->fence) && !i915_request_started(rq))
+		intel_rps_boost(rq);
+
+	return i915_request_enable_breadcrumb(rq);
 }
 
 static signed long i915_fence_wait(struct dma_fence *fence,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [RFC 4/5] drm/i915: Mark waits as explicit
  2023-02-10 13:06 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-02-10 13:06   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-10 13:06 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Use the previously added dma-fence API to mark the direct i915 waits as
explicit. This has no significant effect apart from following the new
pattern.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_request.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 8989f62a7fba..488b180f8821 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -2046,7 +2046,8 @@ long i915_request_wait_timeout(struct i915_request *rq,
 		intel_rps_boost(rq);
 
 	wait.tsk = current;
-	if (dma_fence_add_callback(&rq->fence, &wait.cb, request_wait_wake))
+	if (dma_fence_add_wait_callback(&rq->fence, &wait.cb,
+					request_wait_wake))
 		goto out;
 
 	/*
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [RFC 5/5] drm/i915: Wait boost requests waited upon by others
  2023-02-10 13:06 ` [Intel-gfx] " Tvrtko Ursulin
@ 2023-02-10 13:06   ` Tvrtko Ursulin
  -1 siblings, 0 replies; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-10 13:06 UTC (permalink / raw)
  To: Intel-gfx, dri-devel; +Cc: Tvrtko Ursulin

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Use the newly added dma-fence API to apply waitboost not only to requests
which have been marked with I915_WAIT_PRIORITY by i915, but also to requests
which may be waited upon by others (such as, for instance, buffer sharing in
multi-GPU scenarios).

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_request.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 488b180f8821..e24fac5c1567 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -2042,7 +2042,8 @@ long i915_request_wait_timeout(struct i915_request *rq,
 	 * but at a cost of spending more power processing the workload
 	 * (bad for battery).
 	 */
-	if (flags & I915_WAIT_PRIORITY && !i915_request_started(rq))
+	if (((flags & I915_WAIT_PRIORITY) || dma_fence_wait_count(&rq->fence))
+	    && !i915_request_started(rq))
 		intel_rps_boost(rq);
 
 	wait.tsk = current;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Waitboost drm syncobj waits (rev2)
  2023-02-10 13:06 ` [Intel-gfx] " Tvrtko Ursulin
                   ` (5 preceding siblings ...)
  (?)
@ 2023-02-10 13:24 ` Patchwork
  -1 siblings, 0 replies; 40+ messages in thread
From: Patchwork @ 2023-02-10 13:24 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Waitboost drm syncobj waits (rev2)
URL   : https://patchwork.freedesktop.org/series/113846/
State : warning

== Summary ==

Error: dim sparse failed
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.



^ permalink raw reply	[flat|nested] 40+ messages in thread

* [Intel-gfx] ✗ Fi.CI.BAT: failure for Waitboost drm syncobj waits (rev2)
  2023-02-10 13:06 ` [Intel-gfx] " Tvrtko Ursulin
                   ` (6 preceding siblings ...)
  (?)
@ 2023-02-10 17:05 ` Patchwork
  -1 siblings, 0 replies; 40+ messages in thread
From: Patchwork @ 2023-02-10 17:05 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

== Series Details ==

Series: Waitboost drm syncobj waits (rev2)
URL   : https://patchwork.freedesktop.org/series/113846/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_12727 -> Patchwork_113846v2
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_113846v2 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_113846v2, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/index.html

Participating hosts (40 -> 38)
------------------------------

  Missing    (2): bat-atsm-1 fi-snb-2520m 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_113846v2:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_busy@busy@all-engines:
    - fi-ivb-3770:        [PASS][1] -> [ABORT][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-ivb-3770/igt@gem_busy@busy@all-engines.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-ivb-3770/igt@gem_busy@busy@all-engines.html

  * igt@i915_module_load@load:
    - fi-ilk-650:         [PASS][3] -> [ABORT][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-ilk-650/igt@i915_module_load@load.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-ilk-650/igt@i915_module_load@load.html
    - fi-blb-e6850:       [PASS][5] -> [ABORT][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-blb-e6850/igt@i915_module_load@load.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-blb-e6850/igt@i915_module_load@load.html
    - fi-bsw-n3050:       [PASS][7] -> [ABORT][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-bsw-n3050/igt@i915_module_load@load.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-bsw-n3050/igt@i915_module_load@load.html
    - fi-rkl-11600:       [PASS][9] -> [ABORT][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-rkl-11600/igt@i915_module_load@load.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-rkl-11600/igt@i915_module_load@load.html
    - fi-skl-6600u:       [PASS][11] -> [ABORT][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-skl-6600u/igt@i915_module_load@load.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-skl-6600u/igt@i915_module_load@load.html
    - fi-apl-guc:         [PASS][13] -> [ABORT][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-apl-guc/igt@i915_module_load@load.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-apl-guc/igt@i915_module_load@load.html
    - bat-dg1-5:          [PASS][15] -> [ABORT][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-dg1-5/igt@i915_module_load@load.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-dg1-5/igt@i915_module_load@load.html
    - fi-pnv-d510:        [PASS][17] -> [ABORT][18]
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-pnv-d510/igt@i915_module_load@load.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-pnv-d510/igt@i915_module_load@load.html
    - fi-glk-j4005:       [PASS][19] -> [ABORT][20]
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-glk-j4005/igt@i915_module_load@load.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-glk-j4005/igt@i915_module_load@load.html
    - fi-skl-guc:         [PASS][21] -> [ABORT][22]
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-skl-guc/igt@i915_module_load@load.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-skl-guc/igt@i915_module_load@load.html
    - bat-dg1-6:          [PASS][23] -> [ABORT][24]
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-dg1-6/igt@i915_module_load@load.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-dg1-6/igt@i915_module_load@load.html
    - fi-kbl-7567u:       [PASS][25] -> [ABORT][26]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-kbl-7567u/igt@i915_module_load@load.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-kbl-7567u/igt@i915_module_load@load.html
    - fi-cfl-8700k:       [PASS][27] -> [ABORT][28]
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-cfl-8700k/igt@i915_module_load@load.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-cfl-8700k/igt@i915_module_load@load.html
    - fi-elk-e7500:       [PASS][29] -> [ABORT][30]
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-elk-e7500/igt@i915_module_load@load.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-elk-e7500/igt@i915_module_load@load.html
    - fi-bsw-nick:        [PASS][31] -> [ABORT][32]
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-bsw-nick/igt@i915_module_load@load.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-bsw-nick/igt@i915_module_load@load.html
    - fi-tgl-1115g4:      [PASS][33] -> [ABORT][34]
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-tgl-1115g4/igt@i915_module_load@load.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-tgl-1115g4/igt@i915_module_load@load.html
    - fi-kbl-x1275:       [PASS][35] -> [ABORT][36]
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-kbl-x1275/igt@i915_module_load@load.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-kbl-x1275/igt@i915_module_load@load.html
    - fi-hsw-4770:        [PASS][37] -> [ABORT][38]
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-hsw-4770/igt@i915_module_load@load.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-hsw-4770/igt@i915_module_load@load.html
    - fi-cfl-8109u:       [PASS][39] -> [ABORT][40]
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-cfl-8109u/igt@i915_module_load@load.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-cfl-8109u/igt@i915_module_load@load.html
    - fi-kbl-guc:         [PASS][41] -> [ABORT][42]
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-kbl-guc/igt@i915_module_load@load.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-kbl-guc/igt@i915_module_load@load.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@i915_module_load@load:
    - {bat-jsl-1}:        [PASS][43] -> [ABORT][44]
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-jsl-1/igt@i915_module_load@load.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-jsl-1/igt@i915_module_load@load.html
    - {bat-rpls-1}:       [PASS][45] -> [ABORT][46]
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-rpls-1/igt@i915_module_load@load.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-rpls-1/igt@i915_module_load@load.html
    - {bat-adlp-6}:       [PASS][47] -> [ABORT][48]
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-adlp-6/igt@i915_module_load@load.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-adlp-6/igt@i915_module_load@load.html
    - {bat-adls-5}:       [PASS][49] -> [ABORT][50]
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-adls-5/igt@i915_module_load@load.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-adls-5/igt@i915_module_load@load.html
    - {bat-dg1-7}:        [PASS][51] -> [ABORT][52]
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-dg1-7/igt@i915_module_load@load.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-dg1-7/igt@i915_module_load@load.html
    - {bat-jsl-3}:        [PASS][53] -> [ABORT][54]
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-jsl-3/igt@i915_module_load@load.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-jsl-3/igt@i915_module_load@load.html
    - {bat-adlp-9}:       [PASS][55] -> [ABORT][56]
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-adlp-9/igt@i915_module_load@load.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-adlp-9/igt@i915_module_load@load.html
    - {bat-dg2-11}:       [PASS][57] -> [ABORT][58]
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-dg2-11/igt@i915_module_load@load.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-dg2-11/igt@i915_module_load@load.html
    - {bat-adln-1}:       [PASS][59] -> [ABORT][60]
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-adln-1/igt@i915_module_load@load.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-adln-1/igt@i915_module_load@load.html
    - {bat-adlm-1}:       [PASS][61] -> [ABORT][62]
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-adlm-1/igt@i915_module_load@load.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-adlm-1/igt@i915_module_load@load.html
    - {bat-rplp-1}:       [PASS][63] -> [ABORT][64]
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-rplp-1/igt@i915_module_load@load.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-rplp-1/igt@i915_module_load@load.html
    - {bat-dg2-9}:        [PASS][65] -> [ABORT][66]
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-dg2-9/igt@i915_module_load@load.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-dg2-9/igt@i915_module_load@load.html
    - {bat-rpls-2}:       [PASS][67] -> [ABORT][68]
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-rpls-2/igt@i915_module_load@load.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-rpls-2/igt@i915_module_load@load.html
    - {bat-dg2-8}:        [PASS][69] -> [ABORT][70]
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/bat-dg2-8/igt@i915_module_load@load.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/bat-dg2-8/igt@i915_module_load@load.html

  
Known issues
------------

  Here are the changes found in Patchwork_113846v2 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_module_load@load:
    - fi-cfl-guc:         [PASS][71] -> [ABORT][72] ([i915#8141])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12727/fi-cfl-guc/igt@i915_module_load@load.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/fi-cfl-guc/igt@i915_module_load@load.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#8141]: https://gitlab.freedesktop.org/drm/intel/issues/8141


Build changes
-------------

  * Linux: CI_DRM_12727 -> Patchwork_113846v2

  CI-20190529: 20190529
  CI_DRM_12727: 571111c8567d76c5a5ec4641ba771bb1bdd278fc @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7156: d10786443fef434534ec4d08e93db92dfad84c7b @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_113846v2: 571111c8567d76c5a5ec4641ba771bb1bdd278fc @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

7a08fd7f270f drm/i915: Wait boost requests waited upon by others
8c687c784061 drm/i915: Mark waits as explicit
119d3caa3ca6 drm/i915: Waitboost external waits
dfadc3ae295a drm/syncobj: Mark syncobj waits as external waiters
9729003feb78 dma-fence: Track explicit waiters

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_113846v2/index.html


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-10 13:06 ` [Intel-gfx] " Tvrtko Ursulin
                   ` (7 preceding siblings ...)
  (?)
@ 2023-02-14 19:14 ` Rob Clark
  2023-02-14 19:26   ` Rob Clark
                     ` (2 more replies)
  -1 siblings, 3 replies; 40+ messages in thread
From: Rob Clark @ 2023-02-14 19:14 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: Rob Clark, Intel-gfx, dri-devel

On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>
> In i915 we have this concept of "wait boosting" where we give a priority boost
> for instance to fences which are actively waited upon from userspace. This has
> its pros and cons and can certainly be discussed at length. However, the fact is
> some workloads really like it.
>
> Problem is that with the arrival of drm syncobj and a new userspace waiting
> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> this mini series really (really) quickly to see if some discussion can be had.
>
> It adds a concept of "wait count" to dma fence, which is incremented for every
> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> dma_fence_add_callback but from explicit/userspace wait paths).

I was thinking about a similar thing, but in the context of dma_fence
(or rather sync_file) fd poll()ing.  How does the kernel differentiate
between "housekeeping" poll()ers that don't want to trigger boost but
simply know when to do cleanup, and waiters who are waiting with some
urgency.  I think we could use EPOLLPRI for this purpose.
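
As a sketch of that idea only (no kernel currently treats POLLPRI on a
sync_file as a boost hint), the userspace side could look like:

#include <poll.h>

/* Proposed convention only - the POLLPRI-as-boost-hint kernel side
 * does not exist yet. */
static int wait_fence_urgent(int sync_file_fd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd = sync_file_fd,
		.events = POLLIN | POLLPRI,	/* POLLPRI = urgent wait */
	};

	return poll(&pfd, 1, timeout_ms);	/* > 0 once the fence signals */
}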

Not sure how that translates to waits via the syncobj.  But I think we
want to let userspace give some hint about urgent vs housekeeping
waits.

Also, on a related topic: https://lwn.net/Articles/868468/

BR,
-R

> Individual drivers can then inspect this via dma_fence_wait_count() and decide
> to wait boost the waits on such fences.
>
> Again, quickly put together and smoke tested only - no guarantees whatsoever and
> I will rely on interested parties to test and report if it even works or how
> well.
>
> v2:
>  * Small fixups based on CI feedback:
>     * Handle decrement correctly for already signalled case while adding callback.
>     * Remove i915 assert which was making sure struct i915_request does not grow.
>  * Split out the i915 patch into three separate functional changes.
>
> Tvrtko Ursulin (5):
>   dma-fence: Track explicit waiters
>   drm/syncobj: Mark syncobj waits as external waiters
>   drm/i915: Waitboost external waits
>   drm/i915: Mark waits as explicit
>   drm/i915: Wait boost requests waited upon by others
>
>  drivers/dma-buf/dma-fence.c               | 102 ++++++++++++++++------
>  drivers/gpu/drm/drm_syncobj.c             |   6 +-
>  drivers/gpu/drm/i915/gt/intel_engine_pm.c |   1 -
>  drivers/gpu/drm/i915/i915_request.c       |  13 ++-
>  include/linux/dma-fence.h                 |  14 +++
>  5 files changed, 101 insertions(+), 35 deletions(-)
>
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-14 19:14 ` [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits Rob Clark
@ 2023-02-14 19:26   ` Rob Clark
  2023-02-16 11:19   ` Tvrtko Ursulin
  2023-02-16 18:19     ` Rodrigo Vivi
  2 siblings, 0 replies; 40+ messages in thread
From: Rob Clark @ 2023-02-14 19:26 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: Rob Clark, Intel-gfx, dri-devel

On Tue, Feb 14, 2023 at 11:14 AM Rob Clark <robdclark@gmail.com> wrote:
>
> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
> >
> > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >
> > In i915 we have this concept of "wait boosting" where we give a priority boost
> > for instance to fences which are actively waited upon from userspace. This has
> > its pros and cons and can certainly be discussed at length. However, the fact is
> > some workloads really like it.
> >
> > Problem is that with the arrival of drm syncobj and a new userspace waiting
> > entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> > this mini series really (really) quickly to see if some discussion can be had.
> >
> > It adds a concept of "wait count" to dma fence, which is incremented for every
> > explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> > dma_fence_add_callback but from explicit/userspace wait paths).
>
> I was thinking about a similar thing, but in the context of dma_fence
> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> between "housekeeping" poll()ers that don't want to trigger boost but
> simply know when to do cleanup, and waiters who are waiting with some
> urgency.  I think we could use EPOLLPRI for this purpose.
>
> Not sure how that translates to waits via the syncobj.  But I think we
> want to let userspace give some hint about urgent vs housekeeping
> waits.

So probably the syncobj equiv of this would be to add something along
the lines of DRM_SYNCOBJ_WAIT_FLAGS_WAIT_PRI
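
Roughly, from the userspace side (sketch only - the flag name and value
below are made up and not part of the current uAPI, and the include path
may vary):

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

/* Hypothetical flag - not part of the current uAPI. */
#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_PRI (1 << 3)

static int syncobj_wait_urgent(int drm_fd, uint32_t handle,
			       int64_t abs_timeout_ns)
{
	struct drm_syncobj_wait wait = {
		.handles = (uint64_t)(uintptr_t)&handle,
		.timeout_nsec = abs_timeout_ns,
		.count_handles = 1,
		.flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_PRI, /* "urgent" hint */
	};

	return ioctl(drm_fd, DRM_IOCTL_SYNCOBJ_WAIT, &wait);
}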

BR,
-R

> Also, on a related topic: https://lwn.net/Articles/868468/
>
> BR,
> -R
>
> > Individual drivers can then inspect this via dma_fence_wait_count() and decide
> > to wait boost the waits on such fences.
> >
> > Again, quickly put together and smoke tested only - no guarantees whatsoever and
> > I will rely on interested parties to test and report if it even works or how
> > well.
> >
> > v2:
> >  * Small fixups based on CI feedback:
> >     * Handle decrement correctly for already signalled case while adding callback.
> >     * Remove i915 assert which was making sure struct i915_request does not grow.
> >  * Split out the i915 patch into three separate functional changes.
> >
> > Tvrtko Ursulin (5):
> >   dma-fence: Track explicit waiters
> >   drm/syncobj: Mark syncobj waits as external waiters
> >   drm/i915: Waitboost external waits
> >   drm/i915: Mark waits as explicit
> >   drm/i915: Wait boost requests waited upon by others
> >
> >  drivers/dma-buf/dma-fence.c               | 102 ++++++++++++++++------
> >  drivers/gpu/drm/drm_syncobj.c             |   6 +-
> >  drivers/gpu/drm/i915/gt/intel_engine_pm.c |   1 -
> >  drivers/gpu/drm/i915/i915_request.c       |  13 ++-
> >  include/linux/dma-fence.h                 |  14 +++
> >  5 files changed, 101 insertions(+), 35 deletions(-)
> >
> > --
> > 2.34.1
> >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-14 19:14 ` [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits Rob Clark
  2023-02-14 19:26   ` Rob Clark
@ 2023-02-16 11:19   ` Tvrtko Ursulin
  2023-02-16 15:43       ` Rob Clark
  2023-02-16 18:19     ` Rodrigo Vivi
  2 siblings, 1 reply; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-16 11:19 UTC (permalink / raw)
  To: Rob Clark; +Cc: Rob Clark, Intel-gfx, dri-devel


On 14/02/2023 19:14, Rob Clark wrote:
> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> In i915 we have this concept of "wait boosting" where we give a priority boost
>> for instance to fences which are actively waited upon from userspace. This has
>> its pros and cons and can certainly be discussed at length. However, the fact is
>> some workloads really like it.
>>
>> Problem is that with the arrival of drm syncobj and a new userspace waiting
>> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
>> this mini series really (really) quickly to see if some discussion can be had.
>>
>> It adds a concept of "wait count" to dma fence, which is incremented for every
>> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
>> dma_fence_add_callback but from explicit/userspace wait paths).
> 
> I was thinking about a similar thing, but in the context of dma_fence
> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> between "housekeeping" poll()ers that don't want to trigger boost but
> simply know when to do cleanup, and waiters who are waiting with some
> urgency.  I think we could use EPOLLPRI for this purpose.

Sounds plausible to allow distinguishing the two.

I wasn't aware one can set POLLPRI in pollfd.events but it appears it could be allowed:

/* Event types that can be polled for.  These bits may be set in `events'
    to indicate the interesting event types; they will appear in `revents'
    to indicate the status of the file descriptor.  */
#define POLLIN          0x001           /* There is data to read.  */
#define POLLPRI         0x002           /* There is urgent data to read.  */
#define POLLOUT         0x004           /* Writing now will not block.  */

> Not sure how that translates to waits via the syncobj.  But I think we
> want to let userspace give some hint about urgent vs housekeeping
> waits.

Probably DRM_SYNCOBJ_WAIT_FLAGS_<something>.

Both look easy additions on top of my series. It would be just a matter of dma_fence_add_callback vs dma_fence_add_wait_callback based on flags, as that's how I called the "explicit userspace wait" one.
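
Something along these lines (sketch only, assuming a hypothetical
DRM_SYNCOBJ_WAIT_FLAGS_WAIT_PRI flag on top of this series):

/* Sketch only - pick the waitboost-tracking variant based on a
 * hypothetical DRM_SYNCOBJ_WAIT_FLAGS_WAIT_PRI flag from userspace. */
static int syncobj_add_fence_cb(struct dma_fence *fence,
				struct dma_fence_cb *cb,
				dma_fence_func_t func,
				u32 flags)
{
	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_PRI)
		return dma_fence_add_wait_callback(fence, cb, func);

	return dma_fence_add_callback(fence, cb, func);
}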

It would require userspace changes to make use of it, but that is probably okay, or even preferable, since it makes the thing less of a heuristic. What I don't know, however, is how feasible it is to wire it up with, say, OpenCL, OpenGL or Vulkan, to allow application writers to distinguish between housekeeping and performance-sensitive waits.

> Also, on a related topic: https://lwn.net/Articles/868468/

Right, I missed that one.

One thing to mention is that my motivation here wasn't strictly waits relating to frame presentation, but clvk workloads which constantly move between the CPU and GPU. Even outside the compute domain, I think this is a workload characteristic where waitboost in general helps. The concept of a deadline could still be used, I guess, just setting it to some artificially early value when the actual time does not exist. But scanning that discussion, it seems the proposal got bogged down in interactions between mode setting and stuff?

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-16 11:19   ` Tvrtko Ursulin
@ 2023-02-16 15:43       ` Rob Clark
  0 siblings, 0 replies; 40+ messages in thread
From: Rob Clark @ 2023-02-16 15:43 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: Rob Clark, Intel-gfx, dri-devel

On Thu, Feb 16, 2023 at 3:19 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 14/02/2023 19:14, Rob Clark wrote:
> > On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> > <tvrtko.ursulin@linux.intel.com> wrote:
> >>
> >> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>
> >> In i915 we have this concept of "wait boosting" where we give a priority boost
> >> for instance to fences which are actively waited upon from userspace. This has
> >> it's pros and cons and can certainly be discussed at lenght. However fact is
> >> some workloads really like it.
> >>
> >> Problem is that with the arrival of drm syncobj and a new userspace waiting
> >> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> >> this mini series really (really) quickly to see if some discussion can be had.
> >>
> >> It adds a concept of "wait count" to dma fence, which is incremented for every
> >> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> >> dma_fence_add_callback but from explicit/userspace wait paths).
> >
> > I was thinking about a similar thing, but in the context of dma_fence
> > (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> > between "housekeeping" poll()ers that don't want to trigger boost but
> > simply know when to do cleanup, and waiters who are waiting with some
> > urgency.  I think we could use EPOLLPRI for this purpose.
>
> Sounds plausible to allow distinguishing the two.
>
> I wasn't aware one can set POLLPRI in pollfd.events but it appears it could be allowed:
>
> /* Event types that can be polled for.  These bits may be set in `events'
>     to indicate the interesting event types; they will appear in `revents'
>     to indicate the status of the file descriptor.  */
> #define POLLIN          0x001           /* There is data to read.  */
> #define POLLPRI         0x002           /* There is urgent data to read.  */
> #define POLLOUT         0x004           /* Writing now will not block.  */
>
> > Not sure how that translates to waits via the syncobj.  But I think we
> > want to let userspace give some hint about urgent vs housekeeping
> > waits.
>
> Probably DRM_SYNCOBJ_WAIT_FLAGS_<something>.
>
> Both look easy additions on top of my series. It would be just a matter of dma_fence_add_callback vs dma_fence_add_wait_callback based on flags, as that's how I called the "explicit userspace wait" one.
>
> It would require userspace changes to make use of it but that is probably okay, or even preferable, since it makes the thing less of a heuristic. What I don't know however is how feasible is to wire it up with say OpenCL, OpenGL or Vulkan, to allow application writers distinguish between house keeping vs performance sensitive waits.
>

I think to start with, we consider API level waits as
POLLPRI/DRM_SYNCOBJ_WAIT_PRI until someone types up an extension to
give the app control.  I guess most housekeeping waits will be within
the driver.

(I could see the argument for making "PRI" the default and having a
new flag for non-boosting waits.. but POLLPRI is also some sort of
precedent)

> > Also, on a related topic: https://lwn.net/Articles/868468/
>
> Right, I missed that one.
>
> One thing to mention is that my motivation here wasn't strictly waits relating to frame presentation but clvk workloads which constantly move between the CPU and GPU. Even outside the compute domain, I think this is a workload characteristic where waitboost in general helps. The concept of deadline could still be used I guess, just setting it for some artificially early value, when the actual time does not exist. But scanning that discussion seems the proposal got bogged down in interactions between mode setting and stuff?
>

Yeah, it isn't _exactly_ the same thing but it is the same class of
problem where GPU stalling on something else sends the freq in the
wrong direction.  Probably we could consider wait-boosting as simply
an immediate deadline to unify the two things.

BR,
-R


> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-14 19:14 ` [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits Rob Clark
@ 2023-02-16 18:19     ` Rodrigo Vivi
  2023-02-16 11:19   ` Tvrtko Ursulin
  2023-02-16 18:19     ` Rodrigo Vivi
  2 siblings, 0 replies; 40+ messages in thread
From: Rodrigo Vivi @ 2023-02-16 18:19 UTC (permalink / raw)
  To: Rob Clark, Alex Deucher; +Cc: Rob Clark, Tvrtko Ursulin, Intel-gfx, dri-devel

On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
> >
> > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >
> > In i915 we have this concept of "wait boosting" where we give a priority boost
> > for instance to fences which are actively waited upon from userspace. This has
> > it's pros and cons and can certainly be discussed at lenght. However fact is
> > some workloads really like it.
> >
> > Problem is that with the arrival of drm syncobj and a new userspace waiting
> > entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> > this mini series really (really) quickly to see if some discussion can be had.
> >
> > It adds a concept of "wait count" to dma fence, which is incremented for every
> > explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> > dma_fence_add_callback but from explicit/userspace wait paths).
> 
> I was thinking about a similar thing, but in the context of dma_fence
> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> between "housekeeping" poll()ers that don't want to trigger boost but
> simply know when to do cleanup, and waiters who are waiting with some
> urgency.  I think we could use EPOLLPRI for this purpose.
> 
> Not sure how that translates to waits via the syncobj.  But I think we
> want to let userspace give some hint about urgent vs housekeeping
> waits.

Should the hint be on the waits, or should the hint be on the executing
context?

In the end we need some way to quickly ramp up the frequency to avoid
the execution bubbles.

waitboost is trying to guess that, but in some cases it guesses wrong
and wastes power.

btw, this is something that other drivers might need:

https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
Cc: Alex Deucher <alexander.deucher@amd.com>

> 
> Also, on a related topic: https://lwn.net/Articles/868468/
> 
> BR,
> -R
> 
> > Individual drivers can then inspect this via dma_fence_wait_count() and decide
> > to wait boost the waits on such fences.
> >
> > Again, quickly put together and smoke tested only - no guarantees whatsoever and
> > I will rely on interested parties to test and report if it even works or how
> > well.
> >
> > v2:
> >  * Small fixups based on CI feedback:
> >     * Handle decrement correctly for already signalled case while adding callback.
> >     * Remove i915 assert which was making sure struct i915_request does not grow.
> >  * Split out the i915 patch into three separate functional changes.
> >
> > Tvrtko Ursulin (5):
> >   dma-fence: Track explicit waiters
> >   drm/syncobj: Mark syncobj waits as external waiters
> >   drm/i915: Waitboost external waits
> >   drm/i915: Mark waits as explicit
> >   drm/i915: Wait boost requests waited upon by others
> >
> >  drivers/dma-buf/dma-fence.c               | 102 ++++++++++++++++------
> >  drivers/gpu/drm/drm_syncobj.c             |   6 +-
> >  drivers/gpu/drm/i915/gt/intel_engine_pm.c |   1 -
> >  drivers/gpu/drm/i915/i915_request.c       |  13 ++-
> >  include/linux/dma-fence.h                 |  14 +++
> >  5 files changed, 101 insertions(+), 35 deletions(-)
> >
> > --
> > 2.34.1
> >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-16 18:19     ` Rodrigo Vivi
@ 2023-02-16 19:59       ` Rob Clark
  -1 siblings, 0 replies; 40+ messages in thread
From: Rob Clark @ 2023-02-16 19:59 UTC (permalink / raw)
  To: Rodrigo Vivi
  Cc: Alex Deucher, Tvrtko Ursulin, Intel-gfx, Rob Clark, dri-devel

On Thu, Feb 16, 2023 at 10:20 AM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
>
> On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
> > On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> > <tvrtko.ursulin@linux.intel.com> wrote:
> > >
> > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > >
> > > In i915 we have this concept of "wait boosting" where we give a priority boost
> > > for instance to fences which are actively waited upon from userspace. This has
> > > it's pros and cons and can certainly be discussed at lenght. However fact is
> > > some workloads really like it.
> > >
> > > Problem is that with the arrival of drm syncobj and a new userspace waiting
> > > entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> > > this mini series really (really) quickly to see if some discussion can be had.
> > >
> > > It adds a concept of "wait count" to dma fence, which is incremented for every
> > > explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> > > dma_fence_add_callback but from explicit/userspace wait paths).
> >
> > I was thinking about a similar thing, but in the context of dma_fence
> > (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> > between "housekeeping" poll()ers that don't want to trigger boost but
> > simply know when to do cleanup, and waiters who are waiting with some
> > urgency.  I think we could use EPOLLPRI for this purpose.
> >
> > Not sure how that translates to waits via the syncobj.  But I think we
> > want to let userspace give some hint about urgent vs housekeeping
> > waits.
>
> Should the hint be on the waits, or should the hints be on the executed
> context?

I think it should be on the wait, because different waits may be for
different purposes.  Ideally this could be exposed at the app API
level, but I guess the first step is to expose it to userspace.

BR,
-R

> In the end we need some way to quickly ramp-up the frequency to avoid
> the execution bubbles.
>
> waitboost is trying to guess that, but in some cases it guess wrong
> and waste power.
>
> btw, this is something that other drivers might need:
>
> https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
> Cc: Alex Deucher <alexander.deucher@amd.com>
>
> >
> > Also, on a related topic: https://lwn.net/Articles/868468/
> >
> > BR,
> > -R
> >
> > > Individual drivers can then inspect this via dma_fence_wait_count() and decide
> > > to wait boost the waits on such fences.
> > >
> > > Again, quickly put together and smoke tested only - no guarantees whatsoever and
> > > I will rely on interested parties to test and report if it even works or how
> > > well.
> > >
> > > v2:
> > >  * Small fixups based on CI feedback:
> > >     * Handle decrement correctly for already signalled case while adding callback.
> > >     * Remove i915 assert which was making sure struct i915_request does not grow.
> > >  * Split out the i915 patch into three separate functional changes.
> > >
> > > Tvrtko Ursulin (5):
> > >   dma-fence: Track explicit waiters
> > >   drm/syncobj: Mark syncobj waits as external waiters
> > >   drm/i915: Waitboost external waits
> > >   drm/i915: Mark waits as explicit
> > >   drm/i915: Wait boost requests waited upon by others
> > >
> > >  drivers/dma-buf/dma-fence.c               | 102 ++++++++++++++++------
> > >  drivers/gpu/drm/drm_syncobj.c             |   6 +-
> > >  drivers/gpu/drm/i915/gt/intel_engine_pm.c |   1 -
> > >  drivers/gpu/drm/i915/i915_request.c       |  13 ++-
> > >  include/linux/dma-fence.h                 |  14 +++
> > >  5 files changed, 101 insertions(+), 35 deletions(-)
> > >
> > > --
> > > 2.34.1
> > >

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-16 18:19     ` Rodrigo Vivi
  (?)
  (?)
@ 2023-02-17 12:56     ` Tvrtko Ursulin
  2023-02-17 14:55       ` Rob Clark
  -1 siblings, 1 reply; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-17 12:56 UTC (permalink / raw)
  To: Rodrigo Vivi, Rob Clark, Alex Deucher; +Cc: Rob Clark, Intel-gfx, dri-devel


On 16/02/2023 18:19, Rodrigo Vivi wrote:
> On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
>> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>
>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>
>>> In i915 we have this concept of "wait boosting" where we give a priority boost
>>> for instance to fences which are actively waited upon from userspace. This has
>>> it's pros and cons and can certainly be discussed at lenght. However fact is
>>> some workloads really like it.
>>>
>>> Problem is that with the arrival of drm syncobj and a new userspace waiting
>>> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
>>> this mini series really (really) quickly to see if some discussion can be had.
>>>
>>> It adds a concept of "wait count" to dma fence, which is incremented for every
>>> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
>>> dma_fence_add_callback but from explicit/userspace wait paths).
>>
>> I was thinking about a similar thing, but in the context of dma_fence
>> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
>> between "housekeeping" poll()ers that don't want to trigger boost but
>> simply know when to do cleanup, and waiters who are waiting with some
>> urgency.  I think we could use EPOLLPRI for this purpose.
>>
>> Not sure how that translates to waits via the syncobj.  But I think we
>> want to let userspace give some hint about urgent vs housekeeping
>> waits.
> 
> Should the hint be on the waits, or should the hints be on the executed
> context?
> 
> In the end we need some way to quickly ramp-up the frequency to avoid
> the execution bubbles.
> 
> waitboost is trying to guess that, but in some cases it guess wrong
> and waste power.

Do we have a list of workloads which shows who benefits and who loses 
from the current implementation of waitboost?
> btw, this is something that other drivers might need:
> 
> https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
> Cc: Alex Deucher <alexander.deucher@amd.com>

I have several issues with the context hint if it would directly 
influence frequency selection in the "more power" direction.

First of all, assume a context hint would replace the waitboost. Which 
applications would need to set it to restore the lost performance and 
how would they set it?

Then I don't even think userspace necessarily knows. Think of a layer
like OpenCL. It doesn't really know in advance the profile of
submissions vs waits. It depends on the CPU vs GPU speed, that is on
the hardware generation, and on the actual size of the workload, which
can be influenced by the application (or user) and not the library.

The approach also lends itself well to the "arms race" where every
application can say "Me me me, I am the most important workload there is!".

My last concern is shared with the proposal to expose deadlines or high
priority waits as explicit uapi knobs. Both come under the "what the
application told us it will do" category vs what it actually does. So I
think it is slightly weaker than basing decisions on observed waits.

The current waitboost is a bit detached from that problem because when 
we waitboost for flips we _know_ it is an actual framebuffer in the flip 
chain. When we waitboost for waits we also know someone is waiting. We 
are not trusting userspace telling us this will be a buffer in the flip 
chain or that this is a context which will have a certain duty-cycle.

But yes, even if the input is truthful, the latter is still only a
heuristic because nothing says all waits are important. AFAIU it just
happened to work well in the past.

I do understand I am effectively arguing for more heuristics, which may 
sound a bit against the common wisdom. This is because in general I 
think the logic to do the right thing, be it in the driver or in the 
firmware, can work best if it has a holistic view. Simply put it needs 
to have more inputs to the decisions it is making.

That is what my series is proposing - adding a common signal that "someone
in userspace is waiting". What happens with that signal need not be
defined (promised) in the uapi contract.
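
For reference, on the driver side that signal boils down to something
like the sketch below. It uses the dma_fence_wait_count() helper from
this RFC plus the existing i915 boost path, so treat it as illustrative
rather than final:

static void maybe_waitboost(struct i915_request *rq)
{
	/* Non-zero means someone in userspace is explicitly waiting. */
	if (dma_fence_wait_count(&rq->fence))
		intel_rps_boost(rq); /* existing i915 waitboost path */
}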

Say you route it to SLPC logic. It doesn't need to do with it what
legacy i915 is doing today. It just needs to do something which works
best for the majority of workloads. It can even ignore it if that works
for it.

Finally, back to the immediate problem: when people replace the OpenCL
NEO driver with clvk, performance tanks. The former does waits using
i915 specific ioctls and so triggers waitboost, while the latter waits
on a syncobj, so there is no waitboost and performance is bad. What
short term solution can we come up with? Or do we say not to use clvk?
I also wonder if other Vulkan based stuff is perhaps missing those easy
performance gains..

Perhaps strictly speaking Rob's proposal and mine are not mutually
exclusive. Yes, I could piggyback on his with an "immediate deadline for
waits" idea, but they could also be separate concepts if we concluded the
"someone is waiting" signal is useful to have. Or if it takes too long to
upstream the full deadline idea.

Regards,

Tvrtko

>>
>> Also, on a related topic: https://lwn.net/Articles/868468/
>>
>> BR,
>> -R
>>
>>> Individual drivers can then inspect this via dma_fence_wait_count() and decide
>>> to wait boost the waits on such fences.
>>>
>>> Again, quickly put together and smoke tested only - no guarantees whatsoever and
>>> I will rely on interested parties to test and report if it even works or how
>>> well.
>>>
>>> v2:
>>>   * Small fixups based on CI feedback:
>>>      * Handle decrement correctly for already signalled case while adding callback.
>>>      * Remove i915 assert which was making sure struct i915_request does not grow.
>>>   * Split out the i915 patch into three separate functional changes.
>>>
>>> Tvrtko Ursulin (5):
>>>    dma-fence: Track explicit waiters
>>>    drm/syncobj: Mark syncobj waits as external waiters
>>>    drm/i915: Waitboost external waits
>>>    drm/i915: Mark waits as explicit
>>>    drm/i915: Wait boost requests waited upon by others
>>>
>>>   drivers/dma-buf/dma-fence.c               | 102 ++++++++++++++++------
>>>   drivers/gpu/drm/drm_syncobj.c             |   6 +-
>>>   drivers/gpu/drm/i915/gt/intel_engine_pm.c |   1 -
>>>   drivers/gpu/drm/i915/i915_request.c       |  13 ++-
>>>   include/linux/dma-fence.h                 |  14 +++
>>>   5 files changed, 101 insertions(+), 35 deletions(-)
>>>
>>> --
>>> 2.34.1
>>>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-17 12:56     ` Tvrtko Ursulin
@ 2023-02-17 14:55       ` Rob Clark
  2023-02-17 16:03         ` Tvrtko Ursulin
  0 siblings, 1 reply; 40+ messages in thread
From: Rob Clark @ 2023-02-17 14:55 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi

On Fri, Feb 17, 2023 at 4:56 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 16/02/2023 18:19, Rodrigo Vivi wrote:
> > On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
> >> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> >> <tvrtko.ursulin@linux.intel.com> wrote:
> >>>
> >>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>>
> >>> In i915 we have this concept of "wait boosting" where we give a priority boost
> >>> for instance to fences which are actively waited upon from userspace. This has
> >>> it's pros and cons and can certainly be discussed at lenght. However fact is
> >>> some workloads really like it.
> >>>
> >>> Problem is that with the arrival of drm syncobj and a new userspace waiting
> >>> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> >>> this mini series really (really) quickly to see if some discussion can be had.
> >>>
> >>> It adds a concept of "wait count" to dma fence, which is incremented for every
> >>> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> >>> dma_fence_add_callback but from explicit/userspace wait paths).
> >>
> >> I was thinking about a similar thing, but in the context of dma_fence
> >> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> >> between "housekeeping" poll()ers that don't want to trigger boost but
> >> simply know when to do cleanup, and waiters who are waiting with some
> >> urgency.  I think we could use EPOLLPRI for this purpose.
> >>
> >> Not sure how that translates to waits via the syncobj.  But I think we
> >> want to let userspace give some hint about urgent vs housekeeping
> >> waits.
> >
> > Should the hint be on the waits, or should the hints be on the executed
> > context?
> >
> > In the end we need some way to quickly ramp-up the frequency to avoid
> > the execution bubbles.
> >
> > waitboost is trying to guess that, but in some cases it guess wrong
> > and waste power.
>
> Do we have a list of workloads which shows who benefits and who loses
> from the current implementation of waitboost?
> > btw, this is something that other drivers might need:
> >
> > https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
> > Cc: Alex Deucher <alexander.deucher@amd.com>
>
> I have several issues with the context hint if it would directly
> influence frequency selection in the "more power" direction.
>
> First of all, assume a context hint would replace the waitboost. Which
> applications would need to set it to restore the lost performance and
> how would they set it?
>
> Then I don't even think userspace necessarily knows. Think of a layer
> like OpenCL. It doesn't really know in advance the profile of
> submissions vs waits. It depends on the CPU vs GPU speed, so hardware
> generation, and the actual size of the workload which can be influenced
> by the application (or user) and not the library.
>
> The approach also lends itself well for the "arms race" where every
> application can say "Me me me, I am the most important workload there is!".

since there is discussion happening in two places:

https://gitlab.freedesktop.org/drm/intel/-/issues/8014#note_1777433

What I think you might want is a ctx boost_mask which lets an app or
driver disable certain boost signals/classes.  Fence waits would be one
class of boost, while hypothetical other signals like touchscreen (or
other) input events could be another class.  A compute workload might
be interested in fence wait boosts but couldn't care less about input
events.
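
Purely to illustrate the shape of that idea (all names invented here,
nothing like this exists in the uapi):

/* Hypothetical per-context boost class mask, illustration only. */
#define DRM_CTX_BOOST_FENCE_WAIT (1 << 0) /* boost on explicit fence waits */
#define DRM_CTX_BOOST_INPUT      (1 << 1) /* boost on input events */
#define DRM_CTX_BOOST_FLIP       (1 << 2) /* boost on pageflip dependencies */

struct drm_ctx_set_boost_mask {
	__u32 ctx_id;     /* context to configure */
	__u32 boost_mask; /* which boost classes stay enabled */
};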

> The last concern is for me shared with the proposal to expose deadlines
> or high priority waits as explicit uapi knobs. Both come under the "what
> application told us it will do" category vs what it actually does. So I
> think it is slightly weaker than basing decisions of waits.
>
> The current waitboost is a bit detached from that problem because when
> we waitboost for flips we _know_ it is an actual framebuffer in the flip
> chain. When we waitboost for waits we also know someone is waiting. We
> are not trusting userspace telling us this will be a buffer in the flip
> chain or that this is a context which will have a certain duty-cycle.
>
> But yes, even if the input is truthful, latter is still only a
> heuristics because nothing says all waits are important. AFAIU it just
> happened to work well in the past.
>
> I do understand I am effectively arguing for more heuristics, which may
> sound a bit against the common wisdom. This is because in general I
> think the logic to do the right thing, be it in the driver or in the
> firmware, can work best if it has a holistic view. Simply put it needs
> to have more inputs to the decisions it is making.
>
> That is what my series is proposing - adding a common signal of "someone
> in userspace is waiting". What happens with that signal needs not be
> defined (promised) in the uapi contract.
>
> Say you route it to SLPC logic. It doesn't need to do with it what
> legacy i915 is doing today. It just needs to do something which works
> best for majority of workloads. It can even ignore it if that works for it.
>
> Finally, back to the immediate problem is when people replace the OpenCL
> NEO driver with clvk that performance tanks. Because former does waits
> using i915 specific ioctls and so triggers waitboost, latter waits on
> syncobj so no waitboost and performance is bad. What short term solution
> can we come up with? Or we say to not use clvk? I also wonder if other
> Vulkan based stuff is perhaps missing those easy performance gains..
>
> Perhaps strictly speaking Rob's and mine proposal are not mutually
> exclusive. Yes I could piggy back on his with an "immediate deadline for
> waits" idea, but they could also be separate concepts if we concluded
> "someone is waiting" signal is useful to have. Or it takes to long to
> upstream the full deadline idea.

Let me re-spin my series and add the syncobj wait flag and i915 bits
adapted from your patches..  I think the basic idea of deadlines
(which includes "I want it NOW" ;-)) isn't controversial, but the
original idea got caught up in some bikeshed (what about compositors
that wait on fences in userspace to decide which surfaces to update in
the next frame), plus me getting busy and generally not having a good
plan for how to leverage this from VM guests (which is becoming
increasingly important for CrOS).  I think I can build on some ongoing
virtgpu fencing improvement work to solve the latter.  But now that we
have a 2nd use-case for this, it makes sense to respin.
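
(For the "I want it NOW" case that would be roughly the sketch below,
using the dma_fence_set_deadline() helper as proposed in the deadline
series:)

static void boost_for_wait(struct dma_fence *fence)
{
	/* An urgent wait expressed as a deadline of "right now". */
	dma_fence_set_deadline(fence, ktime_get());
}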

BR,
-R

> Regards,
>
> Tvrtko
>
> >>
> >> Also, on a related topic: https://lwn.net/Articles/868468/
> >>
> >> BR,
> >> -R
> >>
> >>> Individual drivers can then inspect this via dma_fence_wait_count() and decide
> >>> to wait boost the waits on such fences.
> >>>
> >>> Again, quickly put together and smoke tested only - no guarantees whatsoever and
> >>> I will rely on interested parties to test and report if it even works or how
> >>> well.
> >>>
> >>> v2:
> >>>   * Small fixups based on CI feedback:
> >>>      * Handle decrement correctly for already signalled case while adding callback.
> >>>      * Remove i915 assert which was making sure struct i915_request does not grow.
> >>>   * Split out the i915 patch into three separate functional changes.
> >>>
> >>> Tvrtko Ursulin (5):
> >>>    dma-fence: Track explicit waiters
> >>>    drm/syncobj: Mark syncobj waits as external waiters
> >>>    drm/i915: Waitboost external waits
> >>>    drm/i915: Mark waits as explicit
> >>>    drm/i915: Wait boost requests waited upon by others
> >>>
> >>>   drivers/dma-buf/dma-fence.c               | 102 ++++++++++++++++------
> >>>   drivers/gpu/drm/drm_syncobj.c             |   6 +-
> >>>   drivers/gpu/drm/i915/gt/intel_engine_pm.c |   1 -
> >>>   drivers/gpu/drm/i915/i915_request.c       |  13 ++-
> >>>   include/linux/dma-fence.h                 |  14 +++
> >>>   5 files changed, 101 insertions(+), 35 deletions(-)
> >>>
> >>> --
> >>> 2.34.1
> >>>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-17 14:55       ` Rob Clark
@ 2023-02-17 16:03         ` Tvrtko Ursulin
  2023-02-17 17:00           ` Rob Clark
  0 siblings, 1 reply; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-17 16:03 UTC (permalink / raw)
  To: Rob Clark; +Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi


On 17/02/2023 14:55, Rob Clark wrote:
> On Fri, Feb 17, 2023 at 4:56 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>
>> On 16/02/2023 18:19, Rodrigo Vivi wrote:
>>> On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
>>>> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
>>>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>>>
>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>
>>>>> In i915 we have this concept of "wait boosting" where we give a priority boost
>>>>> for instance to fences which are actively waited upon from userspace. This has
>>>>> it's pros and cons and can certainly be discussed at lenght. However fact is
>>>>> some workloads really like it.
>>>>>
>>>>> Problem is that with the arrival of drm syncobj and a new userspace waiting
>>>>> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
>>>>> this mini series really (really) quickly to see if some discussion can be had.
>>>>>
>>>>> It adds a concept of "wait count" to dma fence, which is incremented for every
>>>>> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
>>>>> dma_fence_add_callback but from explicit/userspace wait paths).
>>>>
>>>> I was thinking about a similar thing, but in the context of dma_fence
>>>> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
>>>> between "housekeeping" poll()ers that don't want to trigger boost but
>>>> simply know when to do cleanup, and waiters who are waiting with some
>>>> urgency.  I think we could use EPOLLPRI for this purpose.
>>>>
>>>> Not sure how that translates to waits via the syncobj.  But I think we
>>>> want to let userspace give some hint about urgent vs housekeeping
>>>> waits.
>>>
>>> Should the hint be on the waits, or should the hints be on the executed
>>> context?
>>>
>>> In the end we need some way to quickly ramp-up the frequency to avoid
>>> the execution bubbles.
>>>
>>> waitboost is trying to guess that, but in some cases it guess wrong
>>> and waste power.
>>
>> Do we have a list of workloads which shows who benefits and who loses
>> from the current implementation of waitboost?
>>> btw, this is something that other drivers might need:
>>>
>>> https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
>>> Cc: Alex Deucher <alexander.deucher@amd.com>
>>
>> I have several issues with the context hint if it would directly
>> influence frequency selection in the "more power" direction.
>>
>> First of all, assume a context hint would replace the waitboost. Which
>> applications would need to set it to restore the lost performance and
>> how would they set it?
>>
>> Then I don't even think userspace necessarily knows. Think of a layer
>> like OpenCL. It doesn't really know in advance the profile of
>> submissions vs waits. It depends on the CPU vs GPU speed, so hardware
>> generation, and the actual size of the workload which can be influenced
>> by the application (or user) and not the library.
>>
>> The approach also lends itself well for the "arms race" where every
>> application can say "Me me me, I am the most important workload there is!".
> 
> since there is discussion happening in two places:
> 
> https://gitlab.freedesktop.org/drm/intel/-/issues/8014#note_1777433
> 
> What I think you might want is a ctx boost_mask which lets an app or
> driver disable certain boost signals/classes.  Where fence waits is
> one class of boost, but hypothetical other signals like touchscreen
> (or other) input events could be another class of boost.  A compute
> workload might be interested in fence wait boosts but could care less
> about input events.

I think it can only be the apps which have any chance of knowing whether
their use of a library is latency sensitive or not. Which means new
library extensions and their adoption. So I have strong reservations
about whether that route is feasible.

Or we tie it to priority, which many drivers do. Normal and above gets
the boost, while whatever lowered itself does not (akin to
SCHED_IDLE/SCHED_BATCH).
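
Building on the earlier sketch, that would be roughly the below
(illustrative only, field and helper names approximate):

static void maybe_waitboost(struct i915_request *rq)
{
	/* Only boost at normal priority and above, so clients which
	 * de-prioritised themselves (SCHED_IDLE/BATCH style) miss out. */
	if (rq->sched.attr.priority >= I915_PRIORITY_NORMAL &&
	    dma_fence_wait_count(&rq->fence))
		intel_rps_boost(rq);
}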

A related note is that we lack any external control of our scheduling
decisions, so we really do suck compared to other scheduling domains
like CPU and IO etc.

>> The last concern is for me shared with the proposal to expose deadlines
>> or high priority waits as explicit uapi knobs. Both come under the "what
>> application told us it will do" category vs what it actually does. So I
>> think it is slightly weaker than basing decisions of waits.
>>
>> The current waitboost is a bit detached from that problem because when
>> we waitboost for flips we _know_ it is an actual framebuffer in the flip
>> chain. When we waitboost for waits we also know someone is waiting. We
>> are not trusting userspace telling us this will be a buffer in the flip
>> chain or that this is a context which will have a certain duty-cycle.
>>
>> But yes, even if the input is truthful, latter is still only a
>> heuristics because nothing says all waits are important. AFAIU it just
>> happened to work well in the past.
>>
>> I do understand I am effectively arguing for more heuristics, which may
>> sound a bit against the common wisdom. This is because in general I
>> think the logic to do the right thing, be it in the driver or in the
>> firmware, can work best if it has a holistic view. Simply put it needs
>> to have more inputs to the decisions it is making.
>>
>> That is what my series is proposing - adding a common signal of "someone
>> in userspace is waiting". What happens with that signal needs not be
>> defined (promised) in the uapi contract.
>>
>> Say you route it to SLPC logic. It doesn't need to do with it what
>> legacy i915 is doing today. It just needs to do something which works
>> best for majority of workloads. It can even ignore it if that works for it.
>>
>> Finally, back to the immediate problem is when people replace the OpenCL
>> NEO driver with clvk that performance tanks. Because former does waits
>> using i915 specific ioctls and so triggers waitboost, latter waits on
>> syncobj so no waitboost and performance is bad. What short term solution
>> can we come up with? Or we say to not use clvk? I also wonder if other
>> Vulkan based stuff is perhaps missing those easy performance gains..
>>
>> Perhaps strictly speaking Rob's and mine proposal are not mutually
>> exclusive. Yes I could piggy back on his with an "immediate deadline for
>> waits" idea, but they could also be separate concepts if we concluded
>> "someone is waiting" signal is useful to have. Or it takes to long to
>> upstream the full deadline idea.
> 
> Let me re-spin my series and add the syncobj wait flag and i915 bits

I think a wait flag is questionable unless it is inverted to imply waits
which can be de-prioritized (again the same parallel with
SCHED_IDLE/BATCH). Having a flag which "makes things faster" IMO should
require elevated privilege (to avoid the "arms race"), at which point I
fear it quickly becomes uninteresting.

> adapted from your patches..  I think the basic idea of deadlines
> (which includes "I want it NOW" ;-)) isn't controversial, but the
> original idea got caught up in some bikeshed (what about compositors
> that wait on fences in userspace to decide which surfaces to update in
> the next frame), plus me getting busy and generally not having a good
> plan for how to leverage this from VM guests (which is becoming
> increasingly important for CrOS).  I think I can build on some ongoing
> virtgpu fencing improvement work to solve the latter.  But now that we
> have a 2nd use-case for this, it makes sense to respin.

Sure, I was looking at the old version already. It is interesting. But
IMO it also needs quite a bit more work to get close to what is implied
by the name of the feature. It would need proper deadline based sched
job picking, and even then drm sched is mostly just a frontend. So once
jobs are past runnable status and handed over to the backend, without
further driver work it probably wouldn't be very effective beyond very
lightly loaded systems.

Then if we fast forward to a world where schedulers perhaps become fully
deadline aware (we even had this for i915 a few years back), the
question will be whether equating waits with immediate deadlines still
works. Maybe not too well, because we wouldn't have the ability to
distinguish the "someone is waiting" signal from the otherwise
propagated deadlines.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-17 16:03         ` Tvrtko Ursulin
@ 2023-02-17 17:00           ` Rob Clark
  2023-02-17 20:45               ` Rodrigo Vivi
  2023-02-20 12:22             ` Tvrtko Ursulin
  0 siblings, 2 replies; 40+ messages in thread
From: Rob Clark @ 2023-02-17 17:00 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi

On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 17/02/2023 14:55, Rob Clark wrote:
> > On Fri, Feb 17, 2023 at 4:56 AM Tvrtko Ursulin
> > <tvrtko.ursulin@linux.intel.com> wrote:
> >>
> >>
> >> On 16/02/2023 18:19, Rodrigo Vivi wrote:
> >>> On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
> >>>> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> >>>> <tvrtko.ursulin@linux.intel.com> wrote:
> >>>>>
> >>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>>>>
> >>>>> In i915 we have this concept of "wait boosting" where we give a priority boost
> >>>>> for instance to fences which are actively waited upon from userspace. This has
> >>>>> it's pros and cons and can certainly be discussed at lenght. However fact is
> >>>>> some workloads really like it.
> >>>>>
> >>>>> Problem is that with the arrival of drm syncobj and a new userspace waiting
> >>>>> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> >>>>> this mini series really (really) quickly to see if some discussion can be had.
> >>>>>
> >>>>> It adds a concept of "wait count" to dma fence, which is incremented for every
> >>>>> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> >>>>> dma_fence_add_callback but from explicit/userspace wait paths).
> >>>>
> >>>> I was thinking about a similar thing, but in the context of dma_fence
> >>>> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> >>>> between "housekeeping" poll()ers that don't want to trigger boost but
> >>>> simply know when to do cleanup, and waiters who are waiting with some
> >>>> urgency.  I think we could use EPOLLPRI for this purpose.
> >>>>
> >>>> Not sure how that translates to waits via the syncobj.  But I think we
> >>>> want to let userspace give some hint about urgent vs housekeeping
> >>>> waits.
> >>>
> >>> Should the hint be on the waits, or should the hints be on the executed
> >>> context?
> >>>
> >>> In the end we need some way to quickly ramp-up the frequency to avoid
> >>> the execution bubbles.
> >>>
> >>> waitboost is trying to guess that, but in some cases it guess wrong
> >>> and waste power.
> >>
> >> Do we have a list of workloads which shows who benefits and who loses
> >> from the current implementation of waitboost?
> >>> btw, this is something that other drivers might need:
> >>>
> >>> https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
> >>> Cc: Alex Deucher <alexander.deucher@amd.com>
> >>
> >> I have several issues with the context hint if it would directly
> >> influence frequency selection in the "more power" direction.
> >>
> >> First of all, assume a context hint would replace the waitboost. Which
> >> applications would need to set it to restore the lost performance and
> >> how would they set it?
> >>
> >> Then I don't even think userspace necessarily knows. Think of a layer
> >> like OpenCL. It doesn't really know in advance the profile of
> >> submissions vs waits. It depends on the CPU vs GPU speed, so hardware
> >> generation, and the actual size of the workload which can be influenced
> >> by the application (or user) and not the library.
> >>
> >> The approach also lends itself well for the "arms race" where every
> >> application can say "Me me me, I am the most important workload there is!".
> >
> > since there is discussion happening in two places:
> >
> > https://gitlab.freedesktop.org/drm/intel/-/issues/8014#note_1777433
> >
> > What I think you might want is a ctx boost_mask which lets an app or
> > driver disable certain boost signals/classes.  Where fence waits is
> > one class of boost, but hypothetical other signals like touchscreen
> > (or other) input events could be another class of boost.  A compute
> > workload might be interested in fence wait boosts but could care less
> > about input events.
>
> I think it can only be apps which could have any chance knowing whether
> their use of a library is latency sensitive or not. Which means new
> library extensions and their adoption. So I have some strong reservation
> that route is feasible.
>
> Or we tie with priority which many drivers do. Normal and above gets the
> boosting and what lowered itself does not (aka SCHED_IDLE/SCHED_BATCH).

yeah, that sounds reasonable.

> Related note is that we lack any external control of our scheduling
> decisions so we really do suck compared to other scheduling domains like
> CPU and IO etc.
>
> >> The last concern is for me shared with the proposal to expose deadlines
> >> or high priority waits as explicit uapi knobs. Both come under the "what
> >> application told us it will do" category vs what it actually does. So I
> >> think it is slightly weaker than basing decisions of waits.
> >>
> >> The current waitboost is a bit detached from that problem because when
> >> we waitboost for flips we _know_ it is an actual framebuffer in the flip
> >> chain. When we waitboost for waits we also know someone is waiting. We
> >> are not trusting userspace telling us this will be a buffer in the flip
> >> chain or that this is a context which will have a certain duty-cycle.
> >>
> >> But yes, even if the input is truthful, latter is still only a
> >> heuristics because nothing says all waits are important. AFAIU it just
> >> happened to work well in the past.
> >>
> >> I do understand I am effectively arguing for more heuristics, which may
> >> sound a bit against the common wisdom. This is because in general I
> >> think the logic to do the right thing, be it in the driver or in the
> >> firmware, can work best if it has a holistic view. Simply put it needs
> >> to have more inputs to the decisions it is making.
> >>
> >> That is what my series is proposing - adding a common signal of "someone
> >> in userspace is waiting". What happens with that signal needs not be
> >> defined (promised) in the uapi contract.
> >>
> >> Say you route it to SLPC logic. It doesn't need to do with it what
> >> legacy i915 is doing today. It just needs to do something which works
> >> best for majority of workloads. It can even ignore it if that works for it.
> >>
> >> Finally, back to the immediate problem is when people replace the OpenCL
> >> NEO driver with clvk that performance tanks. Because former does waits
> >> using i915 specific ioctls and so triggers waitboost, latter waits on
> >> syncobj so no waitboost and performance is bad. What short term solution
> >> can we come up with? Or we say to not use clvk? I also wonder if other
> >> Vulkan based stuff is perhaps missing those easy performance gains..
> >>
> >> Perhaps strictly speaking Rob's and mine proposal are not mutually
> >> exclusive. Yes I could piggy back on his with an "immediate deadline for
> >> waits" idea, but they could also be separate concepts if we concluded
> >> "someone is waiting" signal is useful to have. Or it takes to long to
> >> upstream the full deadline idea.
> >
> > Let me re-spin my series and add the syncobj wait flag and i915 bits
>
> I think wait flag is questionable unless it is inverted to imply waits
> which can be de-prioritized (again same parallel with SCHED_IDLE/BATCH).
> Having a flag which "makes things faster" IMO should require elevated
> privilege (to avoid the "arms race") at which point I fear it quickly
> becomes uninteresting.

I guess you could make the argument in either direction.  Making the
default behavior ramp up clocks could be a power regression.

I also think the "arms race" scenario isn't really as much of a
problem as you think.  There aren't _that_ many things using the GPU
at the same time (compared to # of things using CPU).   And a lot of
mobile games throttle framerate to avoid draining your battery too
quickly (after all, if your battery is dead you can't keep buying loot
boxes or whatever).

> > adapted from your patches..  I think the basic idea of deadlines
> > (which includes "I want it NOW" ;-)) isn't controversial, but the
> > original idea got caught up in some bikeshed (what about compositors
> > that wait on fences in userspace to decide which surfaces to update in
> > the next frame), plus me getting busy and generally not having a good
> > plan for how to leverage this from VM guests (which is becoming
> > increasingly important for CrOS).  I think I can build on some ongoing
> > virtgpu fencing improvement work to solve the latter.  But now that we
> > have a 2nd use-case for this, it makes sense to respin.
>
> Sure, I was looking at the old version already. It is interesting. But
> also IMO needs quite a bit more work to approach achieving what is
> implied from the name of the feature. It would need proper deadline
> based sched job picking, and even then drm sched is mostly just a
> frontend. So once past runnable status and jobs handed over to backend,
> without further driver work it probably wouldn't be very effective past
> very lightly loaded systems.

Yes, but all of that is not part of dma_fence ;-)

A pretty common challenging usecase is still the single fullscreen
game, where scheduling isn't the problem, but landing at an
appropriate GPU freq absolutely is.  (UI workloads are perhaps more
interesting from a scheduler standpoint, but they generally aren't
challenging from a load/freq standpoint.)

Fwiw, the original motivation of the series was to implement something
akin to i915 pageflip boosting without having to abandon the atomic
helpers.  (And, I guess it would also let i915 preserve that feature
if it switched to atomic helpers.. I'm unsure if there are still other
things blocking i915's migration.)
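
To illustrate that motivation, with the deadline hook an atomic-helper
based driver could hand the flip target over to the commit's in-fences
along these lines (rough sketch; dma_fence_set_deadline() is the hook
from the deadline RFC, everything else is simplified):

#include <drm/drm_atomic.h>
#include <drm/drm_plane.h>
#include <linux/dma-fence.h>
#include <linux/ktime.h>

/*
 * Rough sketch: tell every in-fence of an atomic commit when the result
 * is needed, so whoever produces the fence can ramp clocks in time.
 * dma_fence_set_deadline() is the hook proposed in the deadline series.
 */
static void sketch_set_commit_deadlines(struct drm_atomic_state *state,
					ktime_t vblank_time)
{
	struct drm_plane_state *new_plane_state;
	struct drm_plane *plane;
	int i;

	for_each_new_plane_in_state(state, plane, new_plane_state, i) {
		if (new_plane_state->fence)
			dma_fence_set_deadline(new_plane_state->fence,
					       vblank_time);
	}
}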

> Then if we fast forward to a world where schedulers perhaps become fully
> deadline aware (we even had this for i915 few years back) then the
> question will be does equating waits with immediate deadlines still
> works. Maybe not too well because we wouldn't have the ability to
> distinguish between the "someone is waiting" signal from the otherwise
> propagated deadlines.

Is there any other way to handle a wait boost than expressing it as an
ASAP deadline?
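
A minimal sketch of that, reusing the hook name from the deadline RFC
(everything else simplified), would just be "deadline is now" at the
wait entry point:

#include <linux/dma-fence.h>
#include <linux/ktime.h>

/*
 * Sketch: a userspace-triggered wait expressed as an immediate deadline.
 * dma_fence_set_deadline() is the hook from the deadline RFC; the driver
 * behind the fence decides what, if anything, to do with it.
 */
static signed long sketch_boosted_wait(struct dma_fence *fence,
				       signed long timeout)
{
	dma_fence_set_deadline(fence, ktime_get());

	return dma_fence_wait_timeout(fence, true, timeout);
}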

BR,
-R

>
> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-17 17:00           ` Rob Clark
@ 2023-02-17 20:45               ` Rodrigo Vivi
  2023-02-20 12:22             ` Tvrtko Ursulin
  1 sibling, 0 replies; 40+ messages in thread
From: Rodrigo Vivi @ 2023-02-17 20:45 UTC (permalink / raw)
  To: Rob Clark; +Cc: Alex Deucher, Tvrtko Ursulin, Intel-gfx, Rob Clark, dri-devel

On Fri, Feb 17, 2023 at 09:00:49AM -0800, Rob Clark wrote:
> On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
> >
> >
> > On 17/02/2023 14:55, Rob Clark wrote:
> > > On Fri, Feb 17, 2023 at 4:56 AM Tvrtko Ursulin
> > > <tvrtko.ursulin@linux.intel.com> wrote:
> > >>
> > >>
> > >> On 16/02/2023 18:19, Rodrigo Vivi wrote:
> > >>> On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
> > >>>> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> > >>>> <tvrtko.ursulin@linux.intel.com> wrote:
> > >>>>>
> > >>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > >>>>>
> > >>>>> In i915 we have this concept of "wait boosting" where we give a priority boost
> > >>>>> for instance to fences which are actively waited upon from userspace. This has
> > >>>>> it's pros and cons and can certainly be discussed at lenght. However fact is
> > >>>>> some workloads really like it.
> > >>>>>
> > >>>>> Problem is that with the arrival of drm syncobj and a new userspace waiting
> > >>>>> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> > >>>>> this mini series really (really) quickly to see if some discussion can be had.
> > >>>>>
> > >>>>> It adds a concept of "wait count" to dma fence, which is incremented for every
> > >>>>> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> > >>>>> dma_fence_add_callback but from explicit/userspace wait paths).
> > >>>>
> > >>>> I was thinking about a similar thing, but in the context of dma_fence
> > >>>> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> > >>>> between "housekeeping" poll()ers that don't want to trigger boost but
> > >>>> simply know when to do cleanup, and waiters who are waiting with some
> > >>>> urgency.  I think we could use EPOLLPRI for this purpose.
> > >>>>
> > >>>> Not sure how that translates to waits via the syncobj.  But I think we
> > >>>> want to let userspace give some hint about urgent vs housekeeping
> > >>>> waits.
> > >>>
> > >>> Should the hint be on the waits, or should the hints be on the executed
> > >>> context?
> > >>>
> > >>> In the end we need some way to quickly ramp-up the frequency to avoid
> > >>> the execution bubbles.
> > >>>
> > >>> waitboost is trying to guess that, but in some cases it guess wrong
> > >>> and waste power.
> > >>
> > >> Do we have a list of workloads which shows who benefits and who loses
> > >> from the current implementation of waitboost?
> > >>> btw, this is something that other drivers might need:
> > >>>
> > >>> https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
> > >>> Cc: Alex Deucher <alexander.deucher@amd.com>
> > >>
> > >> I have several issues with the context hint if it would directly
> > >> influence frequency selection in the "more power" direction.
> > >>
> > >> First of all, assume a context hint would replace the waitboost. Which
> > >> applications would need to set it to restore the lost performance and
> > >> how would they set it?
> > >>
> > >> Then I don't even think userspace necessarily knows. Think of a layer
> > >> like OpenCL. It doesn't really know in advance the profile of
> > >> submissions vs waits. It depends on the CPU vs GPU speed, so hardware
> > >> generation, and the actual size of the workload which can be influenced
> > >> by the application (or user) and not the library.
> > >>
> > >> The approach also lends itself well for the "arms race" where every
> > >> application can say "Me me me, I am the most important workload there is!".
> > >
> > > since there is discussion happening in two places:
> > >
> > > https://gitlab.freedesktop.org/drm/intel/-/issues/8014#note_1777433
> > >
> > > What I think you might want is a ctx boost_mask which lets an app or
> > > driver disable certain boost signals/classes.  Where fence waits is
> > > one class of boost, but hypothetical other signals like touchscreen
> > > (or other) input events could be another class of boost.  A compute
> > > workload might be interested in fence wait boosts but could care less
> > > about input events.
> >
> > I think it can only be apps which could have any chance knowing whether
> > their use of a library is latency sensitive or not. Which means new
> > library extensions and their adoption. So I have some strong reservation
> > that route is feasible.
> >
> > Or we tie with priority which many drivers do. Normal and above gets the
> > boosting and what lowered itself does not (aka SCHED_IDLE/SCHED_BATCH).
> 
> yeah, that sounds reasonable.
> 

on that gitlab-issue discussion Emma Anholt was against using priority
to influence frequency, since that should be more about latency.

or are we talking about a different kind of priority here?

> > Related note is that we lack any external control of our scheduling
> > decisions so we really do suck compared to other scheduling domains like
> > CPU and IO etc.
> >
> > >> The last concern is for me shared with the proposal to expose deadlines
> > >> or high priority waits as explicit uapi knobs. Both come under the "what
> > >> application told us it will do" category vs what it actually does. So I
> > >> think it is slightly weaker than basing decisions of waits.
> > >>
> > >> The current waitboost is a bit detached from that problem because when
> > >> we waitboost for flips we _know_ it is an actual framebuffer in the flip
> > >> chain. When we waitboost for waits we also know someone is waiting. We
> > >> are not trusting userspace telling us this will be a buffer in the flip
> > >> chain or that this is a context which will have a certain duty-cycle.
> > >>
> > >> But yes, even if the input is truthful, latter is still only a
> > >> heuristics because nothing says all waits are important. AFAIU it just
> > >> happened to work well in the past.
> > >>
> > >> I do understand I am effectively arguing for more heuristics, which may
> > >> sound a bit against the common wisdom. This is because in general I
> > >> think the logic to do the right thing, be it in the driver or in the
> > >> firmware, can work best if it has a holistic view. Simply put it needs
> > >> to have more inputs to the decisions it is making.
> > >>
> > >> That is what my series is proposing - adding a common signal of "someone
> > >> in userspace is waiting". What happens with that signal needs not be
> > >> defined (promised) in the uapi contract.
> > >>
> > >> Say you route it to SLPC logic. It doesn't need to do with it what
> > >> legacy i915 is doing today. It just needs to do something which works
> > >> best for majority of workloads. It can even ignore it if that works for it.
> > >>
> > >> Finally, back to the immediate problem is when people replace the OpenCL
> > >> NEO driver with clvk that performance tanks. Because former does waits
> > >> using i915 specific ioctls and so triggers waitboost, latter waits on
> > >> syncobj so no waitboost and performance is bad. What short term solution
> > >> can we come up with? Or we say to not use clvk? I also wonder if other
> > >> Vulkan based stuff is perhaps missing those easy performance gains..
> > >>
> > >> Perhaps strictly speaking Rob's and mine proposal are not mutually
> > >> exclusive. Yes I could piggy back on his with an "immediate deadline for
> > >> waits" idea, but they could also be separate concepts if we concluded
> > >> "someone is waiting" signal is useful to have. Or it takes to long to
> > >> upstream the full deadline idea.
> > >
> > > Let me re-spin my series and add the syncobj wait flag and i915 bits
> >
> > I think wait flag is questionable unless it is inverted to imply waits
> > which can be de-prioritized (again same parallel with SCHED_IDLE/BATCH).
> > Having a flag which "makes things faster" IMO should require elevated
> > privilege (to avoid the "arms race") at which point I fear it quickly
> > becomes uninteresting.
> 
> I guess you could make the argument in either direction.  Making the
> default behavior ramp up clocks could be a power regression.

yeap, exactly the media / video conference case.

> 
> I also think the "arms race" scenario isn't really as much of a
> problem as you think.  There aren't _that_ many things using the GPU
> at the same time (compared to # of things using CPU).   And a lot of
> mobile games throttle framerate to avoid draining your battery too
> quickly (after all, if your battery is dead you can't keep buying loot
> boxes or whatever).

Very good point.

And in the GPU case they rely a lot on the profiles, which btw seems
to be the Radeon solution: they boost the freq if the high performance
profile is selected and don't care about the execution bubbles if the
low or mid profiles are selected, or something like that.
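
For reference, on amdgpu that profile selection is just a sysfs write,
so the userspace side of "boost only on the high performance profile"
is roughly the following (sketch; the card0 path is an assumption):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Sketch: force the DPM performance level on amdgpu.  The sysfs node is
 * what amdgpu exposes today; "card0" is assumed for the example.
 */
static int sketch_force_high_perf(void)
{
	const char *node =
		"/sys/class/drm/card0/device/power_dpm_force_performance_level";
	int fd = open(node, O_WRONLY);
	int ret = 0;

	if (fd < 0)
		return -1;

	if (write(fd, "high", strlen("high")) < 0)
		ret = -1;

	close(fd);
	return ret;
}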

> 
> > > adapted from your patches..  I think the basic idea of deadlines
> > > (which includes "I want it NOW" ;-)) isn't controversial, but the
> > > original idea got caught up in some bikeshed (what about compositors
> > > that wait on fences in userspace to decide which surfaces to update in
> > > the next frame), plus me getting busy and generally not having a good
> > > plan for how to leverage this from VM guests (which is becoming
> > > increasingly important for CrOS).  I think I can build on some ongoing
> > > virtgpu fencing improvement work to solve the latter.  But now that we
> > > have a 2nd use-case for this, it makes sense to respin.
> >
> > Sure, I was looking at the old version already. It is interesting. But
> > also IMO needs quite a bit more work to approach achieving what is
> > implied from the name of the feature. It would need proper deadline
> > based sched job picking, and even then drm sched is mostly just a
> > frontend. So once past runnable status and jobs handed over to backend,
> > without further driver work it probably wouldn't be very effective past
> > very lightly loaded systems.
> 
> Yes, but all of that is not part of dma_fence ;-)
> 
> A pretty common challenging usecase is still the single fullscreen
> game, where scheduling isn't the problem, but landing at an
> appropriate GPU freq absolutely is.  (UI workloads are perhaps more
> interesting from a scheduler standpoint, but they generally aren't
> challenging from a load/freq standpoint.)
> 
> Fwiw, the original motivation of the series was to implement something
> akin to i915 pageflip boosting without having to abandon the atomic
> helpers.  (And, I guess it would also let i915 preserve that feature
> if it switched to atomic helpers.. I'm unsure if there are still other
> things blocking i915's migration.)
> 
> > Then if we fast forward to a world where schedulers perhaps become fully
> > deadline aware (we even had this for i915 few years back) then the
> > question will be does equating waits with immediate deadlines still
> > works. Maybe not too well because we wouldn't have the ability to
> > distinguish between the "someone is waiting" signal from the otherwise
> > propagated deadlines.
> 
> Is there any other way to handle a wait boost than expressing it as an
> ASAP deadline?
> 
> BR,
> -R
> 
> >
> > Regards,
> >
> > Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-17 20:45               ` Rodrigo Vivi
@ 2023-02-17 23:38                 ` Rob Clark
  -1 siblings, 0 replies; 40+ messages in thread
From: Rob Clark @ 2023-02-17 23:38 UTC (permalink / raw)
  To: Rodrigo Vivi
  Cc: Alex Deucher, Tvrtko Ursulin, Intel-gfx, Rob Clark, dri-devel

On Fri, Feb 17, 2023 at 12:45 PM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
>
> On Fri, Feb 17, 2023 at 09:00:49AM -0800, Rob Clark wrote:
> > On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
> > <tvrtko.ursulin@linux.intel.com> wrote:
> > >
> > >
> > > On 17/02/2023 14:55, Rob Clark wrote:
> > > > On Fri, Feb 17, 2023 at 4:56 AM Tvrtko Ursulin
> > > > <tvrtko.ursulin@linux.intel.com> wrote:
> > > >>
> > > >>
> > > >> On 16/02/2023 18:19, Rodrigo Vivi wrote:
> > > >>> On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
> > > >>>> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> > > >>>> <tvrtko.ursulin@linux.intel.com> wrote:
> > > >>>>>
> > > >>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > >>>>>
> > > >>>>> In i915 we have this concept of "wait boosting" where we give a priority boost
> > > >>>>> for instance to fences which are actively waited upon from userspace. This has
> > > >>>>> it's pros and cons and can certainly be discussed at lenght. However fact is
> > > >>>>> some workloads really like it.
> > > >>>>>
> > > >>>>> Problem is that with the arrival of drm syncobj and a new userspace waiting
> > > >>>>> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> > > >>>>> this mini series really (really) quickly to see if some discussion can be had.
> > > >>>>>
> > > >>>>> It adds a concept of "wait count" to dma fence, which is incremented for every
> > > >>>>> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> > > >>>>> dma_fence_add_callback but from explicit/userspace wait paths).
> > > >>>>
> > > >>>> I was thinking about a similar thing, but in the context of dma_fence
> > > >>>> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> > > >>>> between "housekeeping" poll()ers that don't want to trigger boost but
> > > >>>> simply know when to do cleanup, and waiters who are waiting with some
> > > >>>> urgency.  I think we could use EPOLLPRI for this purpose.
> > > >>>>
> > > >>>> Not sure how that translates to waits via the syncobj.  But I think we
> > > >>>> want to let userspace give some hint about urgent vs housekeeping
> > > >>>> waits.
> > > >>>
> > > >>> Should the hint be on the waits, or should the hints be on the executed
> > > >>> context?
> > > >>>
> > > >>> In the end we need some way to quickly ramp-up the frequency to avoid
> > > >>> the execution bubbles.
> > > >>>
> > > >>> waitboost is trying to guess that, but in some cases it guess wrong
> > > >>> and waste power.
> > > >>
> > > >> Do we have a list of workloads which shows who benefits and who loses
> > > >> from the current implementation of waitboost?
> > > >>> btw, this is something that other drivers might need:
> > > >>>
> > > >>> https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
> > > >>> Cc: Alex Deucher <alexander.deucher@amd.com>
> > > >>
> > > >> I have several issues with the context hint if it would directly
> > > >> influence frequency selection in the "more power" direction.
> > > >>
> > > >> First of all, assume a context hint would replace the waitboost. Which
> > > >> applications would need to set it to restore the lost performance and
> > > >> how would they set it?
> > > >>
> > > >> Then I don't even think userspace necessarily knows. Think of a layer
> > > >> like OpenCL. It doesn't really know in advance the profile of
> > > >> submissions vs waits. It depends on the CPU vs GPU speed, so hardware
> > > >> generation, and the actual size of the workload which can be influenced
> > > >> by the application (or user) and not the library.
> > > >>
> > > >> The approach also lends itself well for the "arms race" where every
> > > >> application can say "Me me me, I am the most important workload there is!".
> > > >
> > > > since there is discussion happening in two places:
> > > >
> > > > https://gitlab.freedesktop.org/drm/intel/-/issues/8014#note_1777433
> > > >
> > > > What I think you might want is a ctx boost_mask which lets an app or
> > > > driver disable certain boost signals/classes.  Where fence waits is
> > > > one class of boost, but hypothetical other signals like touchscreen
> > > > (or other) input events could be another class of boost.  A compute
> > > > workload might be interested in fence wait boosts but could care less
> > > > about input events.
> > >
> > > I think it can only be apps which could have any chance knowing whether
> > > their use of a library is latency sensitive or not. Which means new
> > > library extensions and their adoption. So I have some strong reservation
> > > that route is feasible.
> > >
> > > Or we tie with priority which many drivers do. Normal and above gets the
> > > boosting and what lowered itself does not (aka SCHED_IDLE/SCHED_BATCH).
> >
> > yeah, that sounds reasonable.
> >
>
> on that gitlab-issue discussion Emma Anholt was against using the priority
> to influence frequency since that should be more about latency.
>
> or we are talking about something different priority here?

I was thinking to only _not_ boost on the lowest priority, but boost
on norm/high priority.

But not something I feel too strongly about.  Ie. deciding on policy
doesn't affect or need to block getting the dma_fence and syncobj
plumbing in place.
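
A minimal sketch of that policy (all names made up for illustration;
the only point is that explicitly self-demoted contexts opt out):

#include <stdbool.h>

/*
 * Sketch only - names are made up for illustration.  Explicitly
 * self-demoted (SCHED_IDLE/SCHED_BATCH style) contexts skip the boost,
 * normal and high priority contexts get it.
 */
enum sketch_ctx_priority {
	SKETCH_PRIORITY_LOW,	/* explicitly demoted by the app: no boost */
	SKETCH_PRIORITY_NORMAL,
	SKETCH_PRIORITY_HIGH,
};

struct sketch_ctx {
	enum sketch_ctx_priority priority;
};

static bool sketch_should_waitboost(const struct sketch_ctx *ctx)
{
	return ctx->priority > SKETCH_PRIORITY_LOW;
}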

BR,
-R

> > > Related note is that we lack any external control of our scheduling
> > > decisions so we really do suck compared to other scheduling domains like
> > > CPU and IO etc.
> > >
> > > >> The last concern is for me shared with the proposal to expose deadlines
> > > >> or high priority waits as explicit uapi knobs. Both come under the "what
> > > >> application told us it will do" category vs what it actually does. So I
> > > >> think it is slightly weaker than basing decisions of waits.
> > > >>
> > > >> The current waitboost is a bit detached from that problem because when
> > > >> we waitboost for flips we _know_ it is an actual framebuffer in the flip
> > > >> chain. When we waitboost for waits we also know someone is waiting. We
> > > >> are not trusting userspace telling us this will be a buffer in the flip
> > > >> chain or that this is a context which will have a certain duty-cycle.
> > > >>
> > > >> But yes, even if the input is truthful, latter is still only a
> > > >> heuristics because nothing says all waits are important. AFAIU it just
> > > >> happened to work well in the past.
> > > >>
> > > >> I do understand I am effectively arguing for more heuristics, which may
> > > >> sound a bit against the common wisdom. This is because in general I
> > > >> think the logic to do the right thing, be it in the driver or in the
> > > >> firmware, can work best if it has a holistic view. Simply put it needs
> > > >> to have more inputs to the decisions it is making.
> > > >>
> > > >> That is what my series is proposing - adding a common signal of "someone
> > > >> in userspace is waiting". What happens with that signal needs not be
> > > >> defined (promised) in the uapi contract.
> > > >>
> > > >> Say you route it to SLPC logic. It doesn't need to do with it what
> > > >> legacy i915 is doing today. It just needs to do something which works
> > > >> best for majority of workloads. It can even ignore it if that works for it.
> > > >>
> > > >> Finally, back to the immediate problem is when people replace the OpenCL
> > > >> NEO driver with clvk that performance tanks. Because former does waits
> > > >> using i915 specific ioctls and so triggers waitboost, latter waits on
> > > >> syncobj so no waitboost and performance is bad. What short term solution
> > > >> can we come up with? Or we say to not use clvk? I also wonder if other
> > > >> Vulkan based stuff is perhaps missing those easy performance gains..
> > > >>
> > > >> Perhaps strictly speaking Rob's and mine proposal are not mutually
> > > >> exclusive. Yes I could piggy back on his with an "immediate deadline for
> > > >> waits" idea, but they could also be separate concepts if we concluded
> > > >> "someone is waiting" signal is useful to have. Or it takes to long to
> > > >> upstream the full deadline idea.
> > > >
> > > > Let me re-spin my series and add the syncobj wait flag and i915 bits
> > >
> > > I think wait flag is questionable unless it is inverted to imply waits
> > > which can be de-prioritized (again same parallel with SCHED_IDLE/BATCH).
> > > Having a flag which "makes things faster" IMO should require elevated
> > > privilege (to avoid the "arms race") at which point I fear it quickly
> > > becomes uninteresting.
> >
> > I guess you could make the argument in either direction.  Making the
> > default behavior ramp up clocks could be a power regression.
>
> yeap, exactly the media / video conference case.
>
> >
> > I also think the "arms race" scenario isn't really as much of a
> > problem as you think.  There aren't _that_ many things using the GPU
> > at the same time (compared to # of things using CPU).   And a lot of
> > mobile games throttle framerate to avoid draining your battery too
> > quickly (after all, if your battery is dead you can't keep buying loot
> > boxes or whatever).
>
> Very good point.
>
> And in the GPU case they rely a lot on the profiles. Which btw, seems
> to be the Radeon solution. They boost the freq if the high performance
> profile is selected and don't care about the execution bubbles if low
> or mid profiles are selected, or something like that.
>
> >
> > > > adapted from your patches..  I think the basic idea of deadlines
> > > > (which includes "I want it NOW" ;-)) isn't controversial, but the
> > > > original idea got caught up in some bikeshed (what about compositors
> > > > that wait on fences in userspace to decide which surfaces to update in
> > > > the next frame), plus me getting busy and generally not having a good
> > > > plan for how to leverage this from VM guests (which is becoming
> > > > increasingly important for CrOS).  I think I can build on some ongoing
> > > > virtgpu fencing improvement work to solve the latter.  But now that we
> > > > have a 2nd use-case for this, it makes sense to respin.
> > >
> > > Sure, I was looking at the old version already. It is interesting. But
> > > also IMO needs quite a bit more work to approach achieving what is
> > > implied from the name of the feature. It would need proper deadline
> > > based sched job picking, and even then drm sched is mostly just a
> > > frontend. So once past runnable status and jobs handed over to backend,
> > > without further driver work it probably wouldn't be very effective past
> > > very lightly loaded systems.
> >
> > Yes, but all of that is not part of dma_fence ;-)
> >
> > A pretty common challenging usecase is still the single fullscreen
> > game, where scheduling isn't the problem, but landing at an
> > appropriate GPU freq absolutely is.  (UI workloads are perhaps more
> > interesting from a scheduler standpoint, but they generally aren't
> > challenging from a load/freq standpoint.)
> >
> > Fwiw, the original motivation of the series was to implement something
> > akin to i915 pageflip boosting without having to abandon the atomic
> > helpers.  (And, I guess it would also let i915 preserve that feature
> > if it switched to atomic helpers.. I'm unsure if there are still other
> > things blocking i915's migration.)
> >
> > > Then if we fast forward to a world where schedulers perhaps become fully
> > > deadline aware (we even had this for i915 few years back) then the
> > > question will be does equating waits with immediate deadlines still
> > > works. Maybe not too well because we wouldn't have the ability to
> > > distinguish between the "someone is waiting" signal from the otherwise
> > > propagated deadlines.
> >
> > Is there any other way to handle a wait boost than expressing it as an
> > ASAP deadline?
> >
> > BR,
> > -R
> >
> > >
> > > Regards,
> > >
> > > Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-17 20:45               ` Rodrigo Vivi
@ 2023-02-20 11:33               ` Tvrtko Ursulin
  2023-02-20 15:52                 ` Rob Clark
  -1 siblings, 1 reply; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-20 11:33 UTC (permalink / raw)
  To: Rodrigo Vivi, Rob Clark; +Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel


On 17/02/2023 20:45, Rodrigo Vivi wrote:
> On Fri, Feb 17, 2023 at 09:00:49AM -0800, Rob Clark wrote:
>> On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>
>>>
>>> On 17/02/2023 14:55, Rob Clark wrote:
>>>> On Fri, Feb 17, 2023 at 4:56 AM Tvrtko Ursulin
>>>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>>>
>>>>>
>>>>> On 16/02/2023 18:19, Rodrigo Vivi wrote:
>>>>>> On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
>>>>>>> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
>>>>>>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>>>>>>
>>>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>>>>>>
>>>>>>>> In i915 we have this concept of "wait boosting" where we give a priority boost
>>>>>>>> for instance to fences which are actively waited upon from userspace. This has
>>>>>>>> it's pros and cons and can certainly be discussed at lenght. However fact is
>>>>>>>> some workloads really like it.
>>>>>>>>
>>>>>>>> Problem is that with the arrival of drm syncobj and a new userspace waiting
>>>>>>>> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
>>>>>>>> this mini series really (really) quickly to see if some discussion can be had.
>>>>>>>>
>>>>>>>> It adds a concept of "wait count" to dma fence, which is incremented for every
>>>>>>>> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
>>>>>>>> dma_fence_add_callback but from explicit/userspace wait paths).
>>>>>>>
>>>>>>> I was thinking about a similar thing, but in the context of dma_fence
>>>>>>> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
>>>>>>> between "housekeeping" poll()ers that don't want to trigger boost but
>>>>>>> simply know when to do cleanup, and waiters who are waiting with some
>>>>>>> urgency.  I think we could use EPOLLPRI for this purpose.
>>>>>>>
>>>>>>> Not sure how that translates to waits via the syncobj.  But I think we
>>>>>>> want to let userspace give some hint about urgent vs housekeeping
>>>>>>> waits.
>>>>>>
>>>>>> Should the hint be on the waits, or should the hints be on the executed
>>>>>> context?
>>>>>>
>>>>>> In the end we need some way to quickly ramp-up the frequency to avoid
>>>>>> the execution bubbles.
>>>>>>
>>>>>> waitboost is trying to guess that, but in some cases it guess wrong
>>>>>> and waste power.
>>>>>
>>>>> Do we have a list of workloads which shows who benefits and who loses
>>>>> from the current implementation of waitboost?
>>>>>> btw, this is something that other drivers might need:
>>>>>>
>>>>>> https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
>>>>>> Cc: Alex Deucher <alexander.deucher@amd.com>
>>>>>
>>>>> I have several issues with the context hint if it would directly
>>>>> influence frequency selection in the "more power" direction.
>>>>>
>>>>> First of all, assume a context hint would replace the waitboost. Which
>>>>> applications would need to set it to restore the lost performance and
>>>>> how would they set it?
>>>>>
>>>>> Then I don't even think userspace necessarily knows. Think of a layer
>>>>> like OpenCL. It doesn't really know in advance the profile of
>>>>> submissions vs waits. It depends on the CPU vs GPU speed, so hardware
>>>>> generation, and the actual size of the workload which can be influenced
>>>>> by the application (or user) and not the library.
>>>>>
>>>>> The approach also lends itself well for the "arms race" where every
>>>>> application can say "Me me me, I am the most important workload there is!".
>>>>
>>>> since there is discussion happening in two places:
>>>>
>>>> https://gitlab.freedesktop.org/drm/intel/-/issues/8014#note_1777433
>>>>
>>>> What I think you might want is a ctx boost_mask which lets an app or
>>>> driver disable certain boost signals/classes.  Where fence waits is
>>>> one class of boost, but hypothetical other signals like touchscreen
>>>> (or other) input events could be another class of boost.  A compute
>>>> workload might be interested in fence wait boosts but could care less
>>>> about input events.
>>>
>>> I think it can only be apps which could have any chance knowing whether
>>> their use of a library is latency sensitive or not. Which means new
>>> library extensions and their adoption. So I have some strong reservation
>>> that route is feasible.
>>>
>>> Or we tie with priority which many drivers do. Normal and above gets the
>>> boosting and what lowered itself does not (aka SCHED_IDLE/SCHED_BATCH).
>>
>> yeah, that sounds reasonable.
>>
> 
> on that gitlab-issue discussion Emma Anholt was against using the priority
> to influence frequency since that should be more about latency.
> 
> or we are talking about something different priority here?

As Rob already explained - I was suggesting skipping waitboost for 
contexts which explicitly made themselves low priority. I don't see a 
controversial angle there.
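
For completeness, "explicitly made themselves low priority" is already
expressible with the existing uapi, roughly the below from the
application side; gating waitboost on it would be the new part:

#include <xf86drm.h>
#include <drm/i915_drm.h>

/*
 * Sketch: an application demoting its own context with the existing
 * SETPARAM uapi.  Skipping waitboost for such contexts is the proposal
 * discussed here, not current behaviour.
 */
static int sketch_demote_context(int fd, __u32 ctx_id)
{
	struct drm_i915_gem_context_param param = {
		.ctx_id = ctx_id,
		.param = I915_CONTEXT_PARAM_PRIORITY,
		.value = I915_CONTEXT_MIN_USER_PRIORITY,
	};

	return drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &param);
}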

>>> Related note is that we lack any external control of our scheduling
>>> decisions so we really do suck compared to other scheduling domains like
>>> CPU and IO etc.
>>>
>>>>> The last concern is for me shared with the proposal to expose deadlines
>>>>> or high priority waits as explicit uapi knobs. Both come under the "what
>>>>> application told us it will do" category vs what it actually does. So I
>>>>> think it is slightly weaker than basing decisions of waits.
>>>>>
>>>>> The current waitboost is a bit detached from that problem because when
>>>>> we waitboost for flips we _know_ it is an actual framebuffer in the flip
>>>>> chain. When we waitboost for waits we also know someone is waiting. We
>>>>> are not trusting userspace telling us this will be a buffer in the flip
>>>>> chain or that this is a context which will have a certain duty-cycle.
>>>>>
>>>>> But yes, even if the input is truthful, latter is still only a
>>>>> heuristics because nothing says all waits are important. AFAIU it just
>>>>> happened to work well in the past.
>>>>>
>>>>> I do understand I am effectively arguing for more heuristics, which may
>>>>> sound a bit against the common wisdom. This is because in general I
>>>>> think the logic to do the right thing, be it in the driver or in the
>>>>> firmware, can work best if it has a holistic view. Simply put it needs
>>>>> to have more inputs to the decisions it is making.
>>>>>
>>>>> That is what my series is proposing - adding a common signal of "someone
>>>>> in userspace is waiting". What happens with that signal needs not be
>>>>> defined (promised) in the uapi contract.
>>>>>
>>>>> Say you route it to SLPC logic. It doesn't need to do with it what
>>>>> legacy i915 is doing today. It just needs to do something which works
>>>>> best for majority of workloads. It can even ignore it if that works for it.
>>>>>
>>>>> Finally, back to the immediate problem is when people replace the OpenCL
>>>>> NEO driver with clvk that performance tanks. Because former does waits
>>>>> using i915 specific ioctls and so triggers waitboost, latter waits on
>>>>> syncobj so no waitboost and performance is bad. What short term solution
>>>>> can we come up with? Or we say to not use clvk? I also wonder if other
>>>>> Vulkan based stuff is perhaps missing those easy performance gains..
>>>>>
>>>>> Perhaps strictly speaking Rob's and mine proposal are not mutually
>>>>> exclusive. Yes I could piggy back on his with an "immediate deadline for
>>>>> waits" idea, but they could also be separate concepts if we concluded
>>>>> "someone is waiting" signal is useful to have. Or it takes to long to
>>>>> upstream the full deadline idea.
>>>>
>>>> Let me re-spin my series and add the syncobj wait flag and i915 bits
>>>
>>> I think wait flag is questionable unless it is inverted to imply waits
>>> which can be de-prioritized (again same parallel with SCHED_IDLE/BATCH).
>>> Having a flag which "makes things faster" IMO should require elevated
>>> privilege (to avoid the "arms race") at which point I fear it quickly
>>> becomes uninteresting.
>>
>> I guess you could make the argument in either direction.  Making the
>> default behavior ramp up clocks could be a power regression.
> 
> yeap, exactly the media / video conference case.

Yeah I agree. And as not all media use cases are the same, and not all 
compute contexts either, someone somewhere will need to run a series of 
workloads for power and performance numbers. Ideally that someone would 
be the entity for which it makes sense to look at all use cases, from 
server room to client, covering 3d, media and compute for both. If we 
could get the capability to run this in some automated fashion, akin to 
CI, we would even have a chance to keep making good decisions in the 
future.

Or we do some one-off testing for this instance, but we still need a 
range of workloads and parts to do it properly..

>> I also think the "arms race" scenario isn't really as much of a
>> problem as you think.  There aren't _that_ many things using the GPU
>> at the same time (compared to # of things using CPU).   And a lot of
>> mobile games throttle framerate to avoid draining your battery too
>> quickly (after all, if your battery is dead you can't keep buying loot
>> boxes or whatever).
> 
> Very good point.

On this one I still disagree, from the point of view that it does not 
make for good uapi if we allow everyone to select themselves for 
priority handling (in one flavour or the other).

> And in the GPU case they rely a lot on the profiles. Which btw, seems
> to be the Radeon solution. They boost the freq if the high performance
> profile is selected and don't care about the execution bubbles if low
> or mid profiles are selected, or something like that.

Profile as something which controls the waitboost globally? What would 
be the mechanism for communicating it to the driver?

Also, how would that reconcile the fact that waitboost harms some 
workloads but helps others? If for the latter it not only improves 
performance but also efficiency, then assuming a "battery" profile must 
mean "waitboost off" would be leaving battery life on the table. 
Conversely, if "on a/c - max performance" meant a global "waitboost on", 
it could even be possible it wouldn't always be truly best performance 
if it causes thermal throttling.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-17 17:00           ` Rob Clark
  2023-02-17 20:45               ` Rodrigo Vivi
@ 2023-02-20 12:22             ` Tvrtko Ursulin
  2023-02-20 15:45               ` Rob Clark
  1 sibling, 1 reply; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-20 12:22 UTC (permalink / raw)
  To: Rob Clark; +Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi


On 17/02/2023 17:00, Rob Clark wrote:
> On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:

[snip]

>>> adapted from your patches..  I think the basic idea of deadlines
>>> (which includes "I want it NOW" ;-)) isn't controversial, but the
>>> original idea got caught up in some bikeshed (what about compositors
>>> that wait on fences in userspace to decide which surfaces to update in
>>> the next frame), plus me getting busy and generally not having a good
>>> plan for how to leverage this from VM guests (which is becoming
>>> increasingly important for CrOS).  I think I can build on some ongoing
>>> virtgpu fencing improvement work to solve the latter.  But now that we
>>> have a 2nd use-case for this, it makes sense to respin.
>>
>> Sure, I was looking at the old version already. It is interesting. But
>> also IMO needs quite a bit more work to approach achieving what is
>> implied from the name of the feature. It would need proper deadline
>> based sched job picking, and even then drm sched is mostly just a
>> frontend. So once past runnable status and jobs handed over to backend,
>> without further driver work it probably wouldn't be very effective past
>> very lightly loaded systems.
> 
> Yes, but all of that is not part of dma_fence ;-)

:) Okay.

Having said that, do we need a step back to think about whether adding 
deadlines to dma-fences is not turning them into something too different 
from what they were? Going from a pure synchronisation primitive more 
towards scheduling paradigms. Just to brainstorm whether there would be 
any unintended consequences. I should mention this in your RFC thread 
actually.

> A pretty common challenging usecase is still the single fullscreen
> game, where scheduling isn't the problem, but landing at an
> appropriate GPU freq absolutely is.  (UI workloads are perhaps more
> interesting from a scheduler standpoint, but they generally aren't
> challenging from a load/freq standpoint.)

Challenging as in picking the right operating point? It might be 
latency impacted (and so user-perceived UI smoothness) due to the 
missing waitboost for anything syncobj related. I don't know if 
anything to measure that exists currently though. Assuming it is 
measurable, then the question would be whether it is perceivable.

> Fwiw, the original motivation of the series was to implement something
> akin to i915 pageflip boosting without having to abandon the atomic
> helpers.  (And, I guess it would also let i915 preserve that feature
> if it switched to atomic helpers.. I'm unsure if there are still other
> things blocking i915's migration.)

Question for display folks I guess.

>> Then if we fast forward to a world where schedulers perhaps become fully
>> deadline aware (we even had this for i915 few years back) then the
>> question will be does equating waits with immediate deadlines still
>> works. Maybe not too well because we wouldn't have the ability to
>> distinguish between the "someone is waiting" signal from the otherwise
>> propagated deadlines.
> 
> Is there any other way to handle a wait boost than expressing it as an
> ASAP deadline?

A leading question or just a question? Nothing springs to my mind at the 
moment.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-20 12:22             ` Tvrtko Ursulin
@ 2023-02-20 15:45               ` Rob Clark
  2023-02-20 15:56                 ` Tvrtko Ursulin
  0 siblings, 1 reply; 40+ messages in thread
From: Rob Clark @ 2023-02-20 15:45 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi

On Mon, Feb 20, 2023 at 4:22 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 17/02/2023 17:00, Rob Clark wrote:
> > On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
> > <tvrtko.ursulin@linux.intel.com> wrote:
>
> [snip]
>
> >>> adapted from your patches..  I think the basic idea of deadlines
> >>> (which includes "I want it NOW" ;-)) isn't controversial, but the
> >>> original idea got caught up in some bikeshed (what about compositors
> >>> that wait on fences in userspace to decide which surfaces to update in
> >>> the next frame), plus me getting busy and generally not having a good
> >>> plan for how to leverage this from VM guests (which is becoming
> >>> increasingly important for CrOS).  I think I can build on some ongoing
> >>> virtgpu fencing improvement work to solve the latter.  But now that we
> >>> have a 2nd use-case for this, it makes sense to respin.
> >>
> >> Sure, I was looking at the old version already. It is interesting. But
> >> also IMO needs quite a bit more work to approach achieving what is
> >> implied from the name of the feature. It would need proper deadline
> >> based sched job picking, and even then drm sched is mostly just a
> >> frontend. So once past runnable status and jobs handed over to backend,
> >> without further driver work it probably wouldn't be very effective past
> >> very lightly loaded systems.
> >
> > Yes, but all of that is not part of dma_fence ;-)
>
> :) Okay.
>
> Having said that, do we need a step back to think about whether adding
> deadline to dma-fences is not making them something too much different
> to what they were? Going from purely synchronisation primitive more
> towards scheduling paradigms. Just to brainstorm if there will not be
> any unintended consequences. I should mention this in your RFC thread
> actually.

Perhaps "deadline" isn't quite the right name, but I haven't thought
of anything better.  It is really a hint to the fence signaller about
how soon it is interested in a result so the driver can factor that
into freq scaling decisions.  Maybe "goal" or some other term would be
better?
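
To make it concrete, the shape of the thing is roughly this - a sketch
only, assuming the helper ends up looking like the dma_fence_set_deadline()
from my RFC rather than settled uapi:

/*
 * Explicit userspace wait path: attach an "I need this ASAP" hint to
 * the fence so the signalling driver can factor it into freq scaling.
 */
static void hint_urgent_wait(struct dma_fence *fence)
{
	dma_fence_set_deadline(fence, ktime_get());
}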

I guess that can factor into scheduling decisions as well.. but we
already have priority for that.  My main interest is freq mgmt.

(Thankfully we don't have performance and efficiency cores to worry
about, like CPUs ;-))

> > A pretty common challenging usecase is still the single fullscreen
> > game, where scheduling isn't the problem, but landing at an
> > appropriate GPU freq absolutely is.  (UI workloads are perhaps more
> > interesting from a scheduler standpoint, but they generally aren't
> > challenging from a load/freq standpoint.)
>
> Challenging as in picking the right operating point? Might be latency
> impacted (and so user perceived UI smoothness) due missing waitboost for
> anything syncobj related. I don't know if anything to measure that
> exists currently though. Assuming it is measurable then the question
> would be is it perceivable.
> > Fwiw, the original motivation of the series was to implement something
> > akin to i915 pageflip boosting without having to abandon the atomic
> > helpers.  (And, I guess it would also let i915 preserve that feature
> > if it switched to atomic helpers.. I'm unsure if there are still other
> > things blocking i915's migration.)
>
> Question for display folks I guess.
>
> >> Then if we fast forward to a world where schedulers perhaps become fully
> >> deadline aware (we even had this for i915 few years back) then the
> >> question will be does equating waits with immediate deadlines still
> >> works. Maybe not too well because we wouldn't have the ability to
> >> distinguish between the "someone is waiting" signal from the otherwise
> >> propagated deadlines.
> >
> > Is there any other way to handle a wait boost than expressing it as an
> > ASAP deadline?
>
> A leading question or just a question? Nothing springs to my mind at the
> moment.

Just a question.  The immediate deadline is the only thing that makes
sense to me, but that could be because I'm looking at it from the
perspective of also trying to handle the case where missing a vblank
reduces utilization and provides the wrong signal to gpufreq.. i915
already has a way to handle this internally, but it involves bypassing
the atomic helpers, which isn't a thing I want to encourage other
drivers to do.  And it completely doesn't work for situations where the
gpu and display are separate devices.
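
For the pageflip case I'm thinking of roughly this - a sketch only,
assuming the atomic commit path could annotate its in-fences with the
target vblank time, helper name made up for illustration:

/*
 * Sketch only: tell the GPU driver when the frame is actually needed,
 * instead of letting a missed vblank look like low load to gpufreq.
 */
static void annotate_plane_fence(struct drm_plane_state *new_state,
				 ktime_t vblank_time)
{
	if (new_state->fence)
		dma_fence_set_deadline(new_state->fence, vblank_time);
}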

BR,
-R

> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-20 11:33               ` Tvrtko Ursulin
@ 2023-02-20 15:52                 ` Rob Clark
  2023-02-20 16:44                   ` Tvrtko Ursulin
  0 siblings, 1 reply; 40+ messages in thread
From: Rob Clark @ 2023-02-20 15:52 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi

On Mon, Feb 20, 2023 at 3:33 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 17/02/2023 20:45, Rodrigo Vivi wrote:
> > On Fri, Feb 17, 2023 at 09:00:49AM -0800, Rob Clark wrote:
> >> On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
> >> <tvrtko.ursulin@linux.intel.com> wrote:
> >>>
> >>>
> >>> On 17/02/2023 14:55, Rob Clark wrote:
> >>>> On Fri, Feb 17, 2023 at 4:56 AM Tvrtko Ursulin
> >>>> <tvrtko.ursulin@linux.intel.com> wrote:
> >>>>>
> >>>>>
> >>>>> On 16/02/2023 18:19, Rodrigo Vivi wrote:
> >>>>>> On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
> >>>>>>> On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
> >>>>>>> <tvrtko.ursulin@linux.intel.com> wrote:
> >>>>>>>>
> >>>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>>>>>>>
> >>>>>>>> In i915 we have this concept of "wait boosting" where we give a priority boost
> >>>>>>>> for instance to fences which are actively waited upon from userspace. This has
> >>>>>>>> it's pros and cons and can certainly be discussed at lenght. However fact is
> >>>>>>>> some workloads really like it.
> >>>>>>>>
> >>>>>>>> Problem is that with the arrival of drm syncobj and a new userspace waiting
> >>>>>>>> entry point it added, the waitboost mechanism was bypassed. Hence I cooked up
> >>>>>>>> this mini series really (really) quickly to see if some discussion can be had.
> >>>>>>>>
> >>>>>>>> It adds a concept of "wait count" to dma fence, which is incremented for every
> >>>>>>>> explicit dma_fence_enable_sw_signaling and dma_fence_add_wait_callback (like
> >>>>>>>> dma_fence_add_callback but from explicit/userspace wait paths).
> >>>>>>>
> >>>>>>> I was thinking about a similar thing, but in the context of dma_fence
> >>>>>>> (or rather sync_file) fd poll()ing.  How does the kernel differentiate
> >>>>>>> between "housekeeping" poll()ers that don't want to trigger boost but
> >>>>>>> simply know when to do cleanup, and waiters who are waiting with some
> >>>>>>> urgency.  I think we could use EPOLLPRI for this purpose.
> >>>>>>>
> >>>>>>> Not sure how that translates to waits via the syncobj.  But I think we
> >>>>>>> want to let userspace give some hint about urgent vs housekeeping
> >>>>>>> waits.
> >>>>>>
> >>>>>> Should the hint be on the waits, or should the hints be on the executed
> >>>>>> context?
> >>>>>>
> >>>>>> In the end we need some way to quickly ramp-up the frequency to avoid
> >>>>>> the execution bubbles.
> >>>>>>
> >>>>>> waitboost is trying to guess that, but in some cases it guess wrong
> >>>>>> and waste power.
> >>>>>
> >>>>> Do we have a list of workloads which shows who benefits and who loses
> >>>>> from the current implementation of waitboost?
> >>>>>> btw, this is something that other drivers might need:
> >>>>>>
> >>>>>> https://gitlab.freedesktop.org/drm/amd/-/issues/1500#note_825883
> >>>>>> Cc: Alex Deucher <alexander.deucher@amd.com>
> >>>>>
> >>>>> I have several issues with the context hint if it would directly
> >>>>> influence frequency selection in the "more power" direction.
> >>>>>
> >>>>> First of all, assume a context hint would replace the waitboost. Which
> >>>>> applications would need to set it to restore the lost performance and
> >>>>> how would they set it?
> >>>>>
> >>>>> Then I don't even think userspace necessarily knows. Think of a layer
> >>>>> like OpenCL. It doesn't really know in advance the profile of
> >>>>> submissions vs waits. It depends on the CPU vs GPU speed, so hardware
> >>>>> generation, and the actual size of the workload which can be influenced
> >>>>> by the application (or user) and not the library.
> >>>>>
> >>>>> The approach also lends itself well for the "arms race" where every
> >>>>> application can say "Me me me, I am the most important workload there is!".
> >>>>
> >>>> since there is discussion happening in two places:
> >>>>
> >>>> https://gitlab.freedesktop.org/drm/intel/-/issues/8014#note_1777433
> >>>>
> >>>> What I think you might want is a ctx boost_mask which lets an app or
> >>>> driver disable certain boost signals/classes.  Where fence waits is
> >>>> one class of boost, but hypothetical other signals like touchscreen
> >>>> (or other) input events could be another class of boost.  A compute
> >>>> workload might be interested in fence wait boosts but could care less
> >>>> about input events.
> >>>
> >>> I think it can only be apps which could have any chance knowing whether
> >>> their use of a library is latency sensitive or not. Which means new
> >>> library extensions and their adoption. So I have some strong reservation
> >>> that route is feasible.
> >>>
> >>> Or we tie with priority which many drivers do. Normal and above gets the
> >>> boosting and what lowered itself does not (aka SCHED_IDLE/SCHED_BATCH).
> >>
> >> yeah, that sounds reasonable.
> >>
> >
> > on that gitlab-issue discussion Emma Anholt was against using the priority
> > to influence frequency since that should be more about latency.
> >
> > or we are talking about something different priority here?
>
> As Rob already explained - I was suggesting skipping waitboost for
> contexts which explicitly made themselves low priority. I don't see a
> controversial angle there.
>
> >>> Related note is that we lack any external control of our scheduling
> >>> decisions so we really do suck compared to other scheduling domains like
> >>> CPU and IO etc.
> >>>
> >>>>> The last concern is for me shared with the proposal to expose deadlines
> >>>>> or high priority waits as explicit uapi knobs. Both come under the "what
> >>>>> application told us it will do" category vs what it actually does. So I
> >>>>> think it is slightly weaker than basing decisions of waits.
> >>>>>
> >>>>> The current waitboost is a bit detached from that problem because when
> >>>>> we waitboost for flips we _know_ it is an actual framebuffer in the flip
> >>>>> chain. When we waitboost for waits we also know someone is waiting. We
> >>>>> are not trusting userspace telling us this will be a buffer in the flip
> >>>>> chain or that this is a context which will have a certain duty-cycle.
> >>>>>
> >>>>> But yes, even if the input is truthful, latter is still only a
> >>>>> heuristics because nothing says all waits are important. AFAIU it just
> >>>>> happened to work well in the past.
> >>>>>
> >>>>> I do understand I am effectively arguing for more heuristics, which may
> >>>>> sound a bit against the common wisdom. This is because in general I
> >>>>> think the logic to do the right thing, be it in the driver or in the
> >>>>> firmware, can work best if it has a holistic view. Simply put it needs
> >>>>> to have more inputs to the decisions it is making.
> >>>>>
> >>>>> That is what my series is proposing - adding a common signal of "someone
> >>>>> in userspace is waiting". What happens with that signal needs not be
> >>>>> defined (promised) in the uapi contract.
> >>>>>
> >>>>> Say you route it to SLPC logic. It doesn't need to do with it what
> >>>>> legacy i915 is doing today. It just needs to do something which works
> >>>>> best for majority of workloads. It can even ignore it if that works for it.
> >>>>>
> >>>>> Finally, back to the immediate problem is when people replace the OpenCL
> >>>>> NEO driver with clvk that performance tanks. Because former does waits
> >>>>> using i915 specific ioctls and so triggers waitboost, latter waits on
> >>>>> syncobj so no waitboost and performance is bad. What short term solution
> >>>>> can we come up with? Or we say to not use clvk? I also wonder if other
> >>>>> Vulkan based stuff is perhaps missing those easy performance gains..
> >>>>>
> >>>>> Perhaps strictly speaking Rob's and mine proposal are not mutually
> >>>>> exclusive. Yes I could piggy back on his with an "immediate deadline for
> >>>>> waits" idea, but they could also be separate concepts if we concluded
> >>>>> "someone is waiting" signal is useful to have. Or it takes to long to
> >>>>> upstream the full deadline idea.
> >>>>
> >>>> Let me re-spin my series and add the syncobj wait flag and i915 bits
> >>>
> >>> I think wait flag is questionable unless it is inverted to imply waits
> >>> which can be de-prioritized (again same parallel with SCHED_IDLE/BATCH).
> >>> Having a flag which "makes things faster" IMO should require elevated
> >>> privilege (to avoid the "arms race") at which point I fear it quickly
> >>> becomes uninteresting.
> >>
> >> I guess you could make the argument in either direction.  Making the
> >> default behavior ramp up clocks could be a power regression.
> >
> > yeap, exactly the media / video conference case.
>
> Yeah I agree. And as not all media use cases are the same, as are not
> all compute contexts someone somewhere will need to run a series of
> workloads for power and performance numbers. Ideally that someone would
> be the entity for which it makes sense to look at all use cases, from
> server room to client, 3d, media and compute for both. If we could get
> the capability to run this in some automated fashion, akin to CI, we
> would even have a chance to keep making good decisions in the future.
>
> Or we do some one off testing for this instance, but we still need a
> range of workloads and parts to do it properly..
>
> >> I also think the "arms race" scenario isn't really as much of a
> >> problem as you think.  There aren't _that_ many things using the GPU
> >> at the same time (compared to # of things using CPU).   And a lot of
> >> mobile games throttle framerate to avoid draining your battery too
> >> quickly (after all, if your battery is dead you can't keep buying loot
> >> boxes or whatever).
> >
> > Very good point.
>
> On this one I still disagree from the point of view that it does not
> make it good uapi if we allow everyone to select themselves for priority
> handling (one flavour or the other).

There is plenty of precedent for userspace giving hints to the kernel
about scheduling and freq mgmt.  Like schedutil uclamp stuff.
Although I think that is all based on cgroups.

In the fence/syncobj case, I think we need per-wait hints.. because
for a single process the driver will be doing both housekeeping waits
and potentially urgent waits.  There may also be some room for some
cgroup or similar knobs to control things like what max priority an
app can ask for, and whether or how aggressively the kernel responds
to the "deadline" hints.  So as far as "arms race", I don't think I'd
change anything about my "fence deadline" proposal.. but that it might
just be one piece of the overall puzzle.
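
As a strawman for the per-wait hint side of it (purely hypothetical
uapi, nothing like this exists in either series; the field layout is
copied from the current syncobj wait args just for illustration):

struct drm_syncobj_wait_hinted {
	__u64 handles;
	__s64 timeout_nsec;
	__u32 count_handles;
	__u32 flags;		/* e.g. a new WAIT_DEADLINE style bit */
	__u32 first_signaled;
	__u32 pad;
	__u64 deadline_nsec;	/* 0 == plain housekeeping wait */
};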

BR,
-R

> > And in the GPU case they rely a lot on the profiles. Which btw, seems
> > to be the Radeon solution. They boost the freq if the high performance
> > profile is selected and don't care about the execution bubbles if low
> > or mid profiles are selected, or something like that.
>
> Profile as something which controls the waitboost globally? What would
> be the mechanism for communicating it to the driver?
>
> Also, how would that reconcile the fact waitboost harms some workloads
> but helps others? If the latter not only improves the performance but
> also efficiency then assuming "battery" profile must mean "waitboost
> off" would be leaving battery life on the table. Conversely, if the "on
> a/c - max performance", would be global "waitboost on", then it could
> even be possible it wouldn't always be truly best performance if it
> causes thermal throttling.
>
> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-20 15:45               ` Rob Clark
@ 2023-02-20 15:56                 ` Tvrtko Ursulin
  0 siblings, 0 replies; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-20 15:56 UTC (permalink / raw)
  To: Rob Clark; +Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi


On 20/02/2023 15:45, Rob Clark wrote:
> On Mon, Feb 20, 2023 at 4:22 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>
>> On 17/02/2023 17:00, Rob Clark wrote:
>>> On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
>>> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>> [snip]
>>
>>>>> adapted from your patches..  I think the basic idea of deadlines
>>>>> (which includes "I want it NOW" ;-)) isn't controversial, but the
>>>>> original idea got caught up in some bikeshed (what about compositors
>>>>> that wait on fences in userspace to decide which surfaces to update in
>>>>> the next frame), plus me getting busy and generally not having a good
>>>>> plan for how to leverage this from VM guests (which is becoming
>>>>> increasingly important for CrOS).  I think I can build on some ongoing
>>>>> virtgpu fencing improvement work to solve the latter.  But now that we
>>>>> have a 2nd use-case for this, it makes sense to respin.
>>>>
>>>> Sure, I was looking at the old version already. It is interesting. But
>>>> also IMO needs quite a bit more work to approach achieving what is
>>>> implied from the name of the feature. It would need proper deadline
>>>> based sched job picking, and even then drm sched is mostly just a
>>>> frontend. So once past runnable status and jobs handed over to backend,
>>>> without further driver work it probably wouldn't be very effective past
>>>> very lightly loaded systems.
>>>
>>> Yes, but all of that is not part of dma_fence ;-)
>>
>> :) Okay.
>>
>> Having said that, do we need a step back to think about whether adding
>> deadline to dma-fences is not making them something too much different
>> to what they were? Going from purely synchronisation primitive more
>> towards scheduling paradigms. Just to brainstorm if there will not be
>> any unintended consequences. I should mention this in your RFC thread
>> actually.
> 
> Perhaps "deadline" isn't quite the right name, but I haven't thought
> of anything better.  It is really a hint to the fence signaller about
> how soon it is interested in a result so the driver can factor that
> into freq scaling decisions.  Maybe "goal" or some other term would be
> better?

Don't know, no strong opinion on the name at the moment. For me it was 
more about the change of what type of side channel data is getting 
attached to dma-fence and whether it changes what the primitive is for.

> I guess that can factor into scheduling decisions as well.. but we
> already have priority for that.  My main interest is freq mgmt.
> 
> (Thankfully we don't have performance and efficiency cores to worry
> about, like CPUs ;-))
> 
>>> A pretty common challenging usecase is still the single fullscreen
>>> game, where scheduling isn't the problem, but landing at an
>>> appropriate GPU freq absolutely is.  (UI workloads are perhaps more
>>> interesting from a scheduler standpoint, but they generally aren't
>>> challenging from a load/freq standpoint.)
>>
>> Challenging as in picking the right operating point? Might be latency
>> impacted (and so user perceived UI smoothness) due missing waitboost for
>> anything syncobj related. I don't know if anything to measure that
>> exists currently though. Assuming it is measurable then the question
>> would be is it perceivable.
>>> Fwiw, the original motivation of the series was to implement something
>>> akin to i915 pageflip boosting without having to abandon the atomic
>>> helpers.  (And, I guess it would also let i915 preserve that feature
>>> if it switched to atomic helpers.. I'm unsure if there are still other
>>> things blocking i915's migration.)
>>
>> Question for display folks I guess.
>>
>>>> Then if we fast forward to a world where schedulers perhaps become fully
>>>> deadline aware (we even had this for i915 few years back) then the
>>>> question will be does equating waits with immediate deadlines still
>>>> works. Maybe not too well because we wouldn't have the ability to
>>>> distinguish between the "someone is waiting" signal from the otherwise
>>>> propagated deadlines.
>>>
>>> Is there any other way to handle a wait boost than expressing it as an
>>> ASAP deadline?
>>
>> A leading question or just a question? Nothing springs to my mind at the
>> moment.
> 
> Just a question.  The immediate deadline is the only thing that makes
> sense to me, but that could be because I'm looking at it from the
> perspective of also trying to handle the case where missing vblank
> reduces utilization and provides the wrong signal to gpufreq.. i915
> already has a way to handle this internally, but it involves bypassing
> the atomic helpers, which isn't a thing I want to encourage other
> drivers to do.  And completely doesn't work for situations where the
> gpu and display are separate devices.

Right, there is yet another angle to discuss with Daniel here who AFAIR 
was a bit against i915 priority inheritance going past a single device 
instance. In which case DRI_PRIME=1 would lose the ability to boost 
frame buffer dependency chains. Opens up the question of deadline 
inheritance across different drivers too. Or perhaps Daniel would be 
okay with this working if implemented at the dma-fence layer.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-20 15:52                 ` Rob Clark
@ 2023-02-20 16:44                   ` Tvrtko Ursulin
  2023-02-20 16:51                     ` Tvrtko Ursulin
  2023-02-20 17:07                     ` Rob Clark
  0 siblings, 2 replies; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-20 16:44 UTC (permalink / raw)
  To: Rob Clark; +Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi


On 20/02/2023 15:52, Rob Clark wrote:
> On Mon, Feb 20, 2023 at 3:33 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>
>> On 17/02/2023 20:45, Rodrigo Vivi wrote:

[snip]

>> Yeah I agree. And as not all media use cases are the same, as are not
>> all compute contexts someone somewhere will need to run a series of
>> workloads for power and performance numbers. Ideally that someone would
>> be the entity for which it makes sense to look at all use cases, from
>> server room to client, 3d, media and compute for both. If we could get
>> the capability to run this in some automated fashion, akin to CI, we
>> would even have a chance to keep making good decisions in the future.
>>
>> Or we do some one off testing for this instance, but we still need a
>> range of workloads and parts to do it properly..
>>
>>>> I also think the "arms race" scenario isn't really as much of a
>>>> problem as you think.  There aren't _that_ many things using the GPU
>>>> at the same time (compared to # of things using CPU).   And a lot of
>>>> mobile games throttle framerate to avoid draining your battery too
>>>> quickly (after all, if your battery is dead you can't keep buying loot
>>>> boxes or whatever).
>>>
>>> Very good point.
>>
>> On this one I still disagree from the point of view that it does not
>> make it good uapi if we allow everyone to select themselves for priority
>> handling (one flavour or the other).
> 
> There is plenty of precedent for userspace giving hints to the kernel
> about scheduling and freq mgmt.  Like schedutil uclamp stuff.
> Although I think that is all based on cgroups.

I knew about SCHED_DEADLINE and that it requires CAP_SYS_NICE, but I did 
not know about uclamp. Quick experiment with uclampset suggests it 
indeed does not require elevated privilege. If that is indeed so, it is 
good enough for me as a precedent.

It appears to work using sched_setattr() so maybe we could define 
something similar in i915/xe, per context or per client, not sure.

Maybe it would start as a primitive implementation but the uapi would 
not preclude making it smart(er) afterwards. Or passing it along to GuC 
to do its thing with it.
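
For reference, what uclampset effectively does boils down to roughly
this userspace sketch (raising uclamp.min on your own task via the
sched_setattr() syscall; no CAP_SYS_NICE needed for that):

#include <linux/sched.h>        /* SCHED_FLAG_* */
#include <linux/sched/types.h>  /* struct sched_attr */
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int raise_uclamp_min(unsigned int min)  /* 0..1024 */
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_flags = SCHED_FLAG_KEEP_ALL | SCHED_FLAG_UTIL_CLAMP_MIN;
	attr.sched_util_min = min;

	/* glibc has no sched_setattr() wrapper, hence the raw syscall */
	return syscall(SYS_sched_setattr, 0, &attr, 0);
}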

> In the fence/syncobj case, I think we need per-wait hints.. because
> for a single process the driver will be doing both housekeeping waits
> and potentially urgent waits.  There may also be some room for some
> cgroup or similar knobs to control things like what max priority an
> app can ask for, and whether or how aggressively the kernel responds
> to the "deadline" hints.  So as far as "arms race", I don't think I'd

Per-wait hints are okay I guess even with "I am important" in their name 
if sched_setattr() allows raising uclamp.min just like that. In which 
case cgroup limits to mimic the cpu uclamp also make sense.

> change anything about my "fence deadline" proposal.. but that it might
> just be one piece of the overall puzzle.

That SCHED_DEADLINE requires CAP_SYS_NICE does not worry you?

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-20 16:44                   ` Tvrtko Ursulin
@ 2023-02-20 16:51                     ` Tvrtko Ursulin
  2023-02-20 17:14                       ` Rob Clark
  2023-02-20 17:07                     ` Rob Clark
  1 sibling, 1 reply; 40+ messages in thread
From: Tvrtko Ursulin @ 2023-02-20 16:51 UTC (permalink / raw)
  To: Rob Clark; +Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi


On 20/02/2023 16:44, Tvrtko Ursulin wrote:
> 
> On 20/02/2023 15:52, Rob Clark wrote:
>> On Mon, Feb 20, 2023 at 3:33 AM Tvrtko Ursulin
>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>
>>>
>>> On 17/02/2023 20:45, Rodrigo Vivi wrote:
> 
> [snip]
> 
>>> Yeah I agree. And as not all media use cases are the same, as are not
>>> all compute contexts someone somewhere will need to run a series of
>>> workloads for power and performance numbers. Ideally that someone would
>>> be the entity for which it makes sense to look at all use cases, from
>>> server room to client, 3d, media and compute for both. If we could get
>>> the capability to run this in some automated fashion, akin to CI, we
>>> would even have a chance to keep making good decisions in the future.
>>>
>>> Or we do some one off testing for this instance, but we still need a
>>> range of workloads and parts to do it properly..
>>>
>>>>> I also think the "arms race" scenario isn't really as much of a
>>>>> problem as you think.  There aren't _that_ many things using the GPU
>>>>> at the same time (compared to # of things using CPU).   And a lot of
>>>>> mobile games throttle framerate to avoid draining your battery too
>>>>> quickly (after all, if your battery is dead you can't keep buying loot
>>>>> boxes or whatever).
>>>>
>>>> Very good point.
>>>
>>> On this one I still disagree from the point of view that it does not
>>> make it good uapi if we allow everyone to select themselves for priority
>>> handling (one flavour or the other).
>>
>> There is plenty of precedent for userspace giving hints to the kernel
>> about scheduling and freq mgmt.  Like schedutil uclamp stuff.
>> Although I think that is all based on cgroups.
> 
> I knew about SCHED_DEADLINE and that it requires CAP_SYS_NICE, but I did 
> not know about uclamp. Quick experiment with uclampset suggests it 
> indeed does not require elevated privilege. If that is indeed so, it is 
> good enough for me as a precedent.
> 
> It appears to work using sched_setscheduler so maybe could define 
> something similar in i915/xe, per context or per client, not sure.
> 
> Maybe it would start as a primitive implementation but the uapi would 
> not preclude making it smart(er) afterwards. Or passing along to GuC to 
> do it's thing with it.

Hmmm having said that, how would we fix clvk performance using that? We 
would either need the library to do a new step when creating contexts, 
or allow external control so an outside entity can do it. And then the 
question is, based on what would it decide to do it? Is it possible to 
know which, for instance, Chrome tab will be (or is) using clvk so that 
the tab management code can do it?

Regards,

Tvrtko

>> In the fence/syncobj case, I think we need per-wait hints.. because
>> for a single process the driver will be doing both housekeeping waits
>> and potentially urgent waits.  There may also be some room for some
>> cgroup or similar knobs to control things like what max priority an
>> app can ask for, and whether or how aggressively the kernel responds
>> to the "deadline" hints.  So as far as "arms race", I don't think I'd
> 
> Per wait hints are okay I guess even with "I am important" in their name 
> if sched_setscheduler allows raising uclamp.min just like that. In which 
> case cgroup limits to mimick cpu uclamp also make sense.
> 
>> change anything about my "fence deadline" proposal.. but that it might
>> just be one piece of the overall puzzle.
> 
> That SCHED_DEADLINE requires CAP_SYS_NICE does not worry you?
> 
> Regards,
> 
> Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-20 16:44                   ` Tvrtko Ursulin
  2023-02-20 16:51                     ` Tvrtko Ursulin
@ 2023-02-20 17:07                     ` Rob Clark
  1 sibling, 0 replies; 40+ messages in thread
From: Rob Clark @ 2023-02-20 17:07 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi

On Mon, Feb 20, 2023 at 8:44 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 20/02/2023 15:52, Rob Clark wrote:
> > On Mon, Feb 20, 2023 at 3:33 AM Tvrtko Ursulin
> > <tvrtko.ursulin@linux.intel.com> wrote:
> >>
> >>
> >> On 17/02/2023 20:45, Rodrigo Vivi wrote:
>
> [snip]
>
> >> Yeah I agree. And as not all media use cases are the same, as are not
> >> all compute contexts someone somewhere will need to run a series of
> >> workloads for power and performance numbers. Ideally that someone would
> >> be the entity for which it makes sense to look at all use cases, from
> >> server room to client, 3d, media and compute for both. If we could get
> >> the capability to run this in some automated fashion, akin to CI, we
> >> would even have a chance to keep making good decisions in the future.
> >>
> >> Or we do some one off testing for this instance, but we still need a
> >> range of workloads and parts to do it properly..
> >>
> >>>> I also think the "arms race" scenario isn't really as much of a
> >>>> problem as you think.  There aren't _that_ many things using the GPU
> >>>> at the same time (compared to # of things using CPU).   And a lot of
> >>>> mobile games throttle framerate to avoid draining your battery too
> >>>> quickly (after all, if your battery is dead you can't keep buying loot
> >>>> boxes or whatever).
> >>>
> >>> Very good point.
> >>
> >> On this one I still disagree from the point of view that it does not
> >> make it good uapi if we allow everyone to select themselves for priority
> >> handling (one flavour or the other).
> >
> > There is plenty of precedent for userspace giving hints to the kernel
> > about scheduling and freq mgmt.  Like schedutil uclamp stuff.
> > Although I think that is all based on cgroups.
>
> I knew about SCHED_DEADLINE and that it requires CAP_SYS_NICE, but I did
> not know about uclamp. Quick experiment with uclampset suggests it
> indeed does not require elevated privilege. If that is indeed so, it is
> good enough for me as a precedent.
>
> It appears to work using sched_setscheduler so maybe could define
> something similar in i915/xe, per context or per client, not sure.
>
> Maybe it would start as a primitive implementation but the uapi would
> not preclude making it smart(er) afterwards. Or passing along to GuC to
> do it's thing with it.
>
> > In the fence/syncobj case, I think we need per-wait hints.. because
> > for a single process the driver will be doing both housekeeping waits
> > and potentially urgent waits.  There may also be some room for some
> > cgroup or similar knobs to control things like what max priority an
> > app can ask for, and whether or how aggressively the kernel responds
> > to the "deadline" hints.  So as far as "arms race", I don't think I'd
>
> Per wait hints are okay I guess even with "I am important" in their name
> if sched_setscheduler allows raising uclamp.min just like that. In which
> case cgroup limits to mimick cpu uclamp also make sense.
>
> > change anything about my "fence deadline" proposal.. but that it might
> > just be one piece of the overall puzzle.
>
> That SCHED_DEADLINE requires CAP_SYS_NICE does not worry you?

This gets to why the name "fence deadline" is perhaps not the best..
it really isn't meant to be analogous to SCHED_DEADLINE, but rather
just a hint to the driver about what userspace is doing.  Maybe we
just document it more strongly as a hint?

BR,
-R

> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits
  2023-02-20 16:51                     ` Tvrtko Ursulin
@ 2023-02-20 17:14                       ` Rob Clark
  0 siblings, 0 replies; 40+ messages in thread
From: Rob Clark @ 2023-02-20 17:14 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Alex Deucher, Rob Clark, Intel-gfx, dri-devel, Rodrigo Vivi

On Mon, Feb 20, 2023 at 8:51 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 20/02/2023 16:44, Tvrtko Ursulin wrote:
> >
> > On 20/02/2023 15:52, Rob Clark wrote:
> >> On Mon, Feb 20, 2023 at 3:33 AM Tvrtko Ursulin
> >> <tvrtko.ursulin@linux.intel.com> wrote:
> >>>
> >>>
> >>> On 17/02/2023 20:45, Rodrigo Vivi wrote:
> >
> > [snip]
> >
> >>> Yeah I agree. And as not all media use cases are the same, as are not
> >>> all compute contexts someone somewhere will need to run a series of
> >>> workloads for power and performance numbers. Ideally that someone would
> >>> be the entity for which it makes sense to look at all use cases, from
> >>> server room to client, 3d, media and compute for both. If we could get
> >>> the capability to run this in some automated fashion, akin to CI, we
> >>> would even have a chance to keep making good decisions in the future.
> >>>
> >>> Or we do some one off testing for this instance, but we still need a
> >>> range of workloads and parts to do it properly..
> >>>
> >>>>> I also think the "arms race" scenario isn't really as much of a
> >>>>> problem as you think.  There aren't _that_ many things using the GPU
> >>>>> at the same time (compared to # of things using CPU).   And a lot of
> >>>>> mobile games throttle framerate to avoid draining your battery too
> >>>>> quickly (after all, if your battery is dead you can't keep buying loot
> >>>>> boxes or whatever).
> >>>>
> >>>> Very good point.
> >>>
> >>> On this one I still disagree from the point of view that it does not
> >>> make it good uapi if we allow everyone to select themselves for priority
> >>> handling (one flavour or the other).
> >>
> >> There is plenty of precedent for userspace giving hints to the kernel
> >> about scheduling and freq mgmt.  Like schedutil uclamp stuff.
> >> Although I think that is all based on cgroups.
> >
> > I knew about SCHED_DEADLINE and that it requires CAP_SYS_NICE, but I did
> > not know about uclamp. Quick experiment with uclampset suggests it
> > indeed does not require elevated privilege. If that is indeed so, it is
> > good enough for me as a precedent.
> >
> > It appears to work using sched_setscheduler so maybe could define
> > something similar in i915/xe, per context or per client, not sure.
> >
> > Maybe it would start as a primitive implementation but the uapi would
> > not preclude making it smart(er) afterwards. Or passing along to GuC to
> > do it's thing with it.
>
> Hmmm having said that, how would we fix clvk performance using that? We
> would either need the library to do a new step when creating contexts,
> or allow external control so outside entity can do it. And then the
> question is based on what it decides to do it? Is it possible to know
> which, for instance, Chrome tab will be (or is) using clvk so that tab
> management code does it?

I am not sure.. the clvk usage is, I think, not actually in chrome
itself, but something camera related?

Presumably we could build some cgroup knobs to control how the driver
reacts to the "deadline" hints (ie. ignore them completely, or impose
some upper limit on how much freq boost will be applied, etc).  I
think this sort of control of how the driver responds to hints
probably fits best with cgroups, as that is how we are already
implementing similar tuning for cpufreq/sched.  (Ie. foreground app or
tab gets moved to a different cgroup.)  But admittedly I haven't
looked too closely at how cgroups work on the kernel side.
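
Something like this is the flavour of knob I mean - purely hypothetical,
no such controller or file exists, names made up for illustration:

/*
 * Hypothetical cgroup file capping how aggressively the driver may
 * respond to deadline hints for tasks in the group.
 */
static struct cftype hypothetical_drm_cg_files[] = {
	{
		.name		= "boost_cap_pct",
		.flags		= CFTYPE_NOT_ON_ROOT,
		.read_u64	= drm_cg_boost_cap_read,
		.write_u64	= drm_cg_boost_cap_write,
	},
	{ }	/* terminate */
};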

BR,
-R

> Regards,
>
> Tvrtko
>
> >> In the fence/syncobj case, I think we need per-wait hints.. because
> >> for a single process the driver will be doing both housekeeping waits
> >> and potentially urgent waits.  There may also be some room for some
> >> cgroup or similar knobs to control things like what max priority an
> >> app can ask for, and whether or how aggressively the kernel responds
> >> to the "deadline" hints.  So as far as "arms race", I don't think I'd
> >
> > Per wait hints are okay I guess even with "I am important" in their name
> > if sched_setscheduler allows raising uclamp.min just like that. In which
> > case cgroup limits to mimick cpu uclamp also make sense.
> >
> >> change anything about my "fence deadline" proposal.. but that it might
> >> just be one piece of the overall puzzle.
> >
> > That SCHED_DEADLINE requires CAP_SYS_NICE does not worry you?
> >
> > Regards,
> >
> > Tvrtko

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2023-02-20 17:14 UTC | newest]

Thread overview: 40+ messages
2023-02-10 13:06 [RFC v2 0/5] Waitboost drm syncobj waits Tvrtko Ursulin
2023-02-10 13:06 ` [Intel-gfx] " Tvrtko Ursulin
2023-02-10 13:06 ` [RFC 1/5] dma-fence: Track explicit waiters Tvrtko Ursulin
2023-02-10 13:06   ` [Intel-gfx] " Tvrtko Ursulin
2023-02-10 13:06 ` [RFC 2/5] drm/syncobj: Mark syncobj waits as external waiters Tvrtko Ursulin
2023-02-10 13:06   ` [Intel-gfx] " Tvrtko Ursulin
2023-02-10 13:06 ` [RFC 3/5] drm/i915: Waitboost external waits Tvrtko Ursulin
2023-02-10 13:06   ` [Intel-gfx] " Tvrtko Ursulin
2023-02-10 13:06 ` [RFC 4/5] drm/i915: Mark waits as explicit Tvrtko Ursulin
2023-02-10 13:06   ` [Intel-gfx] " Tvrtko Ursulin
2023-02-10 13:06 ` [RFC 5/5] drm/i915: Wait boost requests waited upon by others Tvrtko Ursulin
2023-02-10 13:06   ` [Intel-gfx] " Tvrtko Ursulin
2023-02-10 13:24 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Waitboost drm syncobj waits (rev2) Patchwork
2023-02-10 17:05 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2023-02-14 19:14 ` [Intel-gfx] [RFC v2 0/5] Waitboost drm syncobj waits Rob Clark
2023-02-14 19:26   ` Rob Clark
2023-02-16 11:19   ` Tvrtko Ursulin
2023-02-16 15:43     ` Rob Clark
2023-02-16 15:43       ` Rob Clark
2023-02-16 18:19   ` Rodrigo Vivi
2023-02-16 18:19     ` Rodrigo Vivi
2023-02-16 19:59     ` Rob Clark
2023-02-16 19:59       ` Rob Clark
2023-02-17 12:56     ` Tvrtko Ursulin
2023-02-17 14:55       ` Rob Clark
2023-02-17 16:03         ` Tvrtko Ursulin
2023-02-17 17:00           ` Rob Clark
2023-02-17 20:45             ` Rodrigo Vivi
2023-02-17 20:45               ` Rodrigo Vivi
2023-02-17 23:38               ` Rob Clark
2023-02-17 23:38                 ` Rob Clark
2023-02-20 11:33               ` Tvrtko Ursulin
2023-02-20 15:52                 ` Rob Clark
2023-02-20 16:44                   ` Tvrtko Ursulin
2023-02-20 16:51                     ` Tvrtko Ursulin
2023-02-20 17:14                       ` Rob Clark
2023-02-20 17:07                     ` Rob Clark
2023-02-20 12:22             ` Tvrtko Ursulin
2023-02-20 15:45               ` Rob Clark
2023-02-20 15:56                 ` Tvrtko Ursulin
