linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 00/14] dma-fence: Deadline awareness
@ 2023-02-18 21:15 Rob Clark
  2023-02-18 21:15 ` [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness Rob Clark
                   ` (13 more replies)
  0 siblings, 14 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Gustavo Padovan, intel-gfx,
	moderated list:DMA BUFFER SHARING FRAMEWORK,
	open list:DRM DRIVER FOR MSM ADRENO GPU, open list,
	open list:DMA BUFFER SHARING FRAMEWORK, Sean Paul

From: Rob Clark <robdclark@chromium.org>

This series adds deadline awareness to fences, so realtime deadlines
such as vblank can be communicated to the fence signaller for power/
frequency management decisions.

This is partially inspired by a trick i915 does, but implemented
via dma-fence for a couple of reasons:

1) To continue to be able to use the atomic helpers
2) To support cases where display and gpu are different drivers

This iteration adds a dma-fence ioctl to set a deadline (both to
support igt-tests, and compositors which delay decisions about which
client buffer to display), and a sw_sync ioctl to read back the
deadline.  IGT tests utilizing these can be found at:

  https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadline


v1: https://patchwork.freedesktop.org/series/93035/
v2: Move filtering out of later deadlines to fence implementation
    to avoid increasing the size of dma_fence
v3: Add support in fence-array and fence-chain; Add some uabi to
    support igt tests and userspace compositors.
v4: Rebase, address various comments, and add syncobj deadline
    support, and sync_file EPOLLPRI based on experience with perf/
    freq issues with clvk compute workloads on i915 (anv)

Rob Clark (14):
  dma-buf/dma-fence: Add deadline awareness
  dma-buf/fence-array: Add fence deadline support
  dma-buf/fence-chain: Add fence deadline support
  dma-buf/dma-resv: Add a way to set fence deadline
  dma-buf/sync_file: Add SET_DEADLINE ioctl
  dma-buf/sync_file: Support (E)POLLPRI
  dma-buf/sw_sync: Add fence deadline support
  drm/scheduler: Add fence deadline support
  drm/syncobj: Add deadline support for syncobj waits
  drm/vblank: Add helper to get next vblank time
  drm/atomic-helper: Set fence deadline for vblank
  drm/msm: Add deadline based boost support
  drm/msm: Add wait-boost support
  drm/i915: Add deadline based boost support

 drivers/dma-buf/dma-fence-array.c       | 11 ++++
 drivers/dma-buf/dma-fence-chain.c       | 13 +++++
 drivers/dma-buf/dma-fence.c             | 20 +++++++
 drivers/dma-buf/dma-resv.c              | 19 +++++++
 drivers/dma-buf/sw_sync.c               | 58 +++++++++++++++++++
 drivers/dma-buf/sync_debug.h            |  2 +
 drivers/dma-buf/sync_file.c             | 27 +++++++++
 drivers/gpu/drm/drm_atomic_helper.c     | 36 ++++++++++++
 drivers/gpu/drm/drm_ioctl.c             |  3 +
 drivers/gpu/drm/drm_syncobj.c           | 59 ++++++++++++++++----
 drivers/gpu/drm/drm_vblank.c            | 32 +++++++++++
 drivers/gpu/drm/i915/i915_driver.c      |  2 +-
 drivers/gpu/drm/i915/i915_request.c     | 20 +++++++
 drivers/gpu/drm/msm/msm_drv.c           | 16 ++++--
 drivers/gpu/drm/msm/msm_fence.c         | 74 +++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_fence.h         | 20 +++++++
 drivers/gpu/drm/msm/msm_gem.c           |  5 ++
 drivers/gpu/drm/scheduler/sched_fence.c | 46 +++++++++++++++
 drivers/gpu/drm/scheduler/sched_main.c  |  2 +-
 include/drm/drm_drv.h                   |  6 ++
 include/drm/drm_vblank.h                |  1 +
 include/drm/gpu_scheduler.h             |  8 +++
 include/linux/dma-fence.h               | 20 +++++++
 include/linux/dma-resv.h                |  2 +
 include/uapi/drm/drm.h                  | 16 +++++-
 include/uapi/drm/msm_drm.h              | 14 ++++-
 include/uapi/linux/sync_file.h          | 22 ++++++++
 27 files changed, 532 insertions(+), 22 deletions(-)

-- 
2.39.1


^ permalink raw reply	[flat|nested] 93+ messages in thread

* [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-22 10:23   ` Tvrtko Ursulin
  2023-02-22 11:01   ` Luben Tuikov
  2023-02-18 21:15 ` [PATCH v4 02/14] dma-buf/fence-array: Add fence deadline support Rob Clark
                   ` (12 subsequent siblings)
  13 siblings, 2 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Christian König,
	Sumit Semwal, Gustavo Padovan, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

From: Rob Clark <robdclark@chromium.org>

Add a way to hint to the fence signaler of an upcoming deadline, such as
vblank, which the fence waiter would prefer not to miss.  This is to aid
the fence signaler in making power management decisions, like boosting
frequency as the deadline approaches, and in tracking missed deadlines
so that they can be factored into the frequency scaling.

v2: Drop dma_fence::deadline and related logic to filter duplicate
    deadlines, to avoid increasing dma_fence size.  The fence-context
    implementation will need similar logic to track deadlines of all
    the fences on the same timeline.  [ckoenig]
v3: Clarify locking wrt. set_deadline callback

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-fence.c | 20 ++++++++++++++++++++
 include/linux/dma-fence.h   | 20 ++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 0de0482cd36e..763b32627684 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -912,6 +912,26 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 }
 EXPORT_SYMBOL(dma_fence_wait_any_timeout);
 
+
+/**
+ * dma_fence_set_deadline - set desired fence-wait deadline
+ * @fence:    the fence that is to be waited on
+ * @deadline: the time by which the waiter hopes for the fence to be
+ *            signaled
+ *
+ * Inform the fence signaler of an upcoming deadline, such as vblank, by
+ * which point the waiter would prefer the fence to be signaled.  This
+ * is intended to give feedback to the fence signaler to aid in power
+ * management decisions, such as boosting GPU frequency if a periodic
+ * vblank deadline is approaching.
+ */
+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
+{
+	if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
+		fence->ops->set_deadline(fence, deadline);
+}
+EXPORT_SYMBOL(dma_fence_set_deadline);
+
 /**
  * dma_fence_describe - Dump fence describtion into seq_file
 * @fence: the fence to describe
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 775cdc0b4f24..d77f6591c453 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -99,6 +99,7 @@ enum dma_fence_flag_bits {
 	DMA_FENCE_FLAG_SIGNALED_BIT,
 	DMA_FENCE_FLAG_TIMESTAMP_BIT,
 	DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
+	DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
 	DMA_FENCE_FLAG_USER_BITS, /* must always be last member */
 };
 
@@ -257,6 +258,23 @@ struct dma_fence_ops {
 	 */
 	void (*timeline_value_str)(struct dma_fence *fence,
 				   char *str, int size);
+
+	/**
+	 * @set_deadline:
+	 *
+	 * Callback to allow a fence waiter to inform the fence signaler of
+	 * an upcoming deadline, such as vblank, by which point the waiter
+	 * would prefer the fence to be signaled.  This is intended to
+	 * give feedback to the fence signaler to aid in power management
+	 * decisions, such as boosting GPU frequency.
+	 *
+	 * This is called without &dma_fence.lock held; it can be called
+	 * multiple times and from any context.  Locking is up to the callee
+	 * if it has some state to manage.
+	 *
+	 * This callback is optional.
+	 */
+	void (*set_deadline)(struct dma_fence *fence, ktime_t deadline);
 };
 
 void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
@@ -583,6 +601,8 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
 	return ret < 0 ? ret : 0;
 }
 
+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline);
+
 struct dma_fence *dma_fence_get_stub(void);
 struct dma_fence *dma_fence_allocate_private_stub(void);
 u64 dma_fence_context_alloc(unsigned num);
-- 
2.39.1



* [PATCH v4 02/14] dma-buf/fence-array: Add fence deadline support
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
  2023-02-18 21:15 ` [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-18 21:15 ` [PATCH v4 03/14] dma-buf/fence-chain: " Rob Clark
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Christian König,
	Sumit Semwal, Gustavo Padovan, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

From: Rob Clark <robdclark@chromium.org>

Propagate the deadline to all the fences in the array.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-fence-array.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
index 5c8a7084577b..9b3ce8948351 100644
--- a/drivers/dma-buf/dma-fence-array.c
+++ b/drivers/dma-buf/dma-fence-array.c
@@ -123,12 +123,23 @@ static void dma_fence_array_release(struct dma_fence *fence)
 	dma_fence_free(fence);
 }
 
+static void dma_fence_array_set_deadline(struct dma_fence *fence,
+					 ktime_t deadline)
+{
+	struct dma_fence_array *array = to_dma_fence_array(fence);
+	unsigned i;
+
+	for (i = 0; i < array->num_fences; ++i)
+		dma_fence_set_deadline(array->fences[i], deadline);
+}
+
 const struct dma_fence_ops dma_fence_array_ops = {
 	.get_driver_name = dma_fence_array_get_driver_name,
 	.get_timeline_name = dma_fence_array_get_timeline_name,
 	.enable_signaling = dma_fence_array_enable_signaling,
 	.signaled = dma_fence_array_signaled,
 	.release = dma_fence_array_release,
+	.set_deadline = dma_fence_array_set_deadline,
 };
 EXPORT_SYMBOL(dma_fence_array_ops);
 
-- 
2.39.1



* [PATCH v4 03/14] dma-buf/fence-chain: Add fence deadline support
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
  2023-02-18 21:15 ` [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness Rob Clark
  2023-02-18 21:15 ` [PATCH v4 02/14] dma-buf/fence-array: Add fence deadline support Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-22 10:27   ` Tvrtko Ursulin
  2023-02-18 21:15 ` [PATCH v4 04/14] dma-buf/dma-resv: Add a way to set fence deadline Rob Clark
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Christian König,
	Sumit Semwal, Gustavo Padovan, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

From: Rob Clark <robdclark@chromium.org>

Propagate the deadline to all the fences in the chain.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-fence-chain.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
index a0d920576ba6..4684874af612 100644
--- a/drivers/dma-buf/dma-fence-chain.c
+++ b/drivers/dma-buf/dma-fence-chain.c
@@ -206,6 +206,18 @@ static void dma_fence_chain_release(struct dma_fence *fence)
 	dma_fence_free(fence);
 }
 
+
+static void dma_fence_chain_set_deadline(struct dma_fence *fence,
+					 ktime_t deadline)
+{
+	dma_fence_chain_for_each(fence, fence) {
+		struct dma_fence_chain *chain = to_dma_fence_chain(fence);
+		struct dma_fence *f = chain ? chain->fence : fence;
+
+		dma_fence_set_deadline(f, deadline);
+	}
+}
+
 const struct dma_fence_ops dma_fence_chain_ops = {
 	.use_64bit_seqno = true,
 	.get_driver_name = dma_fence_chain_get_driver_name,
@@ -213,6 +225,7 @@ const struct dma_fence_ops dma_fence_chain_ops = {
 	.enable_signaling = dma_fence_chain_enable_signaling,
 	.signaled = dma_fence_chain_signaled,
 	.release = dma_fence_chain_release,
+	.set_deadline = dma_fence_chain_set_deadline,
 };
 EXPORT_SYMBOL(dma_fence_chain_ops);
 
-- 
2.39.1



* [PATCH v4 04/14] dma-buf/dma-resv: Add a way to set fence deadline
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (2 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 03/14] dma-buf/fence-chain: " Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-20  8:16   ` Christian König
  2023-02-18 21:15 ` [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl Rob Clark
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Sumit Semwal,
	Christian König, open list:DMA BUFFER SHARING FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

From: Rob Clark <robdclark@chromium.org>

Add a way to set a deadline on remaining resv fences according to the
requested usage.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/dma-buf/dma-resv.c | 19 +++++++++++++++++++
 include/linux/dma-resv.h   |  2 ++
 2 files changed, 21 insertions(+)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 1c76aed8e262..0c86f6d577ab 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -684,6 +684,25 @@ long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
 }
 EXPORT_SYMBOL_GPL(dma_resv_wait_timeout);
 
+/**
+ * dma_resv_set_deadline - Set a deadline on reservation's objects fences
+ * @obj: the reservation object
+ * @usage: controls which fences to include, see enum dma_resv_usage.
+ * @deadline: the requested deadline (MONOTONIC)
+ */
+void dma_resv_set_deadline(struct dma_resv *obj, enum dma_resv_usage usage,
+			   ktime_t deadline)
+{
+	struct dma_resv_iter cursor;
+	struct dma_fence *fence;
+
+	dma_resv_iter_begin(&cursor, obj, usage);
+	dma_resv_for_each_fence_unlocked(&cursor, fence) {
+		dma_fence_set_deadline(fence, deadline);
+	}
+	dma_resv_iter_end(&cursor);
+}
+EXPORT_SYMBOL_GPL(dma_resv_set_deadline);
 
 /**
  * dma_resv_test_signaled - Test if a reservation object's fences have been
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 0637659a702c..8d0e34dad446 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -479,6 +479,8 @@ int dma_resv_get_singleton(struct dma_resv *obj, enum dma_resv_usage usage,
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
 			   bool intr, unsigned long timeout);
+void dma_resv_set_deadline(struct dma_resv *obj, enum dma_resv_usage usage,
+			   ktime_t deadline);
 bool dma_resv_test_signaled(struct dma_resv *obj, enum dma_resv_usage usage);
 void dma_resv_describe(struct dma_resv *obj, struct seq_file *seq);
 
-- 
2.39.1



* [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (3 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 04/14] dma-buf/dma-resv: Add a way to set fence deadline Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-20  8:27   ` Christian König
  2023-02-20  8:48   ` Pekka Paalanen
  2023-02-18 21:15 ` [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI Rob Clark
                   ` (8 subsequent siblings)
  13 siblings, 2 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Sumit Semwal,
	Gustavo Padovan, Christian König,
	open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

From: Rob Clark <robdclark@chromium.org>

The initial purpose is for igt tests, but this would also be useful for
compositors that wait until close to the vblank deadline to make decisions
about which frame to show.

The igt tests can be found at:

https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadline

v2: Clarify the timebase, add link to igt tests

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/dma-buf/sync_file.c    | 19 +++++++++++++++++++
 include/uapi/linux/sync_file.h | 22 ++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
index af57799c86ce..fb6ca1032885 100644
--- a/drivers/dma-buf/sync_file.c
+++ b/drivers/dma-buf/sync_file.c
@@ -350,6 +350,22 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
 	return ret;
 }
 
+static int sync_file_ioctl_set_deadline(struct sync_file *sync_file,
+					unsigned long arg)
+{
+	struct sync_set_deadline ts;
+
+	if (copy_from_user(&ts, (void __user *)arg, sizeof(ts)))
+		return -EFAULT;
+
+	if (ts.pad)
+		return -EINVAL;
+
+	dma_fence_set_deadline(sync_file->fence, ktime_set(ts.tv_sec, ts.tv_nsec));
+
+	return 0;
+}
+
 static long sync_file_ioctl(struct file *file, unsigned int cmd,
 			    unsigned long arg)
 {
@@ -362,6 +378,9 @@ static long sync_file_ioctl(struct file *file, unsigned int cmd,
 	case SYNC_IOC_FILE_INFO:
 		return sync_file_ioctl_fence_info(sync_file, arg);
 
+	case SYNC_IOC_SET_DEADLINE:
+		return sync_file_ioctl_set_deadline(sync_file, arg);
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h
index ee2dcfb3d660..c8666580816f 100644
--- a/include/uapi/linux/sync_file.h
+++ b/include/uapi/linux/sync_file.h
@@ -67,6 +67,20 @@ struct sync_file_info {
 	__u64	sync_fence_info;
 };
 
+/**
+ * struct sync_set_deadline - set a deadline on a fence
+ * @tv_sec:	seconds elapsed since epoch
+ * @tv_nsec:	nanoseconds elapsed since the time given by tv_sec
+ * @pad:	must be zero
+ *
+ * The timebase for the deadline is CLOCK_MONOTONIC (same as vblank)
+ */
+struct sync_set_deadline {
+	__s64	tv_sec;
+	__s32	tv_nsec;
+	__u32	pad;
+};
+
 #define SYNC_IOC_MAGIC		'>'
 
 /**
@@ -95,4 +109,12 @@ struct sync_file_info {
  */
 #define SYNC_IOC_FILE_INFO	_IOWR(SYNC_IOC_MAGIC, 4, struct sync_file_info)
 
+
+/**
+ * DOC: SYNC_IOC_SET_DEADLINE - set a deadline on a fence
+ *
+ * Allows userspace to set a deadline on a fence, see dma_fence_set_deadline()
+ */
+#define SYNC_IOC_SET_DEADLINE	_IOW(SYNC_IOC_MAGIC, 5, struct sync_set_deadline)
+
 #endif /* _UAPI_LINUX_SYNC_H */
-- 
2.39.1



* [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (4 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-20  8:31   ` Christian König
  2023-02-20  8:53   ` Pekka Paalanen
  2023-02-18 21:15 ` [PATCH v4 07/14] dma-buf/sw_sync: Add fence deadline support Rob Clark
                   ` (7 subsequent siblings)
  13 siblings, 2 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Sumit Semwal,
	Gustavo Padovan, Christian König,
	open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

From: Rob Clark <robdclark@chromium.org>

Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
wait (as opposed to a "housekeeping" wait to know when to cleanup after
some work has completed).  Usermode components of GPU driver stacks
often poll() on fence fds to know when it is safe to do things like
free or reuse a buffer, but they can also poll() on a fence fd when
waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
lets the kernel differentiate these two cases.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/dma-buf/sync_file.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
index fb6ca1032885..c30b2085ee0a 100644
--- a/drivers/dma-buf/sync_file.c
+++ b/drivers/dma-buf/sync_file.c
@@ -192,6 +192,14 @@ static __poll_t sync_file_poll(struct file *file, poll_table *wait)
 {
 	struct sync_file *sync_file = file->private_data;
 
+	/*
+	 * The POLLPRI/EPOLLPRI flag can be used to signal that
+	 * userspace wants the fence to signal ASAP; express this
+	 * as an immediate deadline.
+	 */
+	if (poll_requested_events(wait) & EPOLLPRI)
+		dma_fence_set_deadline(sync_file->fence, ktime_get());
+
 	poll_wait(file, &sync_file->wq, wait);
 
 	if (list_empty(&sync_file->cb.node) &&
-- 
2.39.1



* [PATCH v4 07/14] dma-buf/sw_sync: Add fence deadline support
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (5 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-20  8:29   ` Christian König
  2023-02-18 21:15 ` [PATCH v4 08/14] drm/scheduler: " Rob Clark
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Sumit Semwal,
	Gustavo Padovan, Christian König,
	open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

From: Rob Clark <robdclark@chromium.org>

This consists of simply storing the most recent deadline, and adding an
ioctl to retrieve the deadline.  This can be used in conjunction with
the SET_DEADLINE ioctl on a fence fd for testing.  I.e. create various
sw_sync fences, merge them into a fence-array, set deadline on the
fence-array and confirm that it is propagated properly to each fence.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/dma-buf/sw_sync.c    | 58 ++++++++++++++++++++++++++++++++++++
 drivers/dma-buf/sync_debug.h |  2 ++
 2 files changed, 60 insertions(+)

diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
index 348b3a9170fa..50f2638cccd3 100644
--- a/drivers/dma-buf/sw_sync.c
+++ b/drivers/dma-buf/sw_sync.c
@@ -52,12 +52,26 @@ struct sw_sync_create_fence_data {
 	__s32	fence; /* fd of new fence */
 };
 
+/**
+ * struct sw_sync_get_deadline - get the deadline of a sw_sync fence
+ * @tv_sec:	seconds elapsed since epoch (out)
+ * @tv_nsec:	nanoseconds elapsed since the time given by tv_sec (out)
+ * @fence_fd:	the sw_sync fence fd (in)
+ */
+struct sw_sync_get_deadline {
+	__s64	tv_sec;
+	__s32	tv_nsec;
+	__s32	fence_fd;
+};
+
 #define SW_SYNC_IOC_MAGIC	'W'
 
 #define SW_SYNC_IOC_CREATE_FENCE	_IOWR(SW_SYNC_IOC_MAGIC, 0,\
 		struct sw_sync_create_fence_data)
 
 #define SW_SYNC_IOC_INC			_IOW(SW_SYNC_IOC_MAGIC, 1, __u32)
+#define SW_SYNC_GET_DEADLINE		_IOWR(SW_SYNC_IOC_MAGIC, 2, \
+		struct sw_sync_get_deadline)
 
 static const struct dma_fence_ops timeline_fence_ops;
 
@@ -171,6 +185,13 @@ static void timeline_fence_timeline_value_str(struct dma_fence *fence,
 	snprintf(str, size, "%d", parent->value);
 }
 
+static void timeline_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
+{
+	struct sync_pt *pt = dma_fence_to_sync_pt(fence);
+
+	pt->deadline = deadline;
+}
+
 static const struct dma_fence_ops timeline_fence_ops = {
 	.get_driver_name = timeline_fence_get_driver_name,
 	.get_timeline_name = timeline_fence_get_timeline_name,
@@ -179,6 +200,7 @@ static const struct dma_fence_ops timeline_fence_ops = {
 	.release = timeline_fence_release,
 	.fence_value_str = timeline_fence_value_str,
 	.timeline_value_str = timeline_fence_timeline_value_str,
+	.set_deadline = timeline_fence_set_deadline,
 };
 
 /**
@@ -387,6 +409,39 @@ static long sw_sync_ioctl_inc(struct sync_timeline *obj, unsigned long arg)
 	return 0;
 }
 
+static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long arg)
+{
+	struct sw_sync_get_deadline data;
+	struct timespec64 ts;
+	struct dma_fence *fence;
+	struct sync_pt *pt;
+
+	if (copy_from_user(&data, (void __user *)arg, sizeof(data)))
+		return -EFAULT;
+
+	if (data.tv_sec || data.tv_nsec)
+		return -EINVAL;
+
+	fence = sync_file_get_fence(data.fence_fd);
+	if (!fence)
+		return -EINVAL;
+
+	pt = dma_fence_to_sync_pt(fence);
+	if (!pt)
+		return -EINVAL;
+
+	ts = ktime_to_timespec64(pt->deadline);
+	data.tv_sec  = ts.tv_sec;
+	data.tv_nsec = ts.tv_nsec;
+
+	dma_fence_put(fence);
+
+	if (copy_to_user((void __user *)arg, &data, sizeof(data)))
+		return -EFAULT;
+
+	return 0;
+}
+
 static long sw_sync_ioctl(struct file *file, unsigned int cmd,
 			  unsigned long arg)
 {
@@ -399,6 +454,9 @@ static long sw_sync_ioctl(struct file *file, unsigned int cmd,
 	case SW_SYNC_IOC_INC:
 		return sw_sync_ioctl_inc(obj, arg);
 
+	case SW_SYNC_GET_DEADLINE:
+		return sw_sync_ioctl_get_deadline(obj, arg);
+
 	default:
 		return -ENOTTY;
 	}
diff --git a/drivers/dma-buf/sync_debug.h b/drivers/dma-buf/sync_debug.h
index 6176e52ba2d7..2e0146d0bdbb 100644
--- a/drivers/dma-buf/sync_debug.h
+++ b/drivers/dma-buf/sync_debug.h
@@ -55,11 +55,13 @@ static inline struct sync_timeline *dma_fence_parent(struct dma_fence *fence)
  * @base: base fence object
  * @link: link on the sync timeline's list
  * @node: node in the sync timeline's tree
+ * @deadline: the most recently set fence deadline
  */
 struct sync_pt {
 	struct dma_fence base;
 	struct list_head link;
 	struct rb_node node;
+	ktime_t deadline;
 };
 
 extern const struct file_operations sw_sync_debugfs_fops;
-- 
2.39.1



* [PATCH v4 08/14] drm/scheduler: Add fence deadline support
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (6 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 07/14] dma-buf/sw_sync: Add fence deadline support Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-21 19:40   ` Luben Tuikov
  2023-02-18 21:15 ` [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits Rob Clark
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Rob Clark, Luben Tuikov,
	David Airlie, Sumit Semwal, Christian König, open list,
	open list:DMA BUFFER SHARING FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK

As the finished fence is the one that is exposed to userspace, and
therefore the one that other operations, like atomic update, would
block on, we need to propagate the deadline from the finished
fence to the actual hw fence.

v2: Split into drm_sched_fence_set_parent() (ckoenig)
v3: Ensure a thread calling drm_sched_fence_set_deadline_finished() sees
    fence->parent set before drm_sched_fence_set_parent() does its
    test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT) check.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/gpu/drm/scheduler/sched_fence.c | 46 +++++++++++++++++++++++++
 drivers/gpu/drm/scheduler/sched_main.c  |  2 +-
 include/drm/gpu_scheduler.h             |  8 +++++
 3 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index 7fd869520ef2..43e2d4f5fe3b 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -123,6 +123,37 @@ static void drm_sched_fence_release_finished(struct dma_fence *f)
 	dma_fence_put(&fence->scheduled);
 }
 
+static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
+						  ktime_t deadline)
+{
+	struct drm_sched_fence *fence = to_drm_sched_fence(f);
+	struct dma_fence *parent;
+	unsigned long flags;
+
+	spin_lock_irqsave(&fence->lock, flags);
+
+	/* If we already have an earlier deadline, keep it: */
+	if (test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
+	    ktime_before(fence->deadline, deadline)) {
+		spin_unlock_irqrestore(&fence->lock, flags);
+		return;
+	}
+
+	fence->deadline = deadline;
+	set_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
+
+	spin_unlock_irqrestore(&fence->lock, flags);
+
+	/*
+	 * smp_load_acquire() to ensure that if we are racing another
+	 * thread calling drm_sched_fence_set_parent(), we see
+	 * the parent set before it calls test_bit(HAS_DEADLINE_BIT)
+	 */
+	parent = smp_load_acquire(&fence->parent);
+	if (parent)
+		dma_fence_set_deadline(parent, deadline);
+}
+
 static const struct dma_fence_ops drm_sched_fence_ops_scheduled = {
 	.get_driver_name = drm_sched_fence_get_driver_name,
 	.get_timeline_name = drm_sched_fence_get_timeline_name,
@@ -133,6 +164,7 @@ static const struct dma_fence_ops drm_sched_fence_ops_finished = {
 	.get_driver_name = drm_sched_fence_get_driver_name,
 	.get_timeline_name = drm_sched_fence_get_timeline_name,
 	.release = drm_sched_fence_release_finished,
+	.set_deadline = drm_sched_fence_set_deadline_finished,
 };
 
 struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
@@ -147,6 +179,20 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
 }
 EXPORT_SYMBOL(to_drm_sched_fence);
 
+void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
+				struct dma_fence *fence)
+{
+	/*
+	 * smp_store_release() to ensure another thread racing us
+	 * in drm_sched_fence_set_deadline_finished() sees the
+	 * fence's parent set before test_bit()
+	 */
+	smp_store_release(&s_fence->parent, dma_fence_get(fence));
+	if (test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
+		     &s_fence->finished.flags))
+		dma_fence_set_deadline(fence, s_fence->deadline);
+}
+
 struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
 					      void *owner)
 {
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 4e6ad6e122bc..007f98c48f8d 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1019,7 +1019,7 @@ static int drm_sched_main(void *param)
 		drm_sched_fence_scheduled(s_fence);
 
 		if (!IS_ERR_OR_NULL(fence)) {
-			s_fence->parent = dma_fence_get(fence);
+			drm_sched_fence_set_parent(s_fence, fence);
 			/* Drop for original kref_init of the fence */
 			dma_fence_put(fence);
 
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 9db9e5e504ee..8b31a954a44d 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -280,6 +280,12 @@ struct drm_sched_fence {
          */
 	struct dma_fence		finished;
 
+	/**
+	 * @deadline: deadline set on &drm_sched_fence.finished which
+	 * potentially needs to be propagated to &drm_sched_fence.parent
+	 */
+	ktime_t				deadline;
+
         /**
          * @parent: the fence returned by &drm_sched_backend_ops.run_job
          * when scheduling the job on hardware. We signal the
@@ -568,6 +574,8 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
 				   enum drm_sched_priority priority);
 bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
 
+void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
+				struct dma_fence *fence);
 struct drm_sched_fence *drm_sched_fence_alloc(
 	struct drm_sched_entity *s_entity, void *owner);
 void drm_sched_fence_init(struct drm_sched_fence *fence,
-- 
2.39.1


^ permalink raw reply related	[flat|nested] 93+ messages in thread

* [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (7 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 08/14] drm/scheduler: " Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-19 16:09   ` Rob Clark
                     ` (2 more replies)
  2023-02-18 21:15 ` [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time Rob Clark
                   ` (4 subsequent siblings)
  13 siblings, 3 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, open list

From: Rob Clark <robdclark@chromium.org>

Add a new flag to let userspace provide a deadline as a hint for syncobj
and timeline waits.  This gives a hint to the driver signaling the
backing fences about how soon userspace needs it to complete work, so it
can adjust GPU frequency accordingly.  An immediate deadline can be
given to provide something equivalent to i915 "wait boost".

Signed-off-by: Rob Clark <robdclark@chromium.org>
---

I'm a bit on the fence about the addition of the DRM_CAP, but it seems
useful to give userspace a way to probe whether the kernel and driver
supports the new wait flag, especially since we have vk-common code
dealing with syncobjs.  But open to suggestions.

 drivers/gpu/drm/drm_ioctl.c   |  3 ++
 drivers/gpu/drm/drm_syncobj.c | 59 ++++++++++++++++++++++++++++-------
 include/drm/drm_drv.h         |  6 ++++
 include/uapi/drm/drm.h        | 16 ++++++++--
 4 files changed, 71 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
index 7c9d66ee917d..1c5c942cf0f9 100644
--- a/drivers/gpu/drm/drm_ioctl.c
+++ b/drivers/gpu/drm/drm_ioctl.c
@@ -254,6 +254,9 @@ static int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_
 	case DRM_CAP_SYNCOBJ_TIMELINE:
 		req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE);
 		return 0;
+	case DRM_CAP_SYNCOBJ_DEADLINE:
+		req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE);
+		return 0;
 	}
 
 	/* Other caps only work with KMS drivers */
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 0c2be8360525..61cf97972a60 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -973,7 +973,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
 						  uint32_t count,
 						  uint32_t flags,
 						  signed long timeout,
-						  uint32_t *idx)
+						  uint32_t *idx,
+						  ktime_t *deadline)
 {
 	struct syncobj_wait_entry *entries;
 	struct dma_fence *fence;
@@ -1053,6 +1054,15 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
 			drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
 	}
 
+	if (deadline) {
+		for (i = 0; i < count; ++i) {
+			fence = entries[i].fence;
+			if (!fence)
+				continue;
+			dma_fence_set_deadline(fence, *deadline);
+		}
+	}
+
 	do {
 		set_current_state(TASK_INTERRUPTIBLE);
 
@@ -1151,7 +1161,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
 				  struct drm_file *file_private,
 				  struct drm_syncobj_wait *wait,
 				  struct drm_syncobj_timeline_wait *timeline_wait,
-				  struct drm_syncobj **syncobjs, bool timeline)
+				  struct drm_syncobj **syncobjs, bool timeline,
+				  ktime_t *deadline)
 {
 	signed long timeout = 0;
 	uint32_t first = ~0;
@@ -1162,7 +1173,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
 							 NULL,
 							 wait->count_handles,
 							 wait->flags,
-							 timeout, &first);
+							 timeout, &first,
+							 deadline);
 		if (timeout < 0)
 			return timeout;
 		wait->first_signaled = first;
@@ -1172,7 +1184,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
 							 u64_to_user_ptr(timeline_wait->points),
 							 timeline_wait->count_handles,
 							 timeline_wait->flags,
-							 timeout, &first);
+							 timeout, &first,
+							 deadline);
 		if (timeout < 0)
 			return timeout;
 		timeline_wait->first_signaled = first;
@@ -1243,13 +1256,20 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_syncobj_wait *args = data;
 	struct drm_syncobj **syncobjs;
+	unsigned int possible_flags;
+	ktime_t t, *tp = NULL;
 	int ret = 0;
 
 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
 		return -EOPNOTSUPP;
 
-	if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
-			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
+	possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
+			 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT;
+
+	if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
+		possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
+
+	if (args->flags & ~possible_flags)
 		return -EINVAL;
 
 	if (args->count_handles == 0)
@@ -1262,8 +1282,13 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
 	if (ret < 0)
 		return ret;
 
+	if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
+		t = ktime_set(args->deadline_sec, args->deadline_nsec);
+		tp = &t;
+	}
+
 	ret = drm_syncobj_array_wait(dev, file_private,
-				     args, NULL, syncobjs, false);
+				     args, NULL, syncobjs, false, tp);
 
 	drm_syncobj_array_free(syncobjs, args->count_handles);
 
@@ -1276,14 +1301,21 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_syncobj_timeline_wait *args = data;
 	struct drm_syncobj **syncobjs;
+	unsigned int possible_flags;
+	ktime_t t, *tp = NULL;
 	int ret = 0;
 
 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
 		return -EOPNOTSUPP;
 
-	if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
-			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
-			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
+	possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
+			 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
+			 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE;
+
+	if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
+		possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
+
+	if (args->flags & ~possible_flags)
 		return -EINVAL;
 
 	if (args->count_handles == 0)
@@ -1296,8 +1328,13 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
 	if (ret < 0)
 		return ret;
 
+	if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
+		t = ktime_set(args->deadline_sec, args->deadline_nsec);
+		tp = &t;
+	}
+
 	ret = drm_syncobj_array_wait(dev, file_private,
-				     NULL, args, syncobjs, true);
+				     NULL, args, syncobjs, true, tp);
 
 	drm_syncobj_array_free(syncobjs, args->count_handles);
 
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index 1d76d0686b03..9aa24f097e22 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -104,6 +104,12 @@ enum drm_driver_feature {
 	 * acceleration should be handled by two drivers that are connected using auxiliary bus.
 	 */
 	DRIVER_COMPUTE_ACCEL            = BIT(7),
+	/**
+	 * @DRIVER_SYNCOBJ_DEADLINE:
+	 *
+	 * Driver supports &dma_fence_ops.set_deadline
+	 */
+	DRIVER_SYNCOBJ_DEADLINE         = BIT(8),
 
 	/* IMPORTANT: Below are all the legacy flags, add new ones above. */
 
diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
index 642808520d92..c6b85bb13810 100644
--- a/include/uapi/drm/drm.h
+++ b/include/uapi/drm/drm.h
@@ -767,6 +767,13 @@ struct drm_gem_open {
  * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects".
  */
 #define DRM_CAP_SYNCOBJ_TIMELINE	0x14
+/**
+ * DRM_CAP_SYNCOBJ_DEADLINE
+ *
+ * If set to 1, the driver supports the DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE
+ * flag on the SYNCOBJ_TIMELINE_WAIT/SYNCOBJ_WAIT ioctls.
+ */
+#define DRM_CAP_SYNCOBJ_DEADLINE	0x15
 
 /* DRM_IOCTL_GET_CAP ioctl argument type */
 struct drm_get_cap {
@@ -887,6 +894,7 @@ struct drm_syncobj_transfer {
 #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL (1 << 0)
 #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT (1 << 1)
 #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE (1 << 2) /* wait for time point to become available */
+#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE (1 << 3) /* set fence deadline from deadline_sec/deadline_nsec */
 struct drm_syncobj_wait {
 	__u64 handles;
 	/* absolute timeout */
@@ -894,7 +902,9 @@ struct drm_syncobj_wait {
 	__u32 count_handles;
 	__u32 flags;
 	__u32 first_signaled; /* only valid when not waiting all */
-	__u32 pad;
+	/* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
+	__u32 deadline_nsec;
+	__u64 deadline_sec;
 };
 
 struct drm_syncobj_timeline_wait {
@@ -906,7 +916,9 @@ struct drm_syncobj_timeline_wait {
 	__u32 count_handles;
 	__u32 flags;
 	__u32 first_signaled; /* only valid when not waiting all */
-	__u32 pad;
+	/* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
+	__u32 deadline_nsec;
+	__u64 deadline_sec;
 };
 
 
-- 
2.39.1



* [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (8 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-20  9:08   ` Pekka Paalanen
  2023-02-22 10:37   ` Luben Tuikov
  2023-02-18 21:15 ` [PATCH v4 11/14] drm/atomic-helper: Set fence deadline for vblank Rob Clark
                   ` (3 subsequent siblings)
  13 siblings, 2 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, open list

From: Rob Clark <robdclark@chromium.org>

Will be used in the next commit to set a deadline on fences that an
atomic update is waiting on.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
 include/drm/drm_vblank.h     |  1 +
 2 files changed, 33 insertions(+)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 2ff31717a3de..caf25ebb34c5 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
 }
 EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
 
+/**
+ * drm_crtc_next_vblank_time - calculate the time of the next vblank
+ * @crtc: the crtc for which to calculate next vblank time
+ * @vblanktime: pointer to time to receive the next vblank timestamp.
+ *
+ * Calculate the expected time of the next vblank based on the time of the
+ * previous vblank and the frame duration.
+ */
+int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
+{
+	unsigned int pipe = drm_crtc_index(crtc);
+	struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
+	u64 count;
+
+	if (!vblank->framedur_ns)
+		return -EINVAL;
+
+	count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
+
+	/*
+	 * If we don't get a valid count, then we probably also don't
+	 * have a valid time:
+	 */
+	if (!count)
+		return -EINVAL;
+
+	*vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_crtc_next_vblank_time);
+
 static void send_vblank_event(struct drm_device *dev,
 		struct drm_pending_vblank_event *e,
 		u64 seq, ktime_t now)
diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
index 733a3e2d1d10..a63bc2c92f3c 100644
--- a/include/drm/drm_vblank.h
+++ b/include/drm/drm_vblank.h
@@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
 u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
 u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
 				   ktime_t *vblanktime);
+int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
 void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
 			       struct drm_pending_vblank_event *e);
 void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,
-- 
2.39.1



* [PATCH v4 11/14] drm/atomic-helper: Set fence deadline for vblank
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (9 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-22 10:46   ` Luben Tuikov
  2023-02-18 21:15 ` [PATCH v4 12/14] drm/msm: Add deadline based boost support Rob Clark
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Daniel Vetter,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, open list

From: Rob Clark <robdclark@chromium.org>

For an atomic commit updating a single CRTC (ie. a pageflip) calculate
the next vblank time, and inform the fence(s) of that deadline.

v2: Comment typo fix (danvet)

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/gpu/drm/drm_atomic_helper.c | 36 +++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index d579fd8f7cb8..35a4dc714920 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -1511,6 +1511,40 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
 }
 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_enables);
 
+/*
+ * For atomic updates which touch just a single CRTC, calculate the time of the
+ * next vblank, and inform all the fences of the deadline.
+ */
+static void set_fence_deadline(struct drm_device *dev,
+			       struct drm_atomic_state *state)
+{
+	struct drm_crtc *crtc, *wait_crtc = NULL;
+	struct drm_crtc_state *new_crtc_state;
+	struct drm_plane *plane;
+	struct drm_plane_state *new_plane_state;
+	ktime_t vbltime;
+	int i;
+
+	for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) {
+		if (wait_crtc)
+			return;
+		wait_crtc = crtc;
+	}
+
+	/* If no CRTCs updated, then nothing to do: */
+	if (!wait_crtc)
+		return;
+
+	if (drm_crtc_next_vblank_time(wait_crtc, &vbltime))
+		return;
+
+	for_each_new_plane_in_state (state, plane, new_plane_state, i) {
+		if (!new_plane_state->fence)
+			continue;
+		dma_fence_set_deadline(new_plane_state->fence, vbltime);
+	}
+}
+
 /**
  * drm_atomic_helper_wait_for_fences - wait for fences stashed in plane state
  * @dev: DRM device
@@ -1540,6 +1574,8 @@ int drm_atomic_helper_wait_for_fences(struct drm_device *dev,
 	struct drm_plane_state *new_plane_state;
 	int i, ret;
 
+	set_fence_deadline(dev, state);
+
 	for_each_new_plane_in_state(state, plane, new_plane_state, i) {
 		if (!new_plane_state->fence)
 			continue;
-- 
2.39.1



* [PATCH v4 12/14] drm/msm: Add deadline based boost support
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (10 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 11/14] drm/atomic-helper: Set fence deadline for vblank Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-18 21:15 ` [PATCH v4 13/14] drm/msm: Add wait-boost support Rob Clark
  2023-02-18 21:15 ` [PATCH v4 14/14] drm/i915: Add deadline based boost support Rob Clark
  13 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Rob Clark, Abhinav Kumar,
	Dmitry Baryshkov, Sean Paul, David Airlie, Sumit Semwal,
	Christian König, open list:DRM DRIVER FOR MSM ADRENO GPU,
	open list, open list:DMA BUFFER SHARING FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK

From: Rob Clark <robdclark@chromium.org>

Track the nearest deadline on a fence timeline and set a timer to expire
shortly before to trigger boost if the fence has not yet been signaled.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/gpu/drm/msm/msm_drv.c   |  4 +-
 drivers/gpu/drm/msm/msm_fence.c | 74 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_fence.h | 20 +++++++++
 3 files changed, 97 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index aca48c868c14..be2a68f8e78d 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -1065,7 +1065,9 @@ static const struct drm_driver msm_driver = {
 				DRIVER_RENDER |
 				DRIVER_ATOMIC |
 				DRIVER_MODESET |
-				DRIVER_SYNCOBJ,
+				DRIVER_SYNCOBJ |
+				DRIVER_SYNCOBJ_DEADLINE |
+				0,
 	.open               = msm_open,
 	.postclose           = msm_postclose,
 	.lastclose          = drm_fb_helper_lastclose,
diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
index 56641408ea74..51b461f32103 100644
--- a/drivers/gpu/drm/msm/msm_fence.c
+++ b/drivers/gpu/drm/msm/msm_fence.c
@@ -8,6 +8,35 @@
 
 #include "msm_drv.h"
 #include "msm_fence.h"
+#include "msm_gpu.h"
+
+static struct msm_gpu *fctx2gpu(struct msm_fence_context *fctx)
+{
+	struct msm_drm_private *priv = fctx->dev->dev_private;
+	return priv->gpu;
+}
+
+static enum hrtimer_restart deadline_timer(struct hrtimer *t)
+{
+	struct msm_fence_context *fctx = container_of(t,
+			struct msm_fence_context, deadline_timer);
+
+	kthread_queue_work(fctx2gpu(fctx)->worker, &fctx->deadline_work);
+
+	return HRTIMER_NORESTART;
+}
+
+static void deadline_work(struct kthread_work *work)
+{
+	struct msm_fence_context *fctx = container_of(work,
+			struct msm_fence_context, deadline_work);
+
+	/* If the deadline fence has already signaled, nothing to do: */
+	if (msm_fence_completed(fctx, fctx->next_deadline_fence))
+		return;
+
+	msm_devfreq_boost(fctx2gpu(fctx), 2);
+}
 
 
 struct msm_fence_context *
@@ -36,6 +65,13 @@ msm_fence_context_alloc(struct drm_device *dev, volatile uint32_t *fenceptr,
 	fctx->completed_fence = fctx->last_fence;
 	*fctx->fenceptr = fctx->last_fence;
 
+	hrtimer_init(&fctx->deadline_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	fctx->deadline_timer.function = deadline_timer;
+
+	kthread_init_work(&fctx->deadline_work, deadline_work);
+
+	fctx->next_deadline = ktime_get();
+
 	return fctx;
 }
 
@@ -62,6 +98,8 @@ void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence)
 	spin_lock_irqsave(&fctx->spinlock, flags);
 	if (fence_after(fence, fctx->completed_fence))
 		fctx->completed_fence = fence;
+	if (msm_fence_completed(fctx, fctx->next_deadline_fence))
+		hrtimer_cancel(&fctx->deadline_timer);
 	spin_unlock_irqrestore(&fctx->spinlock, flags);
 }
 
@@ -92,10 +130,46 @@ static bool msm_fence_signaled(struct dma_fence *fence)
 	return msm_fence_completed(f->fctx, f->base.seqno);
 }
 
+static void msm_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
+{
+	struct msm_fence *f = to_msm_fence(fence);
+	struct msm_fence_context *fctx = f->fctx;
+	unsigned long flags;
+	ktime_t now;
+
+	spin_lock_irqsave(&fctx->spinlock, flags);
+	now = ktime_get();
+
+	if (ktime_after(now, fctx->next_deadline) ||
+			ktime_before(deadline, fctx->next_deadline)) {
+		fctx->next_deadline = deadline;
+		fctx->next_deadline_fence =
+			max(fctx->next_deadline_fence, (uint32_t)fence->seqno);
+
+		/*
+		 * Set timer to trigger boost 3ms before deadline, or
+		 * if we are already less than 3ms before the deadline
+		 * schedule boost work immediately.
+		 */
+		deadline = ktime_sub(deadline, ms_to_ktime(3));
+
+		if (ktime_after(now, deadline)) {
+			kthread_queue_work(fctx2gpu(fctx)->worker,
+					&fctx->deadline_work);
+		} else {
+			hrtimer_start(&fctx->deadline_timer, deadline,
+					HRTIMER_MODE_ABS);
+		}
+	}
+
+	spin_unlock_irqrestore(&fctx->spinlock, flags);
+}
+
 static const struct dma_fence_ops msm_fence_ops = {
 	.get_driver_name = msm_fence_get_driver_name,
 	.get_timeline_name = msm_fence_get_timeline_name,
 	.signaled = msm_fence_signaled,
+	.set_deadline = msm_fence_set_deadline,
 };
 
 struct dma_fence *
diff --git a/drivers/gpu/drm/msm/msm_fence.h b/drivers/gpu/drm/msm/msm_fence.h
index 7f1798c54cd1..cdaebfb94f5c 100644
--- a/drivers/gpu/drm/msm/msm_fence.h
+++ b/drivers/gpu/drm/msm/msm_fence.h
@@ -52,6 +52,26 @@ struct msm_fence_context {
 	volatile uint32_t *fenceptr;
 
 	spinlock_t spinlock;
+
+	/*
+	 * TODO this doesn't really deal with multiple deadlines, like
+	 * if userspace got multiple frames ahead.. OTOH atomic updates
+	 * don't queue, so maybe that is ok
+	 */
+
+	/** @next_deadline: Time of next deadline */
+	ktime_t next_deadline;
+
+	/**
+	 * @next_deadline_fence:
+	 *
+	 * Fence value for next pending deadline.  The deadline timer is
+	 * canceled when this fence is signaled.
+	 */
+	uint32_t next_deadline_fence;
+
+	struct hrtimer deadline_timer;
+	struct kthread_work deadline_work;
 };
 
 struct msm_fence_context * msm_fence_context_alloc(struct drm_device *dev,
-- 
2.39.1



* [PATCH v4 13/14] drm/msm: Add wait-boost support
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (11 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 12/14] drm/msm: Add deadline based boost support Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-18 21:15 ` [PATCH v4 14/14] drm/i915: Add deadline based boost support Rob Clark
  13 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Rob Clark, Abhinav Kumar,
	Dmitry Baryshkov, Sean Paul, David Airlie,
	open list:DRM DRIVER FOR MSM ADRENO GPU, open list

From: Rob Clark <robdclark@chromium.org>

Add a way for various userspace waits to signal urgency.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/gpu/drm/msm/msm_drv.c | 12 ++++++++----
 drivers/gpu/drm/msm/msm_gem.c |  5 +++++
 include/uapi/drm/msm_drm.h    | 14 ++++++++++++--
 3 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index be2a68f8e78d..b5af81a536f7 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -46,6 +46,7 @@
  * - 1.8.0 - Add MSM_BO_CACHED_COHERENT for supported GPUs (a6xx)
  * - 1.9.0 - Add MSM_SUBMIT_FENCE_SN_IN
  * - 1.10.0 - Add MSM_SUBMIT_BO_NO_IMPLICIT
+ * - 1.11.0 - Add wait boost (MSM_WAIT_FENCE_BOOST, MSM_PREP_BOOST)
  */
 #define MSM_VERSION_MAJOR	1
-#define MSM_VERSION_MINOR	10
+#define MSM_VERSION_MINOR	11
@@ -899,7 +900,7 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
 }
 
 static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id,
-		      ktime_t timeout)
+		      ktime_t timeout, uint32_t flags)
 {
 	struct dma_fence *fence;
 	int ret;
@@ -929,6 +930,9 @@ static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id,
 	if (!fence)
 		return 0;
 
+	if (flags & MSM_WAIT_FENCE_BOOST)
+		dma_fence_set_deadline(fence, ktime_get());
+
 	ret = dma_fence_wait_timeout(fence, true, timeout_to_jiffies(&timeout));
 	if (ret == 0) {
 		ret = -ETIMEDOUT;
@@ -949,8 +953,8 @@ static int msm_ioctl_wait_fence(struct drm_device *dev, void *data,
 	struct msm_gpu_submitqueue *queue;
 	int ret;
 
-	if (args->pad) {
-		DRM_ERROR("invalid pad: %08x\n", args->pad);
+	if (args->flags & ~MSM_WAIT_FENCE_FLAGS) {
+		DRM_ERROR("invalid flags: %08x\n", args->flags);
 		return -EINVAL;
 	}
 
@@ -961,7 +965,7 @@ static int msm_ioctl_wait_fence(struct drm_device *dev, void *data,
 	if (!queue)
 		return -ENOENT;
 
-	ret = wait_fence(queue, args->fence, to_ktime(args->timeout));
+	ret = wait_fence(queue, args->fence, to_ktime(args->timeout), args->flags);
 
 	msm_submitqueue_put(queue);
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 1dee0d18abbb..dd4a0d773f6e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -846,6 +846,11 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
 		op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
 	long ret;
 
+	if (op & MSM_PREP_BOOST) {
+		dma_resv_set_deadline(obj->resv, dma_resv_usage_rw(write),
+				      ktime_get());
+	}
+
 	ret = dma_resv_wait_timeout(obj->resv, dma_resv_usage_rw(write),
 				    true,  remain);
 	if (ret == 0)
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 329100016e7c..dbf0d6f43fa9 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -151,8 +151,13 @@ struct drm_msm_gem_info {
 #define MSM_PREP_READ        0x01
 #define MSM_PREP_WRITE       0x02
 #define MSM_PREP_NOSYNC      0x04
+#define MSM_PREP_BOOST       0x08
 
-#define MSM_PREP_FLAGS       (MSM_PREP_READ | MSM_PREP_WRITE | MSM_PREP_NOSYNC)
+#define MSM_PREP_FLAGS       (MSM_PREP_READ | \
+			      MSM_PREP_WRITE | \
+			      MSM_PREP_NOSYNC | \
+			      MSM_PREP_BOOST | \
+			      0)
 
 struct drm_msm_gem_cpu_prep {
 	__u32 handle;         /* in */
@@ -286,6 +291,11 @@ struct drm_msm_gem_submit {
 
 };
 
+#define MSM_WAIT_FENCE_BOOST	0x00000001
+#define MSM_WAIT_FENCE_FLAGS	( \
+		MSM_WAIT_FENCE_BOOST | \
+		0)
+
 /* The normal way to synchronize with the GPU is just to CPU_PREP on
  * a buffer if you need to access it from the CPU (other cmdstream
  * submission from same or other contexts, PAGE_FLIP ioctl, etc, all
@@ -295,7 +305,7 @@ struct drm_msm_gem_submit {
  */
 struct drm_msm_wait_fence {
 	__u32 fence;          /* in */
-	__u32 pad;
+	__u32 flags;          /* in, bitmask of MSM_WAIT_FENCE_x */
 	struct drm_msm_timespec timeout;   /* in */
 	__u32 queueid;         /* in, submitqueue id */
 };
-- 
2.39.1



* [PATCH v4 14/14] drm/i915: Add deadline based boost support
  2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
                   ` (12 preceding siblings ...)
  2023-02-18 21:15 ` [PATCH v4 13/14] drm/msm: Add wait-boost support Rob Clark
@ 2023-02-18 21:15 ` Rob Clark
  2023-02-20 15:46   ` Tvrtko Ursulin
  13 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-18 21:15 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Jani Nikula,
	Joonas Lahtinen, Tvrtko Ursulin, David Airlie, Sumit Semwal,
	Christian König, intel-gfx, open list,
	open list:DMA BUFFER SHARING FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK

From: Rob Clark <robdclark@chromium.org>

Signed-off-by: Rob Clark <robdclark@chromium.org>
---

This should probably be re-written by someone who knows the i915
request/timeline stuff better, to deal with non-immediate deadlines.
But as-is I think this should be enough to handle the case where
we want syncobj waits to trigger boost.

 drivers/gpu/drm/i915/i915_driver.c  |  2 +-
 drivers/gpu/drm/i915/i915_request.c | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
index cf1c0970ecb4..bd40b7bcb38a 100644
--- a/drivers/gpu/drm/i915/i915_driver.c
+++ b/drivers/gpu/drm/i915/i915_driver.c
@@ -1781,7 +1781,7 @@ static const struct drm_driver i915_drm_driver = {
 	.driver_features =
 	    DRIVER_GEM |
 	    DRIVER_RENDER | DRIVER_MODESET | DRIVER_ATOMIC | DRIVER_SYNCOBJ |
-	    DRIVER_SYNCOBJ_TIMELINE,
+	    DRIVER_SYNCOBJ_TIMELINE | DRIVER_SYNCOBJ_DEADLINE,
 	.release = i915_driver_release,
 	.open = i915_driver_open,
 	.lastclose = i915_driver_lastclose,
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 7503dcb9043b..44491e7e214c 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -97,6 +97,25 @@ static bool i915_fence_enable_signaling(struct dma_fence *fence)
 	return i915_request_enable_breadcrumb(to_request(fence));
 }
 
+static void i915_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
+{
+	struct i915_request *rq = to_request(fence);
+
+	if (i915_request_completed(rq))
+		return;
+
+	if (i915_request_started(rq))
+		return;
+
+	/*
+	 * TODO something more clever for deadlines that are in the
+	 * future.  I think probably track the nearest deadline in
+	 * rq->timeline and set timer to trigger boost accordingly?
+	 */
+
+	intel_rps_boost(rq);
+}
+
 static signed long i915_fence_wait(struct dma_fence *fence,
 				   bool interruptible,
 				   signed long timeout)
@@ -182,6 +201,7 @@ const struct dma_fence_ops i915_fence_ops = {
 	.signaled = i915_fence_signaled,
 	.wait = i915_fence_wait,
 	.release = i915_fence_release,
+	.set_deadline = i915_fence_set_deadline,
 };
 
 static void irq_execute_cb(struct irq_work *wrk)
-- 
2.39.1



* Re: [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits
  2023-02-18 21:15 ` [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits Rob Clark
@ 2023-02-19 16:09   ` Rob Clark
  2023-02-20  9:05   ` Pekka Paalanen
  2023-02-24  9:51   ` Tvrtko Ursulin
  2 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-19 16:09 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, open list

On Sat, Feb 18, 2023 at 1:16 PM Rob Clark <robdclark@gmail.com> wrote:
>
> From: Rob Clark <robdclark@chromium.org>
>
> Add a new flag to let userspace provide a deadline as a hint for syncobj
> and timeline waits.  This gives a hint to the driver signaling the
> backing fences about how soon userspace needs it to complete work, so it
> can adjust GPU frequency accordingly.  An immediate deadline can be
> given to provide something equivalent to i915 "wait boost".
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
>
> I'm a bit on the fence about the addition of the DRM_CAP, but it seems
> useful to give userspace a way to probe whether the kernel and driver
> supports the new wait flag, especially since we have vk-common code
> dealing with syncobjs.  But open to suggestions.

I guess an alternative would be to allow count_handles as a way to
probe the supported flags

BR,
-R

>  drivers/gpu/drm/drm_ioctl.c   |  3 ++
>  drivers/gpu/drm/drm_syncobj.c | 59 ++++++++++++++++++++++++++++-------
>  include/drm/drm_drv.h         |  6 ++++
>  include/uapi/drm/drm.h        | 16 ++++++++--
>  4 files changed, 71 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> index 7c9d66ee917d..1c5c942cf0f9 100644
> --- a/drivers/gpu/drm/drm_ioctl.c
> +++ b/drivers/gpu/drm/drm_ioctl.c
> @@ -254,6 +254,9 @@ static int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_
>         case DRM_CAP_SYNCOBJ_TIMELINE:
>                 req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE);
>                 return 0;
> +       case DRM_CAP_SYNCOBJ_DEADLINE:
> +               req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE);
> +               return 0;
>         }
>
>         /* Other caps only work with KMS drivers */
> diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
> index 0c2be8360525..61cf97972a60 100644
> --- a/drivers/gpu/drm/drm_syncobj.c
> +++ b/drivers/gpu/drm/drm_syncobj.c
> @@ -973,7 +973,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
>                                                   uint32_t count,
>                                                   uint32_t flags,
>                                                   signed long timeout,
> -                                                 uint32_t *idx)
> +                                                 uint32_t *idx,
> +                                                 ktime_t *deadline)
>  {
>         struct syncobj_wait_entry *entries;
>         struct dma_fence *fence;
> @@ -1053,6 +1054,15 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
>                         drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
>         }
>
> +       if (deadline) {
> +               for (i = 0; i < count; ++i) {
> +                       fence = entries[i].fence;
> +                       if (!fence)
> +                               continue;
> +                       dma_fence_set_deadline(fence, *deadline);
> +               }
> +       }
> +
>         do {
>                 set_current_state(TASK_INTERRUPTIBLE);
>
> @@ -1151,7 +1161,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
>                                   struct drm_file *file_private,
>                                   struct drm_syncobj_wait *wait,
>                                   struct drm_syncobj_timeline_wait *timeline_wait,
> -                                 struct drm_syncobj **syncobjs, bool timeline)
> +                                 struct drm_syncobj **syncobjs, bool timeline,
> +                                 ktime_t *deadline)
>  {
>         signed long timeout = 0;
>         uint32_t first = ~0;
> @@ -1162,7 +1173,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
>                                                          NULL,
>                                                          wait->count_handles,
>                                                          wait->flags,
> -                                                        timeout, &first);
> +                                                        timeout, &first,
> +                                                        deadline);
>                 if (timeout < 0)
>                         return timeout;
>                 wait->first_signaled = first;
> @@ -1172,7 +1184,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
>                                                          u64_to_user_ptr(timeline_wait->points),
>                                                          timeline_wait->count_handles,
>                                                          timeline_wait->flags,
> -                                                        timeout, &first);
> +                                                        timeout, &first,
> +                                                        deadline);
>                 if (timeout < 0)
>                         return timeout;
>                 timeline_wait->first_signaled = first;
> @@ -1243,13 +1256,20 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
>  {
>         struct drm_syncobj_wait *args = data;
>         struct drm_syncobj **syncobjs;
> +       unsigned possible_flags;
> +       ktime_t t, *tp = NULL;
>         int ret = 0;
>
>         if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
>                 return -EOPNOTSUPP;
>
> -       if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> -                           DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
> +       possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> +                        DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT;
> +
> +       if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
> +               possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
> +
> +       if (args->flags & ~possible_flags)
>                 return -EINVAL;
>
>         if (args->count_handles == 0)
> @@ -1262,8 +1282,13 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
>         if (ret < 0)
>                 return ret;
>
> +       if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
> +               t = ktime_set(args->deadline_sec, args->deadline_nsec);
> +               tp = &t;
> +       }
> +
>         ret = drm_syncobj_array_wait(dev, file_private,
> -                                    args, NULL, syncobjs, false);
> +                                    args, NULL, syncobjs, false, tp);
>
>         drm_syncobj_array_free(syncobjs, args->count_handles);
>
> @@ -1276,14 +1301,21 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
>  {
>         struct drm_syncobj_timeline_wait *args = data;
>         struct drm_syncobj **syncobjs;
> +       unsigned possible_flags;
> +       ktime_t t, *tp = NULL;
>         int ret = 0;
>
>         if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
>                 return -EOPNOTSUPP;
>
> -       if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> -                           DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
> -                           DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
> +       possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> +                        DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
> +                        DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE;
> +
> +       if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
> +               possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
> +
> +       if (args->flags & ~possible_flags)
>                 return -EINVAL;
>
>         if (args->count_handles == 0)
> @@ -1296,8 +1328,13 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
>         if (ret < 0)
>                 return ret;
>
> +       if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
> +               t = ktime_set(args->deadline_sec, args->deadline_nsec);
> +               tp = &t;
> +       }
> +
>         ret = drm_syncobj_array_wait(dev, file_private,
> -                                    NULL, args, syncobjs, true);
> +                                    NULL, args, syncobjs, true, tp);
>
>         drm_syncobj_array_free(syncobjs, args->count_handles);
>
> diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> index 1d76d0686b03..9aa24f097e22 100644
> --- a/include/drm/drm_drv.h
> +++ b/include/drm/drm_drv.h
> @@ -104,6 +104,12 @@ enum drm_driver_feature {
>          * acceleration should be handled by two drivers that are connected using auxiliary bus.
>          */
>         DRIVER_COMPUTE_ACCEL            = BIT(7),
> +       /**
> +        * @DRIVER_SYNCOBJ_DEADLINE:
> +        *
> +        * Driver supports &dma_fence_ops.set_deadline
> +        */
> +       DRIVER_SYNCOBJ_DEADLINE         = BIT(8),
>
>         /* IMPORTANT: Below are all the legacy flags, add new ones above. */
>
> diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
> index 642808520d92..c6b85bb13810 100644
> --- a/include/uapi/drm/drm.h
> +++ b/include/uapi/drm/drm.h
> @@ -767,6 +767,13 @@ struct drm_gem_open {
>   * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects".
>   */
>  #define DRM_CAP_SYNCOBJ_TIMELINE       0x14
> +/**
> + * DRM_CAP_SYNCOBJ_DEADLINE
> + *
> + * If set to 1, the driver supports DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE flag
> + * on the SYNCOBJ_TIMELINE_WAIT/SYNCOBJ_WAIT ioctls.
> + */
> +#define DRM_CAP_SYNCOBJ_DEADLINE       0x15
>
>  /* DRM_IOCTL_GET_CAP ioctl argument type */
>  struct drm_get_cap {
> @@ -887,6 +894,7 @@ struct drm_syncobj_transfer {
>  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL (1 << 0)
>  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT (1 << 1)
>  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE (1 << 2) /* wait for time point to become available */
> +#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE (1 << 3) /* set fence deadline based on deadline_nsec/sec */
>  struct drm_syncobj_wait {
>         __u64 handles;
>         /* absolute timeout */
> @@ -894,7 +902,9 @@ struct drm_syncobj_wait {
>         __u32 count_handles;
>         __u32 flags;
>         __u32 first_signaled; /* only valid when not waiting all */
> -       __u32 pad;
> +       /* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
> +       __u32 deadline_nsec;
> +       __u64 deadline_sec;
>  };
>
>  struct drm_syncobj_timeline_wait {
> @@ -906,7 +916,9 @@ struct drm_syncobj_timeline_wait {
>         __u32 count_handles;
>         __u32 flags;
>         __u32 first_signaled; /* only valid when not waiting all */
> -       __u32 pad;
> +       /* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
> +       __u32 deadline_nsec;
> +       __u64 deadline_sec;
>  };
>
>
> --
> 2.39.1
>

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 04/14] dma-buf/dma-resv: Add a way to set fence deadline
  2023-02-18 21:15 ` [PATCH v4 04/14] dma-buf/dma-resv: Add a way to set fence deadline Rob Clark
@ 2023-02-20  8:16   ` Christian König
  0 siblings, 0 replies; 93+ messages in thread
From: Christian König @ 2023-02-20  8:16 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: freedreno, Daniel Vetter, Michel Dänzer, Tvrtko Ursulin,
	Rodrigo Vivi, Alex Deucher, Pekka Paalanen, Simon Ser, Rob Clark,
	Sumit Semwal, Christian König,
	open list:DMA BUFFER SHARING FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

Am 18.02.23 um 22:15 schrieb Rob Clark:
> From: Rob Clark <robdclark@chromium.org>
>
> Add a way to set a deadline on remaining resv fences according to the
> requested usage.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
>   drivers/dma-buf/dma-resv.c | 19 +++++++++++++++++++
>   include/linux/dma-resv.h   |  2 ++
>   2 files changed, 21 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 1c76aed8e262..0c86f6d577ab 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -684,6 +684,25 @@ long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
>   }
>   EXPORT_SYMBOL_GPL(dma_resv_wait_timeout);
>   
> +/**
> + * dma_resv_set_deadline - Set a deadline on reservation's objects fences
> + * @obj: the reservation object
> + * @usage: controls which fences to include, see enum dma_resv_usage.
> + * @deadline: the requested deadline (MONOTONIC)

Please add an additional description line, something like "Can be called 
without holding the dma_resv lock and sets @deadline on all fences 
filtered by @usage.".

With that done the patch is Reviewed-by: Christian König 
<christian.koenig@amd.com>

Regards,
Christian.

> + */
> +void dma_resv_set_deadline(struct dma_resv *obj, enum dma_resv_usage usage,
> +			   ktime_t deadline)
> +{
> +	struct dma_resv_iter cursor;
> +	struct dma_fence *fence;
> +
> +	dma_resv_iter_begin(&cursor, obj, usage);
> +	dma_resv_for_each_fence_unlocked(&cursor, fence) {
> +		dma_fence_set_deadline(fence, deadline);
> +	}
> +	dma_resv_iter_end(&cursor);
> +}
> +EXPORT_SYMBOL_GPL(dma_resv_set_deadline);
>   
>   /**
>    * dma_resv_test_signaled - Test if a reservation object's fences have been
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index 0637659a702c..8d0e34dad446 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -479,6 +479,8 @@ int dma_resv_get_singleton(struct dma_resv *obj, enum dma_resv_usage usage,
>   int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
>   long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
>   			   bool intr, unsigned long timeout);
> +void dma_resv_set_deadline(struct dma_resv *obj, enum dma_resv_usage usage,
> +			   ktime_t deadline);
>   bool dma_resv_test_signaled(struct dma_resv *obj, enum dma_resv_usage usage);
>   void dma_resv_describe(struct dma_resv *obj, struct seq_file *seq);
>   


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl
  2023-02-18 21:15 ` [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl Rob Clark
@ 2023-02-20  8:27   ` Christian König
  2023-02-20 16:09     ` Rob Clark
  2023-02-20  8:48   ` Pekka Paalanen
  1 sibling, 1 reply; 93+ messages in thread
From: Christian König @ 2023-02-20  8:27 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Sumit Semwal,
	Gustavo Padovan, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

Am 18.02.23 um 22:15 schrieb Rob Clark:
> From: Rob Clark <robdclark@chromium.org>
>
> The initial purpose is for igt tests, but this would also be useful for
> compositors that wait until close to the vblank deadline to make decisions
> about which frame to show.
>
> The igt tests can be found at:
>
> https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadline
>
> v2: Clarify the timebase, add link to igt tests
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
>   drivers/dma-buf/sync_file.c    | 19 +++++++++++++++++++
>   include/uapi/linux/sync_file.h | 22 ++++++++++++++++++++++
>   2 files changed, 41 insertions(+)
>
> diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> index af57799c86ce..fb6ca1032885 100644
> --- a/drivers/dma-buf/sync_file.c
> +++ b/drivers/dma-buf/sync_file.c
> @@ -350,6 +350,22 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
>   	return ret;
>   }
>   
> +static int sync_file_ioctl_set_deadline(struct sync_file *sync_file,
> +					unsigned long arg)
> +{
> +	struct sync_set_deadline ts;
> +
> +	if (copy_from_user(&ts, (void __user *)arg, sizeof(ts)))
> +		return -EFAULT;
> +
> +	if (ts.pad)
> +		return -EINVAL;
> +
> +	dma_fence_set_deadline(sync_file->fence, ktime_set(ts.tv_sec, ts.tv_nsec));
> +
> +	return 0;
> +}
> +
>   static long sync_file_ioctl(struct file *file, unsigned int cmd,
>   			    unsigned long arg)
>   {
> @@ -362,6 +378,9 @@ static long sync_file_ioctl(struct file *file, unsigned int cmd,
>   	case SYNC_IOC_FILE_INFO:
>   		return sync_file_ioctl_fence_info(sync_file, arg);
>   
> +	case SYNC_IOC_SET_DEADLINE:
> +		return sync_file_ioctl_set_deadline(sync_file, arg);
> +
>   	default:
>   		return -ENOTTY;
>   	}
> diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h
> index ee2dcfb3d660..c8666580816f 100644
> --- a/include/uapi/linux/sync_file.h
> +++ b/include/uapi/linux/sync_file.h
> @@ -67,6 +67,20 @@ struct sync_file_info {
>   	__u64	sync_fence_info;
>   };
>   
> +/**
> + * struct sync_set_deadline - set a deadline on a fence
> + * @tv_sec:	seconds elapsed since epoch
> + * @tv_nsec:	nanoseconds elapsed since the time given by the tv_sec
> + * @pad:	must be zero
> + *
> + * The timebase for the deadline is CLOCK_MONOTONIC (same as vblank)
> + */
> +struct sync_set_deadline {
> +	__s64	tv_sec;
> +	__s32	tv_nsec;
> +	__u32	pad;

IIRC struct timespec defines these as time_t/long (which is horrible for 
a UAPI because of the sizeof(long) dependency); one possible 
alternative is to use 64-bit nanoseconds from CLOCK_MONOTONIC (which is 
essentially ktime).

Not 100% sure whether any preference is documented, but I think the 
latter might be better.

Either way this patch is Acked-by: Christian König 
<christian.koenig@amd.com>.

Regards,
Christian.

> +};
> +
>   #define SYNC_IOC_MAGIC		'>'
>   
>   /**
> @@ -95,4 +109,12 @@ struct sync_file_info {
>    */
>   #define SYNC_IOC_FILE_INFO	_IOWR(SYNC_IOC_MAGIC, 4, struct sync_file_info)
>   
> +
> +/**
> + * DOC: SYNC_IOC_SET_DEADLINE - set a deadline on a fence
> + *
> + * Allows userspace to set a deadline on a fence, see dma_fence_set_deadline()
> + */
> +#define SYNC_IOC_SET_DEADLINE	_IOW(SYNC_IOC_MAGIC, 5, struct sync_set_deadline)
> +
>   #endif /* _UAPI_LINUX_SYNC_H */


^ permalink raw reply	[flat|nested] 93+ messages in thread
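A hedged userspace sketch of preparing the new ioctl's argument. The struct mirrors the sync_set_deadline layout proposed above (`sync_set_deadline_v2` and `deadline_from_ns` are illustrative names); the sec/nsec split is the inverse of the ktime_set() the kernel side performs:

```c
#include <assert.h>
#include <stdint.h>

/* Local copy of the proposed struct sync_set_deadline. */
struct sync_set_deadline_v2 {
	int64_t  tv_sec;
	int32_t  tv_nsec;
	uint32_t pad;	/* must be zero; the kernel returns -EINVAL otherwise */
};

/* Split an absolute CLOCK_MONOTONIC nanosecond value into sec/nsec. */
static struct sync_set_deadline_v2 deadline_from_ns(int64_t mono_ns)
{
	struct sync_set_deadline_v2 ts = {
		.tv_sec  = mono_ns / 1000000000,
		.tv_nsec = mono_ns % 1000000000,
		.pad     = 0,
	};

	return ts;
}
```

A compositor would then issue ioctl(fence_fd, SYNC_IOC_SET_DEADLINE, &ts) on the sync_file fd shortly before its vblank decision point.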

* Re: [PATCH v4 07/14] dma-buf/sw_sync: Add fence deadline support
  2023-02-18 21:15 ` [PATCH v4 07/14] dma-buf/sw_sync: Add fence deadline support Rob Clark
@ 2023-02-20  8:29   ` Christian König
  0 siblings, 0 replies; 93+ messages in thread
From: Christian König @ 2023-02-20  8:29 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: freedreno, Daniel Vetter, Michel Dänzer, Tvrtko Ursulin,
	Rodrigo Vivi, Alex Deucher, Pekka Paalanen, Simon Ser, Rob Clark,
	Sumit Semwal, Gustavo Padovan, Christian König,
	open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

Am 18.02.23 um 22:15 schrieb Rob Clark:
> From: Rob Clark <robdclark@chromium.org>
>
> This consists of simply storing the most recent deadline, and adding an
> ioctl to retrieve the deadline.  This can be used in conjunction with
> the SET_DEADLINE ioctl on a fence fd for testing.  Ie. create various
> sw_sync fences, merge them into a fence-array, set deadline on the
> fence-array and confirm that it is propagated properly to each fence.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/dma-buf/sw_sync.c    | 58 ++++++++++++++++++++++++++++++++++++
>   drivers/dma-buf/sync_debug.h |  2 ++
>   2 files changed, 60 insertions(+)
>
> diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
> index 348b3a9170fa..50f2638cccd3 100644
> --- a/drivers/dma-buf/sw_sync.c
> +++ b/drivers/dma-buf/sw_sync.c
> @@ -52,12 +52,26 @@ struct sw_sync_create_fence_data {
>   	__s32	fence; /* fd of new fence */
>   };
>   
> +/**
> + * struct sw_sync_get_deadline - get the deadline of a sw_sync fence
> + * @tv_sec:	seconds elapsed since epoch (out)
> + * @tv_nsec:	nanoseconds elapsed since the time given by the tv_sec (out)
> + * @fence_fd:	the sw_sync fence fd (in)
> + */
> +struct sw_sync_get_deadline {
> +	__s64	tv_sec;
> +	__s32	tv_nsec;
> +	__s32	fence_fd;
> +};
> +
>   #define SW_SYNC_IOC_MAGIC	'W'
>   
>   #define SW_SYNC_IOC_CREATE_FENCE	_IOWR(SW_SYNC_IOC_MAGIC, 0,\
>   		struct sw_sync_create_fence_data)
>   
>   #define SW_SYNC_IOC_INC			_IOW(SW_SYNC_IOC_MAGIC, 1, __u32)
> +#define SW_SYNC_GET_DEADLINE		_IOWR(SW_SYNC_IOC_MAGIC, 2, \
> +		struct sw_sync_get_deadline)
>   
>   static const struct dma_fence_ops timeline_fence_ops;
>   
> @@ -171,6 +185,13 @@ static void timeline_fence_timeline_value_str(struct dma_fence *fence,
>   	snprintf(str, size, "%d", parent->value);
>   }
>   
> +static void timeline_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
> +{
> +	struct sync_pt *pt = dma_fence_to_sync_pt(fence);
> +
> +	pt->deadline = deadline;
> +}
> +
>   static const struct dma_fence_ops timeline_fence_ops = {
>   	.get_driver_name = timeline_fence_get_driver_name,
>   	.get_timeline_name = timeline_fence_get_timeline_name,
> @@ -179,6 +200,7 @@ static const struct dma_fence_ops timeline_fence_ops = {
>   	.release = timeline_fence_release,
>   	.fence_value_str = timeline_fence_value_str,
>   	.timeline_value_str = timeline_fence_timeline_value_str,
> +	.set_deadline = timeline_fence_set_deadline,
>   };
>   
>   /**
> @@ -387,6 +409,39 @@ static long sw_sync_ioctl_inc(struct sync_timeline *obj, unsigned long arg)
>   	return 0;
>   }
>   
> +static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long arg)
> +{
> +	struct sw_sync_get_deadline data;
> +	struct timespec64 ts;
> +	struct dma_fence *fence;
> +	struct sync_pt *pt;
> +
> +	if (copy_from_user(&data, (void __user *)arg, sizeof(data)))
> +		return -EFAULT;
> +
> +	if (data.tv_sec || data.tv_nsec)
> +		return -EINVAL;
> +
> +	fence = sync_file_get_fence(data.fence_fd);
> +	if (!fence)
> +		return -EINVAL;
> +
> +	pt = dma_fence_to_sync_pt(fence);
> +	if (!pt)
> +		return -EINVAL;
> +
> +	ts = ktime_to_timespec64(pt->deadline);
> +	data.tv_sec  = ts.tv_sec;
> +	data.tv_nsec = ts.tv_nsec;
> +
> +	dma_fence_put(fence);
> +
> +	if (copy_to_user((void __user *)arg, &data, sizeof(data)))
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +
>   static long sw_sync_ioctl(struct file *file, unsigned int cmd,
>   			  unsigned long arg)
>   {
> @@ -399,6 +454,9 @@ static long sw_sync_ioctl(struct file *file, unsigned int cmd,
>   	case SW_SYNC_IOC_INC:
>   		return sw_sync_ioctl_inc(obj, arg);
>   
> +	case SW_SYNC_GET_DEADLINE:
> +		return sw_sync_ioctl_get_deadline(obj, arg);
> +
>   	default:
>   		return -ENOTTY;
>   	}
> diff --git a/drivers/dma-buf/sync_debug.h b/drivers/dma-buf/sync_debug.h
> index 6176e52ba2d7..2e0146d0bdbb 100644
> --- a/drivers/dma-buf/sync_debug.h
> +++ b/drivers/dma-buf/sync_debug.h
> @@ -55,11 +55,13 @@ static inline struct sync_timeline *dma_fence_parent(struct dma_fence *fence)
>    * @base: base fence object
>    * @link: link on the sync timeline's list
>    * @node: node in the sync timeline's tree
> + * @deadline: the most recently set fence deadline
>    */
>   struct sync_pt {
>   	struct dma_fence base;
>   	struct list_head link;
>   	struct rb_node node;
> +	ktime_t deadline;
>   };
>   
>   extern const struct file_operations sw_sync_debugfs_fops;


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-18 21:15 ` [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI Rob Clark
@ 2023-02-20  8:31   ` Christian König
  2023-02-21  8:38     ` Pekka Paalanen
  2023-02-20  8:53   ` Pekka Paalanen
  1 sibling, 1 reply; 93+ messages in thread
From: Christian König @ 2023-02-20  8:31 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Sumit Semwal,
	Gustavo Padovan, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

Am 18.02.23 um 22:15 schrieb Rob Clark:
> From: Rob Clark <robdclark@chromium.org>
>
> Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> wait (as opposed to a "housekeeping" wait to know when to cleanup after
> some work has completed).  Usermode components of GPU driver stacks
> often poll() on fence fd's to know when it is safe to do things like
> free or reuse a buffer, but they can also poll() on a fence fd when
> waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> lets the kernel differentiate these two cases.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>

The code looks clean, but the different poll flags and their meaning are 
certainly not my field of expertise.

Feel free to add Acked-by: Christian König <christian.koenig@amd.com>; 
somebody with more background in this area should probably take a look as well.

Regards,
Christian.

> ---
>   drivers/dma-buf/sync_file.c | 8 ++++++++
>   1 file changed, 8 insertions(+)
>
> diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> index fb6ca1032885..c30b2085ee0a 100644
> --- a/drivers/dma-buf/sync_file.c
> +++ b/drivers/dma-buf/sync_file.c
> @@ -192,6 +192,14 @@ static __poll_t sync_file_poll(struct file *file, poll_table *wait)
>   {
>   	struct sync_file *sync_file = file->private_data;
>   
> +	/*
> +	 * The POLLPRI/EPOLLPRI flag can be used to signal that
> +	 * userspace wants the fence to signal ASAP, express this
> +	 * as an immediate deadline.
> +	 */
> +	if (poll_requested_events(wait) & EPOLLPRI)
> +		dma_fence_set_deadline(sync_file->fence, ktime_get());
> +
>   	poll_wait(file, &sync_file->wq, wait);
>   
>   	if (list_empty(&sync_file->cb.node) &&


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl
  2023-02-18 21:15 ` [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl Rob Clark
  2023-02-20  8:27   ` Christian König
@ 2023-02-20  8:48   ` Pekka Paalanen
  1 sibling, 0 replies; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-20  8:48 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Simon Ser, Rob Clark, Sumit Semwal, Gustavo Padovan,
	Christian König, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

On Sat, 18 Feb 2023 13:15:48 -0800
Rob Clark <robdclark@gmail.com> wrote:

> From: Rob Clark <robdclark@chromium.org>
> 
> The initial purpose is for igt tests, but this would also be useful for
> compositors that wait until close to the vblank deadline to make decisions
> about which frame to show.
> 
> The igt tests can be found at:
> 
> https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadline
> 
> v2: Clarify the timebase, add link to igt tests
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
>  drivers/dma-buf/sync_file.c    | 19 +++++++++++++++++++
>  include/uapi/linux/sync_file.h | 22 ++++++++++++++++++++++
>  2 files changed, 41 insertions(+)
> 
> diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> index af57799c86ce..fb6ca1032885 100644
> --- a/drivers/dma-buf/sync_file.c
> +++ b/drivers/dma-buf/sync_file.c
> @@ -350,6 +350,22 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
>  	return ret;
>  }
>  
> +static int sync_file_ioctl_set_deadline(struct sync_file *sync_file,
> +					unsigned long arg)
> +{
> +	struct sync_set_deadline ts;
> +
> +	if (copy_from_user(&ts, (void __user *)arg, sizeof(ts)))
> +		return -EFAULT;
> +
> +	if (ts.pad)
> +		return -EINVAL;
> +
> +	dma_fence_set_deadline(sync_file->fence, ktime_set(ts.tv_sec, ts.tv_nsec));
> +
> +	return 0;
> +}
> +
>  static long sync_file_ioctl(struct file *file, unsigned int cmd,
>  			    unsigned long arg)
>  {
> @@ -362,6 +378,9 @@ static long sync_file_ioctl(struct file *file, unsigned int cmd,
>  	case SYNC_IOC_FILE_INFO:
>  		return sync_file_ioctl_fence_info(sync_file, arg);
>  
> +	case SYNC_IOC_SET_DEADLINE:
> +		return sync_file_ioctl_set_deadline(sync_file, arg);
> +
>  	default:
>  		return -ENOTTY;
>  	}
> diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h
> index ee2dcfb3d660..c8666580816f 100644
> --- a/include/uapi/linux/sync_file.h
> +++ b/include/uapi/linux/sync_file.h
> @@ -67,6 +67,20 @@ struct sync_file_info {
>  	__u64	sync_fence_info;
>  };
>  
> +/**
> + * struct sync_set_deadline - set a deadline on a fence
> + * @tv_sec:	seconds elapsed since epoch
> + * @tv_nsec:	nanoseconds elapsed since the time given by the tv_sec

Hi,

should tv_sec,tv_nsec be validated like clock_settime() does?

It requires:
- tv_sec >= 0
- tv_nsec >= 0
- tv_nsec < 1e9


Thanks,
pq


> + * @pad:	must be zero
> + *
> + * The timebase for the deadline is CLOCK_MONOTONIC (same as vblank)
> + */
> +struct sync_set_deadline {
> +	__s64	tv_sec;
> +	__s32	tv_nsec;
> +	__u32	pad;
> +};
> +
>  #define SYNC_IOC_MAGIC		'>'
>  
>  /**
> @@ -95,4 +109,12 @@ struct sync_file_info {
>   */
>  #define SYNC_IOC_FILE_INFO	_IOWR(SYNC_IOC_MAGIC, 4, struct sync_file_info)
>  
> +
> +/**
> + * DOC: SYNC_IOC_SET_DEADLINE - set a deadline on a fence
> + *
> + * Allows userspace to set a deadline on a fence, see dma_fence_set_deadline()
> + */
> +#define SYNC_IOC_SET_DEADLINE	_IOW(SYNC_IOC_MAGIC, 5, struct sync_set_deadline)
> +
>  #endif /* _UAPI_LINUX_SYNC_H */



^ permalink raw reply	[flat|nested] 93+ messages in thread
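The checks Pekka suggests map onto a few comparisons. A minimal sketch, assuming the uapi were to adopt the same rules clock_settime() enforces (the helper name is illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Returns 1 if the sec/nsec pair is a valid deadline, 0 otherwise. */
static int deadline_ts_valid(int64_t tv_sec, int32_t tv_nsec)
{
	if (tv_sec < 0)
		return 0;	/* deadlines are absolute, never negative */

	if (tv_nsec < 0 || tv_nsec >= 1000000000)
		return 0;	/* nsec must stay below one second */

	return 1;
}
```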

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-18 21:15 ` [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI Rob Clark
  2023-02-20  8:31   ` Christian König
@ 2023-02-20  8:53   ` Pekka Paalanen
  2023-02-20 16:14     ` Rob Clark
  1 sibling, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-20  8:53 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Simon Ser, Rob Clark, Sumit Semwal, Gustavo Padovan,
	Christian König, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

On Sat, 18 Feb 2023 13:15:49 -0800
Rob Clark <robdclark@gmail.com> wrote:

> From: Rob Clark <robdclark@chromium.org>
> 
> Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> wait (as opposed to a "housekeeping" wait to know when to cleanup after
> some work has completed).  Usermode components of GPU driver stacks
> often poll() on fence fd's to know when it is safe to do things like
> free or reuse a buffer, but they can also poll() on a fence fd when
> waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> lets the kernel differentiate these two cases.
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>

Hi,

where would the UAPI documentation of this go?
It seems to be missing.

If a Wayland compositor is polling application fences to know which
client buffer to use in its rendering, should the compositor poll with
PRI or not? If a compositor polls with PRI, then all fences from all
applications would always be PRI. Would that be harmful somehow or
would it be beneficial?


Thanks,
pq

> ---
>  drivers/dma-buf/sync_file.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> index fb6ca1032885..c30b2085ee0a 100644
> --- a/drivers/dma-buf/sync_file.c
> +++ b/drivers/dma-buf/sync_file.c
> @@ -192,6 +192,14 @@ static __poll_t sync_file_poll(struct file *file, poll_table *wait)
>  {
>  	struct sync_file *sync_file = file->private_data;
>  
> +	/*
> +	 * The POLLPRI/EPOLLPRI flag can be used to signal that
> +	 * userspace wants the fence to signal ASAP, express this
> +	 * as an immediate deadline.
> +	 */
> +	if (poll_requested_events(wait) & EPOLLPRI)
> +		dma_fence_set_deadline(sync_file->fence, ktime_get());
> +
>  	poll_wait(file, &sync_file->wq, wait);
>  
>  	if (list_empty(&sync_file->cb.node) &&



^ permalink raw reply	[flat|nested] 93+ messages in thread
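To make the two poll modes concrete, a hedged userspace sketch: on a sync_file fd, adding POLLPRI would set an immediate deadline on the backing fence before blocking. A pipe stands in for the fence fd here so the snippet runs anywhere; only the poll() mechanics are demonstrated.

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

/*
 * Wait on fd, flagging the wait as urgent.  For a sync_file fd the
 * POLLPRI bit tells the kernel the caller is blocked on the fence (an
 * immediate deadline), as opposed to a housekeeping wait that would
 * poll with plain POLLIN.
 */
static int urgent_wait(int fd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd     = fd,
		.events = POLLPRI | POLLIN,
	};

	return poll(&pfd, 1, timeout_ms);
}
```

Under this patch's distinction, a compositor's just-before-vblank check would use POLLPRI, while the buffer-reuse bookkeeping path would keep polling with POLLIN only.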

* Re: [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits
  2023-02-18 21:15 ` [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits Rob Clark
  2023-02-19 16:09   ` Rob Clark
@ 2023-02-20  9:05   ` Pekka Paalanen
  2023-02-20 16:20     ` Rob Clark
  2023-02-24  9:51   ` Tvrtko Ursulin
  2 siblings, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-20  9:05 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Simon Ser, Rob Clark, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, open list

On Sat, 18 Feb 2023 13:15:52 -0800
Rob Clark <robdclark@gmail.com> wrote:

> From: Rob Clark <robdclark@chromium.org>
> 
> Add a new flag to let userspace provide a deadline as a hint for syncobj
> and timeline waits.  This gives a hint to the driver signaling the
> backing fences about how soon userspace needs it to complete work, so it
> can adjust GPU frequency accordingly.  An immediate deadline can be
> given to provide something equivalent to i915 "wait boost".
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> 
> I'm a bit on the fence about the addition of the DRM_CAP, but it seems
> useful to give userspace a way to probe whether the kernel and driver
> supports the new wait flag, especially since we have vk-common code
> dealing with syncobjs.  But open to suggestions.
> 
>  drivers/gpu/drm/drm_ioctl.c   |  3 ++
>  drivers/gpu/drm/drm_syncobj.c | 59 ++++++++++++++++++++++++++++-------
>  include/drm/drm_drv.h         |  6 ++++
>  include/uapi/drm/drm.h        | 16 ++++++++--
>  4 files changed, 71 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> index 7c9d66ee917d..1c5c942cf0f9 100644
> --- a/drivers/gpu/drm/drm_ioctl.c
> +++ b/drivers/gpu/drm/drm_ioctl.c
> @@ -254,6 +254,9 @@ static int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_
>  	case DRM_CAP_SYNCOBJ_TIMELINE:
>  		req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE);
>  		return 0;
> +	case DRM_CAP_SYNCOBJ_DEADLINE:
> +		req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE);

Hi,

is that a typo for DRIVER_SYNCOBJ_DEADLINE?

> +		return 0;
>  	}
>  
>  	/* Other caps only work with KMS drivers */
> diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
> index 0c2be8360525..61cf97972a60 100644
> --- a/drivers/gpu/drm/drm_syncobj.c
> +++ b/drivers/gpu/drm/drm_syncobj.c
> @@ -973,7 +973,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
>  						  uint32_t count,
>  						  uint32_t flags,
>  						  signed long timeout,
> -						  uint32_t *idx)
> +						  uint32_t *idx,
> +						  ktime_t *deadline)
>  {
>  	struct syncobj_wait_entry *entries;
>  	struct dma_fence *fence;
> @@ -1053,6 +1054,15 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
>  			drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
>  	}
>  
> +	if (deadline) {
> +		for (i = 0; i < count; ++i) {
> +			fence = entries[i].fence;
> +			if (!fence)
> +				continue;
> +			dma_fence_set_deadline(fence, *deadline);
> +		}
> +	}
> +
>  	do {
>  		set_current_state(TASK_INTERRUPTIBLE);
>  
> @@ -1151,7 +1161,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
>  				  struct drm_file *file_private,
>  				  struct drm_syncobj_wait *wait,
>  				  struct drm_syncobj_timeline_wait *timeline_wait,
> -				  struct drm_syncobj **syncobjs, bool timeline)
> +				  struct drm_syncobj **syncobjs, bool timeline,
> +				  ktime_t *deadline)
>  {
>  	signed long timeout = 0;
>  	uint32_t first = ~0;
> @@ -1162,7 +1173,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
>  							 NULL,
>  							 wait->count_handles,
>  							 wait->flags,
> -							 timeout, &first);
> +							 timeout, &first,
> +							 deadline);
>  		if (timeout < 0)
>  			return timeout;
>  		wait->first_signaled = first;
> @@ -1172,7 +1184,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
>  							 u64_to_user_ptr(timeline_wait->points),
>  							 timeline_wait->count_handles,
>  							 timeline_wait->flags,
> -							 timeout, &first);
> +							 timeout, &first,
> +							 deadline);
>  		if (timeout < 0)
>  			return timeout;
>  		timeline_wait->first_signaled = first;
> @@ -1243,13 +1256,20 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
>  {
>  	struct drm_syncobj_wait *args = data;
>  	struct drm_syncobj **syncobjs;
> +	unsigned possible_flags;
> +	ktime_t t, *tp = NULL;
>  	int ret = 0;
>  
>  	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
>  		return -EOPNOTSUPP;
>  
> -	if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> -			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
> +	possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> +			 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT;
> +
> +	if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
> +		possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
> +
> +	if (args->flags & ~possible_flags)
>  		return -EINVAL;
>  
>  	if (args->count_handles == 0)
> @@ -1262,8 +1282,13 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
>  	if (ret < 0)
>  		return ret;
>  
> +	if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
> +		t = ktime_set(args->deadline_sec, args->deadline_nsec);
> +		tp = &t;
> +	}
> +
>  	ret = drm_syncobj_array_wait(dev, file_private,
> -				     args, NULL, syncobjs, false);
> +				     args, NULL, syncobjs, false, tp);
>  
>  	drm_syncobj_array_free(syncobjs, args->count_handles);
>  
> @@ -1276,14 +1301,21 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
>  {
>  	struct drm_syncobj_timeline_wait *args = data;
>  	struct drm_syncobj **syncobjs;
> +	unsigned possible_flags;
> +	ktime_t t, *tp = NULL;
>  	int ret = 0;
>  
>  	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
>  		return -EOPNOTSUPP;
>  
> -	if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> -			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
> -			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
> +	possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> +			 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
> +			 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE;
> +
> +	if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
> +		possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
> +
> +	if (args->flags & ~possible_flags)
>  		return -EINVAL;
>  
>  	if (args->count_handles == 0)
> @@ -1296,8 +1328,13 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
>  	if (ret < 0)
>  		return ret;
>  
> +	if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
> +		t = ktime_set(args->deadline_sec, args->deadline_nsec);
> +		tp = &t;
> +	}
> +
>  	ret = drm_syncobj_array_wait(dev, file_private,
> -				     NULL, args, syncobjs, true);
> +				     NULL, args, syncobjs, true, tp);
>  
>  	drm_syncobj_array_free(syncobjs, args->count_handles);
>  
> diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> index 1d76d0686b03..9aa24f097e22 100644
> --- a/include/drm/drm_drv.h
> +++ b/include/drm/drm_drv.h
> @@ -104,6 +104,12 @@ enum drm_driver_feature {
>  	 * acceleration should be handled by two drivers that are connected using auxiliary bus.
>  	 */
>  	DRIVER_COMPUTE_ACCEL            = BIT(7),
> +	/**
> +	 * @DRIVER_SYNCOBJ_DEADLINE:
> +	 *
> +	 * Driver supports &dma_fence_ops.set_deadline
> +	 */
> +	DRIVER_SYNCOBJ_DEADLINE         = BIT(8),
>  
>  	/* IMPORTANT: Below are all the legacy flags, add new ones above. */
>  
> diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
> index 642808520d92..c6b85bb13810 100644
> --- a/include/uapi/drm/drm.h
> +++ b/include/uapi/drm/drm.h
> @@ -767,6 +767,13 @@ struct drm_gem_open {
>   * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects".
>   */
>  #define DRM_CAP_SYNCOBJ_TIMELINE	0x14
> +/**
> + * DRM_CAP_SYNCOBJ_DEADLINE
> + *
> + * If set to 1, the driver supports DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE flag
> + * on the SYNCOBJ_TIMELINE_WAIT/SYNCOBJ_WAIT ioctls.
> + */
> +#define DRM_CAP_SYNCOBJ_DEADLINE	0x15
>  
>  /* DRM_IOCTL_GET_CAP ioctl argument type */
>  struct drm_get_cap {
> @@ -887,6 +894,7 @@ struct drm_syncobj_transfer {
>  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL (1 << 0)
>  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT (1 << 1)
>  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE (1 << 2) /* wait for time point to become available */
> +#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE (1 << 3) /* set fence deadline based on deadline_nsec/sec */

Where was the UAPI documentation explaining what a fence deadline is
and what it does, again?

>  struct drm_syncobj_wait {
>  	__u64 handles;
>  	/* absolute timeout */
> @@ -894,7 +902,9 @@ struct drm_syncobj_wait {
>  	__u32 count_handles;
>  	__u32 flags;
>  	__u32 first_signaled; /* only valid when not waiting all */
> -	__u32 pad;
> +	/* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
> +	__u32 deadline_nsec;
> +	__u64 deadline_sec;
>  };
>  
>  struct drm_syncobj_timeline_wait {
> @@ -906,7 +916,9 @@ struct drm_syncobj_timeline_wait {
>  	__u32 count_handles;
>  	__u32 flags;
>  	__u32 first_signaled; /* only valid when not waiting all */
> -	__u32 pad;
> +	/* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
> +	__u32 deadline_nsec;
> +	__u64 deadline_sec;
>  };

It seems inconsistent that these sec/nsec fields are unsigned here, when
in other places they are signed. There is also the question of whether
these need to meet the clock_settime() requirements for valid values.
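For reference, clock_settime() (like the kernel's timespec64_valid()) requires 0 <= tv_nsec < NSEC_PER_SEC. A minimal sketch of what such validation would look like, assuming the deadline ioctls were to enforce the same rule (`deadline_valid` is a made-up helper name, not anything from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000L

/* Mirrors the tv_nsec range check clock_settime() applies; whether
 * the deadline ioctls should reject out-of-range values is the open
 * question raised above. */
static bool deadline_valid(int64_t sec, int64_t nsec)
{
	(void)sec; /* no range restriction on seconds here */
	return nsec >= 0 && nsec < NSEC_PER_SEC;
}
```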


Thanks,
pq


* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-18 21:15 ` [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time Rob Clark
@ 2023-02-20  9:08   ` Pekka Paalanen
  2023-02-20 15:55     ` Rob Clark
  2023-02-22 10:37   ` Luben Tuikov
  1 sibling, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-20  9:08 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Simon Ser, Rob Clark, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, open list

On Sat, 18 Feb 2023 13:15:53 -0800
Rob Clark <robdclark@gmail.com> wrote:

> From: Rob Clark <robdclark@chromium.org>
> 
> Will be used in the next commit to set a deadline on fences that an
> atomic update is waiting on.
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
>  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
>  include/drm/drm_vblank.h     |  1 +
>  2 files changed, 33 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 2ff31717a3de..caf25ebb34c5 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
>  }
>  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
>  
> +/**
> + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> + * @crtc: the crtc for which to calculate next vblank time
> + * @vblanktime: pointer to time to receive the next vblank timestamp.
> + *
> + * Calculate the expected time of the next vblank based on time of previous
> + * vblank and frame duration

Hi,

for VRR this targets the highest frame rate possible for the current
VRR mode, right?


Thanks,
pq

> + */
> +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
> +{
> +	unsigned int pipe = drm_crtc_index(crtc);
> +	struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
> +	u64 count;
> +
> +	if (!vblank->framedur_ns)
> +		return -EINVAL;
> +
> +	count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
> +
> +	/*
> +	 * If we don't get a valid count, then we probably also don't
> +	 * have a valid time:
> +	 */
> +	if (!count)
> +		return -EINVAL;
> +
> +	*vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_crtc_next_vblank_time);
> +
>  static void send_vblank_event(struct drm_device *dev,
>  		struct drm_pending_vblank_event *e,
>  		u64 seq, ktime_t now)
> diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
> index 733a3e2d1d10..a63bc2c92f3c 100644
> --- a/include/drm/drm_vblank.h
> +++ b/include/drm/drm_vblank.h
> @@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
>  u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
>  u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
>  				   ktime_t *vblanktime);
> +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
>  void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
>  			       struct drm_pending_vblank_event *e);
>  void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,



* Re: [PATCH v4 14/14] drm/i915: Add deadline based boost support
  2023-02-18 21:15 ` [PATCH v4 14/14] drm/i915: Add deadline based boost support Rob Clark
@ 2023-02-20 15:46   ` Tvrtko Ursulin
  0 siblings, 0 replies; 93+ messages in thread
From: Tvrtko Ursulin @ 2023-02-20 15:46 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Rob Clark, intel-gfx, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Rodrigo Vivi, Alex Deucher, freedreno, Sumit Semwal,
	open list:DMA BUFFER SHARING FRAMEWORK


On 18/02/2023 21:15, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> 
> This should probably be re-written by someone who knows the i915
> request/timeline stuff better, to deal with non-immediate deadlines.
> But as-is I think this should be enough to handle the case where
> we want syncobj waits to trigger boost.

Yeah, there are endless possibilities. :) But I think it is effectively 
similar enough to current waitboosting (when waits are done using the 
i915 specific ioctl). So as a first step I'll try to organize some 
internal power and performance testing, at least Chromebook focused, to 
see if modern userspace (syncobj based) even benefits and does not by 
some chance regress across the board.

Regards,

Tvrtko

> 
>   drivers/gpu/drm/i915/i915_driver.c  |  2 +-
>   drivers/gpu/drm/i915/i915_request.c | 20 ++++++++++++++++++++
>   2 files changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
> index cf1c0970ecb4..bd40b7bcb38a 100644
> --- a/drivers/gpu/drm/i915/i915_driver.c
> +++ b/drivers/gpu/drm/i915/i915_driver.c
> @@ -1781,7 +1781,7 @@ static const struct drm_driver i915_drm_driver = {
>   	.driver_features =
>   	    DRIVER_GEM |
>   	    DRIVER_RENDER | DRIVER_MODESET | DRIVER_ATOMIC | DRIVER_SYNCOBJ |
> -	    DRIVER_SYNCOBJ_TIMELINE,
> +	    DRIVER_SYNCOBJ_TIMELINE | DRIVER_SYNCOBJ_DEADLINE,
>   	.release = i915_driver_release,
>   	.open = i915_driver_open,
>   	.lastclose = i915_driver_lastclose,
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 7503dcb9043b..44491e7e214c 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -97,6 +97,25 @@ static bool i915_fence_enable_signaling(struct dma_fence *fence)
>   	return i915_request_enable_breadcrumb(to_request(fence));
>   }
>   
> +static void i915_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
> +{
> +	struct i915_request *rq = to_request(fence);
> +
> +	if (i915_request_completed(rq))
> +		return;
> +
> +	if (i915_request_started(rq))
> +		return;
> +
> +	/*
> +	 * TODO something more clever for deadlines that are in the
> +	 * future.  I think probably track the nearest deadline in
> +	 * rq->timeline and set timer to trigger boost accordingly?
> +	 */
> +
> +	intel_rps_boost(rq);
> +}
> +
>   static signed long i915_fence_wait(struct dma_fence *fence,
>   				   bool interruptible,
>   				   signed long timeout)
> @@ -182,6 +201,7 @@ const struct dma_fence_ops i915_fence_ops = {
>   	.signaled = i915_fence_signaled,
>   	.wait = i915_fence_wait,
>   	.release = i915_fence_release,
> +	.set_deadline = i915_fence_set_deadline,
>   };
>   
>   static void irq_execute_cb(struct irq_work *wrk)


* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-20  9:08   ` Pekka Paalanen
@ 2023-02-20 15:55     ` Rob Clark
  2023-02-21  8:45       ` Pekka Paalanen
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-20 15:55 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Simon Ser, Rob Clark, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, open list

On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Sat, 18 Feb 2023 13:15:53 -0800
> Rob Clark <robdclark@gmail.com> wrote:
>
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Will be used in the next commit to set a deadline on fences that an
> > atomic update is waiting on.
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > ---
> >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> >  include/drm/drm_vblank.h     |  1 +
> >  2 files changed, 33 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > index 2ff31717a3de..caf25ebb34c5 100644
> > --- a/drivers/gpu/drm/drm_vblank.c
> > +++ b/drivers/gpu/drm/drm_vblank.c
> > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> >  }
> >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> >
> > +/**
> > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > + * @crtc: the crtc for which to calculate next vblank time
> > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > + *
> > + * Calculate the expected time of the next vblank based on time of previous
> > + * vblank and frame duration
>
> Hi,
>
> for VRR this targets the highest frame rate possible for the current
> VRR mode, right?
>

It is based on vblank->framedur_ns which is in turn based on
mode->crtc_clock.  Presumably for VRR that ends up being a maximum?

BR,
-R


>
> Thanks,
> pq
>
> > + */
> > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
> > +{
> > +     unsigned int pipe = drm_crtc_index(crtc);
> > +     struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
> > +     u64 count;
> > +
> > +     if (!vblank->framedur_ns)
> > +             return -EINVAL;
> > +
> > +     count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
> > +
> > +     /*
> > +      * If we don't get a valid count, then we probably also don't
> > +      * have a valid time:
> > +      */
> > +     if (!count)
> > +             return -EINVAL;
> > +
> > +     *vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));
> > +
> > +     return 0;
> > +}
> > +EXPORT_SYMBOL(drm_crtc_next_vblank_time);
> > +
> >  static void send_vblank_event(struct drm_device *dev,
> >               struct drm_pending_vblank_event *e,
> >               u64 seq, ktime_t now)
> > diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
> > index 733a3e2d1d10..a63bc2c92f3c 100644
> > --- a/include/drm/drm_vblank.h
> > +++ b/include/drm/drm_vblank.h
> > @@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
> >  u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
> >  u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> >                                  ktime_t *vblanktime);
> > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
> >  void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
> >                              struct drm_pending_vblank_event *e);
> >  void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,
>


* Re: [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl
  2023-02-20  8:27   ` Christian König
@ 2023-02-20 16:09     ` Rob Clark
  2023-02-21  8:41       ` Pekka Paalanen
  2023-02-23  9:19       ` Christian König
  0 siblings, 2 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-20 16:09 UTC (permalink / raw)
  To: Christian König
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, Sumit Semwal,
	Gustavo Padovan, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

On Mon, Feb 20, 2023 at 12:27 AM Christian König
<christian.koenig@amd.com> wrote:
>
> Am 18.02.23 um 22:15 schrieb Rob Clark:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > The initial purpose is for igt tests, but this would also be useful for
> > compositors that wait until close to vblank deadline to make decisions
> > about which frame to show.
> >
> > The igt tests can be found at:
> >
> > https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadline
> >
> > v2: Clarify the timebase, add link to igt tests
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > ---
> >   drivers/dma-buf/sync_file.c    | 19 +++++++++++++++++++
> >   include/uapi/linux/sync_file.h | 22 ++++++++++++++++++++++
> >   2 files changed, 41 insertions(+)
> >
> > diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> > index af57799c86ce..fb6ca1032885 100644
> > --- a/drivers/dma-buf/sync_file.c
> > +++ b/drivers/dma-buf/sync_file.c
> > @@ -350,6 +350,22 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
> >       return ret;
> >   }
> >
> > +static int sync_file_ioctl_set_deadline(struct sync_file *sync_file,
> > +                                     unsigned long arg)
> > +{
> > +     struct sync_set_deadline ts;
> > +
> > +     if (copy_from_user(&ts, (void __user *)arg, sizeof(ts)))
> > +             return -EFAULT;
> > +
> > +     if (ts.pad)
> > +             return -EINVAL;
> > +
> > +     dma_fence_set_deadline(sync_file->fence, ktime_set(ts.tv_sec, ts.tv_nsec));
> > +
> > +     return 0;
> > +}
> > +
> >   static long sync_file_ioctl(struct file *file, unsigned int cmd,
> >                           unsigned long arg)
> >   {
> > @@ -362,6 +378,9 @@ static long sync_file_ioctl(struct file *file, unsigned int cmd,
> >       case SYNC_IOC_FILE_INFO:
> >               return sync_file_ioctl_fence_info(sync_file, arg);
> >
> > +     case SYNC_IOC_SET_DEADLINE:
> > +             return sync_file_ioctl_set_deadline(sync_file, arg);
> > +
> >       default:
> >               return -ENOTTY;
> >       }
> > diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h
> > index ee2dcfb3d660..c8666580816f 100644
> > --- a/include/uapi/linux/sync_file.h
> > +++ b/include/uapi/linux/sync_file.h
> > @@ -67,6 +67,20 @@ struct sync_file_info {
> >       __u64   sync_fence_info;
> >   };
> >
> > +/**
> > + * struct sync_set_deadline - set a deadline on a fence
> > + * @tv_sec:  seconds elapsed since epoch
> > + * @tv_nsec: nanoseconds elapsed since the time given by the tv_sec
> > + * @pad:     must be zero
> > + *
> > + * The timebase for the deadline is CLOCK_MONOTONIC (same as vblank)
> > + */
> > +struct sync_set_deadline {
> > +     __s64   tv_sec;
> > +     __s32   tv_nsec;
> > +     __u32   pad;
>
> IIRC struct timespec defined this as time_t/long (which is horrible for
> a UAPI because of the sizeof(long) dependency), one possible
> alternative is to use 64bit nanoseconds from CLOCK_MONOTONIC (which is
> essentially ktime).
>
> Not 100% sure if there are any preferences documented, but I think the
> latter might be better.

The original thought is that this maps directly to clock_gettime()
without extra conversion needed, and is similar to other pre-ktime_t
UAPI.  But OTOH if userspace wants to add an offset, it is maybe
better to convert completely to ns in userspace and use a u64 (as that
is what ns_to_ktime() uses).. (and OFC whatever decision here also
applies to the syncobj wait ioctls)

I'm leaning towards u64 CLOCK_MONOTONIC ns if no one has a good
argument against that.

BR,
-R

> Either way the patch is Acked-by: Christian König
> <christian.koenig@amd.com> for this patch.
>
> Regards,
> Christian.
>
> > +};
> > +
> >   #define SYNC_IOC_MAGIC              '>'
> >
> >   /**
> > @@ -95,4 +109,12 @@ struct sync_file_info {
> >    */
> >   #define SYNC_IOC_FILE_INFO  _IOWR(SYNC_IOC_MAGIC, 4, struct sync_file_info)
> >
> > +
> > +/**
> > + * DOC: SYNC_IOC_SET_DEADLINE - set a deadline on a fence
> > + *
> > + * Allows userspace to set a deadline on a fence, see dma_fence_set_deadline()
> > + */
> > +#define SYNC_IOC_SET_DEADLINE        _IOW(SYNC_IOC_MAGIC, 5, struct sync_set_deadline)
> > +
> >   #endif /* _UAPI_LINUX_SYNC_H */
>


* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-20  8:53   ` Pekka Paalanen
@ 2023-02-20 16:14     ` Rob Clark
  2023-02-21  8:37       ` Pekka Paalanen
  2023-02-21 16:48       ` Luben Tuikov
  0 siblings, 2 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-20 16:14 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Simon Ser, Rob Clark, Sumit Semwal, Gustavo Padovan,
	Christian König, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Sat, 18 Feb 2023 13:15:49 -0800
> Rob Clark <robdclark@gmail.com> wrote:
>
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> > wait (as opposed to a "housekeeping" wait to know when to clean up after
> > some work has completed).  Usermode components of GPU driver stacks
> > often poll() on fence fd's to know when it is safe to do things like
> > free or reuse a buffer, but they can also poll() on a fence fd when
> > waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> > lets the kernel differentiate these two cases.
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
>
> Hi,
>
> where would the UAPI documentation of this go?
> It seems to be missing.

Good question, I am not sure.  The poll() man page has a description,
but my usage doesn't fit that _exactly_ (but OTOH the description is a
bit vague).

> If a Wayland compositor is polling application fences to know which
> client buffer to use in its rendering, should the compositor poll with
> PRI or not? If a compositor polls with PRI, then all fences from all
> applications would always be PRI. Would that be harmful somehow or
> would it be beneficial?

I think a compositor would rather use the deadline ioctl and then poll
without PRI.  Otherwise you are giving an urgency signal to the fence
signaller which might not necessarily be needed.

The places where I expect PRI to be useful is more in mesa (things
like glFinish(), readpix, and other similar sorts of blocking APIs)
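For illustration, such an urgent wait could be an ordinary poll() with POLLPRI added to the event mask. This is a sketch of the intended usage, not code from the patch, and `wait_fence_urgent` is a made-up helper name:

```c
#include <errno.h>
#include <poll.h>

/* Hypothetical urgent wait on a sync_file fd.  With this patch,
 * POLLPRI tells the kernel to set an immediate deadline on the
 * backing fence before blocking. */
static int wait_fence_urgent(int fence_fd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd     = fence_fd,
		.events = POLLIN | POLLPRI,
	};
	int ret;

	do {
		ret = poll(&pfd, 1, timeout_ms);
	} while (ret < 0 && errno == EINTR);

	return ret; /* >0: signaled, 0: timed out, <0: error */
}
```

A "housekeeping" wait (e.g. to learn when a buffer can be reused) would make the same call with only POLLIN.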

BR,
-R

>
>
> Thanks,
> pq
>
> > ---
> >  drivers/dma-buf/sync_file.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> > index fb6ca1032885..c30b2085ee0a 100644
> > --- a/drivers/dma-buf/sync_file.c
> > +++ b/drivers/dma-buf/sync_file.c
> > @@ -192,6 +192,14 @@ static __poll_t sync_file_poll(struct file *file, poll_table *wait)
> >  {
> >       struct sync_file *sync_file = file->private_data;
> >
> > +     /*
> > +      * The POLLPRI/EPOLLPRI flag can be used to signal that
> > +      * userspace wants the fence to signal ASAP, express this
> > +      * as an immediate deadline.
> > +      */
> > +     if (poll_requested_events(wait) & EPOLLPRI)
> > +             dma_fence_set_deadline(sync_file->fence, ktime_get());
> > +
> >       poll_wait(file, &sync_file->wq, wait);
> >
> >       if (list_empty(&sync_file->cb.node) &&
>


* Re: [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits
  2023-02-20  9:05   ` Pekka Paalanen
@ 2023-02-20 16:20     ` Rob Clark
  0 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-20 16:20 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Simon Ser, Rob Clark, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, open list

On Mon, Feb 20, 2023 at 1:05 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Sat, 18 Feb 2023 13:15:52 -0800
> Rob Clark <robdclark@gmail.com> wrote:
>
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Add a new flag to let userspace provide a deadline as a hint for syncobj
> > and timeline waits.  This gives a hint to the driver signaling the
> > backing fences about how soon userspace needs it to complete work, so it
> > can adjust GPU frequency accordingly.  An immediate deadline can be
> > given to provide something equivalent to i915 "wait boost".
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > ---
> >
> > I'm a bit on the fence about the addition of the DRM_CAP, but it seems
> > useful to give userspace a way to probe whether the kernel and driver
> > supports the new wait flag, especially since we have vk-common code
> > dealing with syncobjs.  But open to suggestions.
> >
> >  drivers/gpu/drm/drm_ioctl.c   |  3 ++
> >  drivers/gpu/drm/drm_syncobj.c | 59 ++++++++++++++++++++++++++++-------
> >  include/drm/drm_drv.h         |  6 ++++
> >  include/uapi/drm/drm.h        | 16 ++++++++--
> >  4 files changed, 71 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> > index 7c9d66ee917d..1c5c942cf0f9 100644
> > --- a/drivers/gpu/drm/drm_ioctl.c
> > +++ b/drivers/gpu/drm/drm_ioctl.c
> > @@ -254,6 +254,9 @@ static int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_
> >       case DRM_CAP_SYNCOBJ_TIMELINE:
> >               req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE);
> >               return 0;
> > +     case DRM_CAP_SYNCOBJ_DEADLINE:
> > +             req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE);
>
> Hi,
>
> is that a typo for DRIVER_SYNCOBJ_DEADLINE?

Ahh, yes, that is a typo.. but I'm thinking of dropping the cap and
allowing count_handles==0 instead as a way for userspace to probe
whether the kernel supports the new ioctl flag/fields.
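If the probe goes the count_handles==0 route, userspace-side detection might look roughly like the sketch below. The flag value and struct layout are copied from this patch revision, the ioctl request number is left as a parameter (a real implementation would take it from libdrm), and the whole scheme is hypothetical until the uapi lands:

```c
#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Values per this patch revision -- NOT a released uapi header */
#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE (1 << 3)

struct drm_syncobj_wait_v4 {
	uint64_t handles;
	int64_t  timeout_nsec;   /* absolute timeout */
	uint32_t count_handles;
	uint32_t flags;
	uint32_t first_signaled; /* only valid when not waiting all */
	uint32_t deadline_nsec;
	uint64_t deadline_sec;
};

/* Returns 1 if the kernel accepts the flag, 0 if it rejects it with
 * EINVAL (old kernel or no driver support), -1 on other errors. */
static int probe_wait_deadline(int drm_fd, unsigned long wait_ioctl)
{
	struct drm_syncobj_wait_v4 args = {
		.count_handles = 0,
		.flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE,
	};

	if (ioctl(drm_fd, wait_ioctl, &args) == 0)
		return 1;
	return (errno == EINVAL) ? 0 : -1;
}
```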

> > +             return 0;
> >       }
> >
> >       /* Other caps only work with KMS drivers */
> > diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
> > index 0c2be8360525..61cf97972a60 100644
> > --- a/drivers/gpu/drm/drm_syncobj.c
> > +++ b/drivers/gpu/drm/drm_syncobj.c
> > @@ -973,7 +973,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
> >                                                 uint32_t count,
> >                                                 uint32_t flags,
> >                                                 signed long timeout,
> > -                                               uint32_t *idx)
> > +                                               uint32_t *idx,
> > +                                               ktime_t *deadline)
> >  {
> >       struct syncobj_wait_entry *entries;
> >       struct dma_fence *fence;
> > @@ -1053,6 +1054,15 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
> >                       drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
> >       }
> >
> > +     if (deadline) {
> > +             for (i = 0; i < count; ++i) {
> > +                     fence = entries[i].fence;
> > +                     if (!fence)
> > +                             continue;
> > +                     dma_fence_set_deadline(fence, *deadline);
> > +             }
> > +     }
> > +
> >       do {
> >               set_current_state(TASK_INTERRUPTIBLE);
> >
> > @@ -1151,7 +1161,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
> >                                 struct drm_file *file_private,
> >                                 struct drm_syncobj_wait *wait,
> >                                 struct drm_syncobj_timeline_wait *timeline_wait,
> > -                               struct drm_syncobj **syncobjs, bool timeline)
> > +                               struct drm_syncobj **syncobjs, bool timeline,
> > +                               ktime_t *deadline)
> >  {
> >       signed long timeout = 0;
> >       uint32_t first = ~0;
> > @@ -1162,7 +1173,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
> >                                                        NULL,
> >                                                        wait->count_handles,
> >                                                        wait->flags,
> > -                                                      timeout, &first);
> > +                                                      timeout, &first,
> > +                                                      deadline);
> >               if (timeout < 0)
> >                       return timeout;
> >               wait->first_signaled = first;
> > @@ -1172,7 +1184,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
> >                                                        u64_to_user_ptr(timeline_wait->points),
> >                                                        timeline_wait->count_handles,
> >                                                        timeline_wait->flags,
> > -                                                      timeout, &first);
> > +                                                      timeout, &first,
> > +                                                      deadline);
> >               if (timeout < 0)
> >                       return timeout;
> >               timeline_wait->first_signaled = first;
> > @@ -1243,13 +1256,20 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
> >  {
> >       struct drm_syncobj_wait *args = data;
> >       struct drm_syncobj **syncobjs;
> > +     unsigned possible_flags;
> > +     ktime_t t, *tp = NULL;
> >       int ret = 0;
> >
> >       if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
> >               return -EOPNOTSUPP;
> >
> > -     if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> > -                         DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
> > +     possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> > +                      DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT;
> > +
> > +     if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
> > +             possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
> > +
> > +     if (args->flags & ~possible_flags)
> >               return -EINVAL;
> >
> >       if (args->count_handles == 0)
> > @@ -1262,8 +1282,13 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
> >       if (ret < 0)
> >               return ret;
> >
> > +     if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
> > +             t = ktime_set(args->deadline_sec, args->deadline_nsec);
> > +             tp = &t;
> > +     }
> > +
> >       ret = drm_syncobj_array_wait(dev, file_private,
> > -                                  args, NULL, syncobjs, false);
> > +                                  args, NULL, syncobjs, false, tp);
> >
> >       drm_syncobj_array_free(syncobjs, args->count_handles);
> >
> > @@ -1276,14 +1301,21 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
> >  {
> >       struct drm_syncobj_timeline_wait *args = data;
> >       struct drm_syncobj **syncobjs;
> > +     unsigned possible_flags;
> > +     ktime_t t, *tp = NULL;
> >       int ret = 0;
> >
> >       if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
> >               return -EOPNOTSUPP;
> >
> > -     if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> > -                         DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
> > -                         DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
> > +     possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> > +                      DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
> > +                      DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE;
> > +
> > +     if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
> > +             possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
> > +
> > +     if (args->flags & ~possible_flags)
> >               return -EINVAL;
> >
> >       if (args->count_handles == 0)
> > @@ -1296,8 +1328,13 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
> >       if (ret < 0)
> >               return ret;
> >
> > +     if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
> > +             t = ktime_set(args->deadline_sec, args->deadline_nsec);
> > +             tp = &t;
> > +     }
> > +
> >       ret = drm_syncobj_array_wait(dev, file_private,
> > -                                  NULL, args, syncobjs, true);
> > +                                  NULL, args, syncobjs, true, tp);
> >
> >       drm_syncobj_array_free(syncobjs, args->count_handles);
> >
> > diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> > index 1d76d0686b03..9aa24f097e22 100644
> > --- a/include/drm/drm_drv.h
> > +++ b/include/drm/drm_drv.h
> > @@ -104,6 +104,12 @@ enum drm_driver_feature {
> >        * acceleration should be handled by two drivers that are connected using auxiliary bus.
> >        */
> >       DRIVER_COMPUTE_ACCEL            = BIT(7),
> > +     /**
> > +      * @DRIVER_SYNCOBJ_DEADLINE:
> > +      *
> > +      * Driver supports &dma_fence_ops.set_deadline
> > +      */
> > +     DRIVER_SYNCOBJ_DEADLINE         = BIT(8),
> >
> >       /* IMPORTANT: Below are all the legacy flags, add new ones above. */
> >
> > diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
> > index 642808520d92..c6b85bb13810 100644
> > --- a/include/uapi/drm/drm.h
> > +++ b/include/uapi/drm/drm.h
> > @@ -767,6 +767,13 @@ struct drm_gem_open {
> >   * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects".
> >   */
> >  #define DRM_CAP_SYNCOBJ_TIMELINE     0x14
> > +/**
> > + * DRM_CAP_SYNCOBJ_DEADLINE
> > + *
> > + * If set to 1, the driver supports DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE flag
> > + * on the SYNCOBJ_TIMELINE_WAIT/SYNCOBJ_WAIT ioctls.
> > + */
> > +#define DRM_CAP_SYNCOBJ_DEADLINE     0x15
> >
> >  /* DRM_IOCTL_GET_CAP ioctl argument type */
> >  struct drm_get_cap {
> > @@ -887,6 +894,7 @@ struct drm_syncobj_transfer {
> >  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL (1 << 0)
> >  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT (1 << 1)
> >  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE (1 << 2) /* wait for time point to become available */
> > +#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE (1 << 3) /* set fence deadline based on deadline_nsec/sec */
>
> Where was the UAPI documentation explaining what a fence deadline is
> and what it does, again?
>
> >  struct drm_syncobj_wait {
> >       __u64 handles;
> >       /* absolute timeout */
> > @@ -894,7 +902,9 @@ struct drm_syncobj_wait {
> >       __u32 count_handles;
> >       __u32 flags;
> >       __u32 first_signaled; /* only valid when not waiting all */
> > -     __u32 pad;
> > +     /* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
> > +     __u32 deadline_nsec;
> > +     __u64 deadline_sec;
> >  };
> >
> >  struct drm_syncobj_timeline_wait {
> > @@ -906,7 +916,9 @@ struct drm_syncobj_timeline_wait {
> >       __u32 count_handles;
> >       __u32 flags;
> >       __u32 first_signaled; /* only valid when not waiting all */
> > -     __u32 pad;
> > +     /* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
> > +     __u32 deadline_nsec;
> > +     __u64 deadline_sec;
> >  };
>
> It seems inconsistent that these sec,nsec are here unsigned, when in
> other places they are signed. There is also the question if these need
> to meet clock_settime() requirements of valid values.
>

Yes, should have been signed.  But I think Christian has convinced me
to use 'u64 ns' (absolute monotonic) instead for the sync_file ioctl.
And it would make sense to use the same here.
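
For reference, the conversion userspace would do under the "u64 absolute
CLOCK_MONOTONIC ns" encoding being discussed could look like the sketch
below; the helper names are illustrative, not part of the series:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative helpers for a "u64 absolute CLOCK_MONOTONIC ns"
 * deadline encoding. Not part of the posted patches.
 */

/* Convert a (sec, nsec) pair, as returned by clock_gettime(), to u64 ns. */
static uint64_t deadline_to_ns(int64_t sec, int64_t nsec)
{
	/* Caller must ensure sec >= 0 and 0 <= nsec < 1000000000. */
	return (uint64_t)sec * 1000000000ull + (uint64_t)nsec;
}

/* Split a u64 ns deadline back into a (sec, nsec) pair. */
static void ns_to_deadline(uint64_t ns, int64_t *sec, int64_t *nsec)
{
	*sec = (int64_t)(ns / 1000000000ull);
	*nsec = (int64_t)(ns % 1000000000ull);
}
```

As pq notes further down in the thread, any documentation would need to
spell out the signedness and range requirements of this conversion.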

BR,
-R

>
> Thanks,
> pq

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-20 16:14     ` Rob Clark
@ 2023-02-21  8:37       ` Pekka Paalanen
  2023-02-21 16:01         ` Sebastian Wick
  2023-02-21 16:48       ` Luben Tuikov
  1 sibling, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-21  8:37 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Simon Ser, Rob Clark, Sumit Semwal, Gustavo Padovan,
	Christian König, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

[-- Attachment #1: Type: text/plain, Size: 3234 bytes --]

On Mon, 20 Feb 2023 08:14:47 -0800
Rob Clark <robdclark@gmail.com> wrote:

> On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >
> > On Sat, 18 Feb 2023 13:15:49 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >  
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> > > wait (as opposed to a "housekeeping" wait to know when to cleanup after
> > > some work has completed).  Usermode components of GPU driver stacks
> > > often poll() on fence fd's to know when it is safe to do things like
> > > free or reuse a buffer, but they can also poll() on a fence fd when
> > > waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> > > lets the kernel differentiate these two cases.
> > >
> > > Signed-off-by: Rob Clark <robdclark@chromium.org>  
> >
> > Hi,
> >
> > where would the UAPI documentation of this go?
> > It seems to be missing.  
> 
> Good question, I am not sure.  The poll() man page has a description,
> but my usage doesn't fit that _exactly_ (but OTOH the description is a
> bit vague).
> 
> > If a Wayland compositor is polling application fences to know which
> > client buffer to use in its rendering, should the compositor poll with
> > PRI or not? If a compositor polls with PRI, then all fences from all
> > applications would always be PRI. Would that be harmful somehow or
> > would it be beneficial?  
> 
> I think a compositor would rather use the deadline ioctl and then poll
> without PRI.  Otherwise you are giving an urgency signal to the fence
> signaller which might not necessarily be needed.
> 
> The places where I expect PRI to be useful is more in mesa (things
> like glFinish(), readpix, and other similar sorts of blocking APIs)

Sounds good. Docs... ;-)

Hmm, so a compositor should set the deadline when it processes the
wl_surface.commit, and not when it actually starts repainting, to give
time for the driver to react and the GPU to do some more work. The
deadline would be the time when the compositor starts its repaint, so
it knows if the buffer is ready or not.
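
That policy could be sketched roughly as below; the fixed repaint window
and the function name are illustrative assumptions, not anything from
the series:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the policy described above: a compositor handling
 * wl_surface.commit sets the fence deadline to the moment its next
 * repaint begins, not to the vblank itself. The repaint window is
 * assumed to be a fixed offset before the targeted vblank; all
 * values here are illustrative.
 */
static uint64_t repaint_deadline_ns(uint64_t next_vblank_ns,
				    uint64_t repaint_window_ns)
{
	/* Repaint starts repaint_window_ns before the targeted vblank. */
	if (repaint_window_ns > next_vblank_ns)
		return 0; /* clamp: repaint is effectively "now" */
	return next_vblank_ns - repaint_window_ns;
}
```

For example, with a vblank predicted at 33.33 ms and a 4 ms repaint
window, the fence deadline would land at ~29.33 ms.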


Thanks,
pq


> 
> BR,
> -R
> 
> >
> >
> > Thanks,
> > pq
> >  
> > > ---
> > >  drivers/dma-buf/sync_file.c | 8 ++++++++
> > >  1 file changed, 8 insertions(+)
> > >
> > > diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> > > index fb6ca1032885..c30b2085ee0a 100644
> > > --- a/drivers/dma-buf/sync_file.c
> > > +++ b/drivers/dma-buf/sync_file.c
> > > @@ -192,6 +192,14 @@ static __poll_t sync_file_poll(struct file *file, poll_table *wait)
> > >  {
> > >       struct sync_file *sync_file = file->private_data;
> > >
> > > +     /*
> > > +      * The POLLPRI/EPOLLPRI flag can be used to signal that
> > > +      * userspace wants the fence to signal ASAP, express this
> > > +      * as an immediate deadline.
> > > +      */
> > > +     if (poll_requested_events(wait) & EPOLLPRI)
> > > +             dma_fence_set_deadline(sync_file->fence, ktime_get());
> > > +
> > >       poll_wait(file, &sync_file->wq, wait);
> > >
> > >       if (list_empty(&sync_file->cb.node) &&  
> >  




* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-20  8:31   ` Christian König
@ 2023-02-21  8:38     ` Pekka Paalanen
  0 siblings, 0 replies; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-21  8:38 UTC (permalink / raw)
  To: Christian König
  Cc: Rob Clark, dri-devel, freedreno, Daniel Vetter,
	Christian König, Michel Dänzer, Tvrtko Ursulin,
	Rodrigo Vivi, Alex Deucher, Simon Ser, Rob Clark, Sumit Semwal,
	Gustavo Padovan, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list


On Mon, 20 Feb 2023 09:31:41 +0100
Christian König <christian.koenig@amd.com> wrote:

> Am 18.02.23 um 22:15 schrieb Rob Clark:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> > wait (as opposed to a "housekeeping" wait to know when to cleanup after
> > some work has completed).  Usermode components of GPU driver stacks
> > often poll() on fence fd's to know when it is safe to do things like
> > free or reuse a buffer, but they can also poll() on a fence fd when
> > waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> > lets the kernel differentiate these two cases.
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>  
> 
> The code looks clean, but the different poll flags and their meaning are 
> certainly not my field of expertise.


A good question. epoll_ctl manual refers to poll(2) which says:

       POLLPRI
              There is some exceptional condition on the file descriptor.  Possibilities include:

              • There is out-of-band data on a TCP socket (see tcp(7)).

              • A pseudoterminal master in packet mode has seen a state change on the slave (see ioctl_tty(2)).

              • A cgroup.events file has been modified (see cgroups(7)).

It seems to be about selecting what events will trigger the poll,
more than how (fast) to poll. At least it is not documented to be
ignored in 'events', so I guess it should work.
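
To illustrate, an "urgent" wait from userspace is just a normal poll()
with POLLPRI added to events. The sketch below uses a pipe to stand in
for a sync_file fd so it is self-contained; with a real fence fd the
kernel would take the new EPOLLPRI path from the patch:

```c
#include <assert.h>
#include <poll.h>
#include <string.h>
#include <unistd.h>

/*
 * Sketch of the userspace side of an urgent fence wait. The helper
 * name is illustrative. POLLPRI in events is what the patch above
 * interprets as "signal ASAP" (an immediate deadline).
 */
static short urgent_wait(int fd, int timeout_ms)
{
	struct pollfd pfd;

	memset(&pfd, 0, sizeof(pfd));
	pfd.fd = fd;
	pfd.events = POLLIN | POLLPRI;	/* POLLPRI marks the wait urgent */

	if (poll(&pfd, 1, timeout_ms) <= 0)
		return 0;
	return pfd.revents;
}
```

A housekeeping wait would simply omit POLLPRI from events, which is the
distinction the patch relies on.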


Thanks,
pq

> Feel free to add Acked-by: Christian König <christian.koenig@amd.com>, 
> somebody with more background in this should probably take a look as well.
> 
> Regards,
> Christian.
> 
> > ---
> >   drivers/dma-buf/sync_file.c | 8 ++++++++
> >   1 file changed, 8 insertions(+)
> >
> > diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> > index fb6ca1032885..c30b2085ee0a 100644
> > --- a/drivers/dma-buf/sync_file.c
> > +++ b/drivers/dma-buf/sync_file.c
> > @@ -192,6 +192,14 @@ static __poll_t sync_file_poll(struct file *file, poll_table *wait)
> >   {
> >   	struct sync_file *sync_file = file->private_data;
> >   
> > +	/*
> > +	 * The POLLPRI/EPOLLPRI flag can be used to signal that
> > +	 * userspace wants the fence to signal ASAP, express this
> > +	 * as an immediate deadline.
> > +	 */
> > +	if (poll_requested_events(wait) & EPOLLPRI)
> > +		dma_fence_set_deadline(sync_file->fence, ktime_get());
> > +
> >   	poll_wait(file, &sync_file->wq, wait);
> >   
> >   	if (list_empty(&sync_file->cb.node) &&  
> 




* Re: [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl
  2023-02-20 16:09     ` Rob Clark
@ 2023-02-21  8:41       ` Pekka Paalanen
  2023-02-23  9:19       ` Christian König
  1 sibling, 0 replies; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-21  8:41 UTC (permalink / raw)
  To: Rob Clark
  Cc: Christian König, dri-devel, freedreno, Daniel Vetter,
	Christian König, Michel Dänzer, Tvrtko Ursulin,
	Rodrigo Vivi, Alex Deucher, Simon Ser, Rob Clark, Sumit Semwal,
	Gustavo Padovan, open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list


On Mon, 20 Feb 2023 08:09:04 -0800
Rob Clark <robdclark@gmail.com> wrote:

> On Mon, Feb 20, 2023 at 12:27 AM Christian König
> <christian.koenig@amd.com> wrote:
> >
> > Am 18.02.23 um 22:15 schrieb Rob Clark:  
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > The initial purpose is for igt tests, but this would also be useful for
> > > compositors that wait until close to vblank deadline to make decisions
> > > about which frame to show.
> > >
> > > The igt tests can be found at:
> > >
> > > https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadline
> > >
> > > v2: Clarify the timebase, add link to igt tests
> > >
> > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > ---
> > >   drivers/dma-buf/sync_file.c    | 19 +++++++++++++++++++
> > >   include/uapi/linux/sync_file.h | 22 ++++++++++++++++++++++
> > >   2 files changed, 41 insertions(+)
> > >
> > > diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> > > index af57799c86ce..fb6ca1032885 100644
> > > --- a/drivers/dma-buf/sync_file.c
> > > +++ b/drivers/dma-buf/sync_file.c
> > > @@ -350,6 +350,22 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
> > >       return ret;
> > >   }
> > >
> > > +static int sync_file_ioctl_set_deadline(struct sync_file *sync_file,
> > > +                                     unsigned long arg)
> > > +{
> > > +     struct sync_set_deadline ts;
> > > +
> > > +     if (copy_from_user(&ts, (void __user *)arg, sizeof(ts)))
> > > +             return -EFAULT;
> > > +
> > > +     if (ts.pad)
> > > +             return -EINVAL;
> > > +
> > > +     dma_fence_set_deadline(sync_file->fence, ktime_set(ts.tv_sec, ts.tv_nsec));
> > > +
> > > +     return 0;
> > > +}
> > > +
> > >   static long sync_file_ioctl(struct file *file, unsigned int cmd,
> > >                           unsigned long arg)
> > >   {
> > > @@ -362,6 +378,9 @@ static long sync_file_ioctl(struct file *file, unsigned int cmd,
> > >       case SYNC_IOC_FILE_INFO:
> > >               return sync_file_ioctl_fence_info(sync_file, arg);
> > >
> > > +     case SYNC_IOC_SET_DEADLINE:
> > > +             return sync_file_ioctl_set_deadline(sync_file, arg);
> > > +
> > >       default:
> > >               return -ENOTTY;
> > >       }
> > > diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h
> > > index ee2dcfb3d660..c8666580816f 100644
> > > --- a/include/uapi/linux/sync_file.h
> > > +++ b/include/uapi/linux/sync_file.h
> > > @@ -67,6 +67,20 @@ struct sync_file_info {
> > >       __u64   sync_fence_info;
> > >   };
> > >
> > > +/**
> > > + * struct sync_set_deadline - set a deadline on a fence
> > > + * @tv_sec:  seconds elapsed since epoch
> > > + * @tv_nsec: nanoseconds elapsed since the time given by the tv_sec
> > > + * @pad:     must be zero
> > > + *
> > > + * The timebase for the deadline is CLOCK_MONOTONIC (same as vblank)
> > > + */
> > > +struct sync_set_deadline {
> > > +     __s64   tv_sec;
> > > +     __s32   tv_nsec;
> > > +     __u32   pad;  
> >
> > IIRC struct timespec defined this as time_t/long (which is horrible for
> > an UAPI because of the sizeof(long) dependency), one possible
> > alternative is to use 64bit nanoseconds from CLOCK_MONOTONIC (which is
> > essentially ktime).
> >
> > Not 100% sure if there is any preferences documented, but I think the
> > later might be better.  
> 
> The original thought is that this maps directly to clock_gettime()
> without extra conversion needed, and is similar to other pre-ktime_t
> UAPI.  But OTOH if userspace wants to add an offset, it is maybe
> better to convert completely to ns in userspace and use a u64 (as that
> is what ns_to_ktime() uses).. (and OFC whatever decision here also
> applies to the syncobj wait ioctls)
> 
> I'm leaning towards u64 CLOCK_MONOTONIC ns if no one has a good
> argument against that.

No, no good argument against that, it's just different from any other
UAPI so far, but a new style has to start somewhere. It's good for 584
years after the epoch.

Just make sure the documentation is explicit on how struct timespec is
converted to and from u64 (signedness issues, overflow and whatnot).


Thanks,
pq


> 
> BR,
> -R
> 
> > Either way the patch is Acked-by: Christian König
> > <christian.koenig@amd.com> for this patch.
> >
> > Regards,
> > Christian.
> >  
> > > +};
> > > +
> > >   #define SYNC_IOC_MAGIC              '>'
> > >
> > >   /**
> > > @@ -95,4 +109,12 @@ struct sync_file_info {
> > >    */
> > >   #define SYNC_IOC_FILE_INFO  _IOWR(SYNC_IOC_MAGIC, 4, struct sync_file_info)
> > >
> > > +
> > > +/**
> > > + * DOC: SYNC_IOC_SET_DEADLINE - set a deadline on a fence
> > > + *
> > > + * Allows userspace to set a deadline on a fence, see dma_fence_set_deadline()
> > > + */
> > > +#define SYNC_IOC_SET_DEADLINE        _IOW(SYNC_IOC_MAGIC, 5, struct sync_set_deadline)
> > > +
> > >   #endif /* _UAPI_LINUX_SYNC_H */  
> >  




* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-20 15:55     ` Rob Clark
@ 2023-02-21  8:45       ` Pekka Paalanen
  2023-02-21 13:01         ` Ville Syrjälä
  0 siblings, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-21  8:45 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Simon Ser, Rob Clark, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, open list


On Mon, 20 Feb 2023 07:55:41 -0800
Rob Clark <robdclark@gmail.com> wrote:

> On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >
> > On Sat, 18 Feb 2023 13:15:53 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >  
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > Will be used in the next commit to set a deadline on fences that an
> > > atomic update is waiting on.
> > >
> > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > ---
> > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > >  include/drm/drm_vblank.h     |  1 +
> > >  2 files changed, 33 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > index 2ff31717a3de..caf25ebb34c5 100644
> > > --- a/drivers/gpu/drm/drm_vblank.c
> > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > >  }
> > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > >
> > > +/**
> > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > + * @crtc: the crtc for which to calculate next vblank time
> > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > + *
> > > + * Calculate the expected time of the next vblank based on time of previous
> > > + * vblank and frame duration  
> >
> > Hi,
> >
> > for VRR this targets the highest frame rate possible for the current
> > VRR mode, right?
> >  
> 
> It is based on vblank->framedur_ns which is in turn based on
> mode->crtc_clock.  Presumably for VRR that ends up being a maximum?

I don't know. :-)

You need a number of clock cycles in addition to the clock frequency,
and that could still be minimum, maximum, the last realized one, ...

VRR works by adjusting the front porch length IIRC.


Thanks,
pq

> BR,
> -R
> 
> 
> >
> > Thanks,
> > pq
> >  
> > > + */
> > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
> > > +{
> > > +     unsigned int pipe = drm_crtc_index(crtc);
> > > +     struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
> > > +     u64 count;
> > > +
> > > +     if (!vblank->framedur_ns)
> > > +             return -EINVAL;
> > > +
> > > +     count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
> > > +
> > > +     /*
> > > +      * If we don't get a valid count, then we probably also don't
> > > +      * have a valid time:
> > > +      */
> > > +     if (!count)
> > > +             return -EINVAL;
> > > +
> > > +     *vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));
> > > +
> > > +     return 0;
> > > +}
> > > +EXPORT_SYMBOL(drm_crtc_next_vblank_time);
> > > +
> > >  static void send_vblank_event(struct drm_device *dev,
> > >               struct drm_pending_vblank_event *e,
> > >               u64 seq, ktime_t now)
> > > diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
> > > index 733a3e2d1d10..a63bc2c92f3c 100644
> > > --- a/include/drm/drm_vblank.h
> > > +++ b/include/drm/drm_vblank.h
> > > @@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
> > >  u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
> > >  u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > >                                  ktime_t *vblanktime);
> > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
> > >  void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
> > >                              struct drm_pending_vblank_event *e);
> > >  void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,  
> >  




* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21  8:45       ` Pekka Paalanen
@ 2023-02-21 13:01         ` Ville Syrjälä
  2023-02-21 13:11           ` Pekka Paalanen
                             ` (2 more replies)
  0 siblings, 3 replies; 93+ messages in thread
From: Ville Syrjälä @ 2023-02-21 13:01 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Rob Clark, Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> On Mon, 20 Feb 2023 07:55:41 -0800
> Rob Clark <robdclark@gmail.com> wrote:
> 
> > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >
> > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > Rob Clark <robdclark@gmail.com> wrote:
> > >  
> > > > From: Rob Clark <robdclark@chromium.org>
> > > >
> > > > Will be used in the next commit to set a deadline on fences that an
> > > > atomic update is waiting on.
> > > >
> > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > ---
> > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > >  include/drm/drm_vblank.h     |  1 +
> > > >  2 files changed, 33 insertions(+)
> > > >
> > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > >  }
> > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > >
> > > > +/**
> > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > + *
> > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > + * vblank and frame duration  
> > >
> > > Hi,
> > >
> > > for VRR this targets the highest frame rate possible for the current
> > > VRR mode, right?
> > >  
> > 
> > It is based on vblank->framedur_ns which is in turn based on
> > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> 
> I don't know. :-)

At least for i915 this will give you the maximum frame
duration.

Also this does not calculate the start of vblank, it
calculates the start of active video.
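
For context, framedur_ns is derived from the mode timings roughly as
below; this is a sketch mirroring what drm_calc_timestamping_constants()
computes, with crtc_clock in kHz as in struct drm_display_mode:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: one full scanout frame is htotal * vtotal pixel clocks,
 * and the pixel clock (crtc_clock) is in kHz, so the frame duration
 * in nanoseconds is pixels * 1e6 / clock_khz. Illustrative helper,
 * not the actual kernel code.
 */
static uint64_t frame_duration_ns(uint32_t htotal, uint32_t vtotal,
				  uint32_t crtc_clock_khz)
{
	if (!crtc_clock_khz)
		return 0;
	return (uint64_t)htotal * vtotal * 1000000ull / crtc_clock_khz;
}
```

For a 1080p60 CEA mode (htotal 2200, vtotal 1125, clock 148500 kHz)
this yields ~16.67 ms. With VRR the front porch stretches, so the
realized frame duration varies; as noted above, for i915 the stored
value corresponds to the maximum frame duration.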

> 
> You need a number of clock cycles in addition to the clock frequency,
> and that could still be minimum, maximum, the last realized one, ...
> 
> VRR works by adjusting the front porch length IIRC.
> 
> 
> Thanks,
> pq
> 
> > BR,
> > -R
> > 
> > 
> > >
> > > Thanks,
> > > pq
> > >  
> > > > + */
> > > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
> > > > +{
> > > > +     unsigned int pipe = drm_crtc_index(crtc);
> > > > +     struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
> > > > +     u64 count;
> > > > +
> > > > +     if (!vblank->framedur_ns)
> > > > +             return -EINVAL;
> > > > +
> > > > +     count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
> > > > +
> > > > +     /*
> > > > +      * If we don't get a valid count, then we probably also don't
> > > > +      * have a valid time:
> > > > +      */
> > > > +     if (!count)
> > > > +             return -EINVAL;
> > > > +
> > > > +     *vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));
> > > > +
> > > > +     return 0;
> > > > +}
> > > > +EXPORT_SYMBOL(drm_crtc_next_vblank_time);
> > > > +
> > > >  static void send_vblank_event(struct drm_device *dev,
> > > >               struct drm_pending_vblank_event *e,
> > > >               u64 seq, ktime_t now)
> > > > diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
> > > > index 733a3e2d1d10..a63bc2c92f3c 100644
> > > > --- a/include/drm/drm_vblank.h
> > > > +++ b/include/drm/drm_vblank.h
> > > > @@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
> > > >  u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
> > > >  u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > >                                  ktime_t *vblanktime);
> > > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
> > > >  void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
> > > >                              struct drm_pending_vblank_event *e);
> > > >  void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,  
> > >  
> 



-- 
Ville Syrjälä
Intel


* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 13:01         ` Ville Syrjälä
@ 2023-02-21 13:11           ` Pekka Paalanen
  2023-02-21 13:42             ` Ville Syrjälä
  2023-02-21 17:50           ` Rob Clark
  2023-02-21 19:54           ` Rob Clark
  2 siblings, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-21 13:11 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Rob Clark, Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Rodrigo Vivi, Alex Deucher, freedreno


On Tue, 21 Feb 2023 15:01:35 +0200
Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:

> On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > On Mon, 20 Feb 2023 07:55:41 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >   
> > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> > > >
> > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > Rob Clark <robdclark@gmail.com> wrote:
> > > >    
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > atomic update is waiting on.
> > > > >
> > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > ---
> > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > >  include/drm/drm_vblank.h     |  1 +
> > > > >  2 files changed, 33 insertions(+)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > >  }
> > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > >
> > > > > +/**
> > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > + *
> > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > + * vblank and frame duration    
> > > >
> > > > Hi,
> > > >
> > > > for VRR this targets the highest frame rate possible for the current
> > > > VRR mode, right?
> > > >    
> > > 
> > > It is based on vblank->framedur_ns which is in turn based on
> > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?  
> > 
> > I don't know. :-)  
> 
> At least for i915 this will give you the maximum frame
> duration.

Really maximum duration? So minimum VRR frequency?

> Also this does not calculate the start of vblank, it
> calculates the start of active video.

Oh indeed, so it's too late. What one would actually need for the
deadline is the driver's deadline to present for the immediately next
start of active video.

And with VRR that should probably aim for the maximum frame frequency,
not minimum?



Thanks,
pq

> 
> > 
> > You need a number of clock cycles in addition to the clock frequency,
> > and that could still be minimum, maximum, the last realized one, ...
> > 
> > VRR works by adjusting the front porch length IIRC.
> > 
> > 
> > Thanks,
> > pq
> >   
> > > BR,
> > > -R
> > > 
> > >   
> > > >
> > > > Thanks,
> > > > pq
> > > >    
> > > > > + */
> > > > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
> > > > > +{
> > > > > +     unsigned int pipe = drm_crtc_index(crtc);
> > > > > +     struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
> > > > > +     u64 count;
> > > > > +
> > > > > +     if (!vblank->framedur_ns)
> > > > > +             return -EINVAL;
> > > > > +
> > > > > +     count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
> > > > > +
> > > > > +     /*
> > > > > +      * If we don't get a valid count, then we probably also don't
> > > > > +      * have a valid time:
> > > > > +      */
> > > > > +     if (!count)
> > > > > +             return -EINVAL;
> > > > > +
> > > > > +     *vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));
> > > > > +
> > > > > +     return 0;
> > > > > +}
> > > > > +EXPORT_SYMBOL(drm_crtc_next_vblank_time);
> > > > > +
> > > > >  static void send_vblank_event(struct drm_device *dev,
> > > > >               struct drm_pending_vblank_event *e,
> > > > >               u64 seq, ktime_t now)
> > > > > diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
> > > > > index 733a3e2d1d10..a63bc2c92f3c 100644
> > > > > --- a/include/drm/drm_vblank.h
> > > > > +++ b/include/drm/drm_vblank.h
> > > > > @@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
> > > > >  u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
> > > > >  u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > >                                  ktime_t *vblanktime);
> > > > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
> > > > >  void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
> > > > >                              struct drm_pending_vblank_event *e);
> > > > >  void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,    
> > > >    
> >   
> 
> 
> 



^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 13:11           ` Pekka Paalanen
@ 2023-02-21 13:42             ` Ville Syrjälä
  0 siblings, 0 replies; 93+ messages in thread
From: Ville Syrjälä @ 2023-02-21 13:42 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Rob Clark, Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 03:11:33PM +0200, Pekka Paalanen wrote:
> On Tue, 21 Feb 2023 15:01:35 +0200
> Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> 
> > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > Rob Clark <robdclark@gmail.com> wrote:
> > >   
> > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> > > > >
> > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > >    
> > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > >
> > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > atomic update is waiting on.
> > > > > >
> > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > ---
> > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > >  2 files changed, 33 insertions(+)
> > > > > >
> > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > >  }
> > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > >
> > > > > > +/**
> > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > + *
> > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > + * vblank and frame duration    
> > > > >
> > > > > Hi,
> > > > >
> > > > > for VRR this targets the highest frame rate possible for the current
> > > > > VRR mode, right?
> > > > >    
> > > > 
> > > > It is based on vblank->framedur_ns which is in turn based on
> > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?  
> > > 
> > > I don't know. :-)  
> > 
> > At least for i915 this will give you the maximum frame
> > duration.
> 
> Really maximum duration? So minimum VRR frequency?

Yes. Doing otherwise would complicate the actual
timestamp calculation even further.

The actual timestamps i915 generates will however match
the start of active video, regardless of how long vblank
was extended.

The only exception might be if you query the timestamp
during vblank but VRR exit has not yet been triggered,
i.e. no commit has been made during the frame. In that
case the timestamp will correspond to the max frame
duration, which may or may not end up being the case.
It depends entirely on whether a commit still happens
during the vblank to trigger an early vblank exit.

> 
> > Also this does not calculate the start of vblank, it
> > calculates the start of active video.
> 
> Oh indeed, so it's too late. What one would actually need for the
> deadline is the driver's deadline to present for the immediately next
> start of active video.
> 
> And with VRR that should probably aim for the maximum frame frequency,
> not minimum?

Yeah, max frame rate seems like the easiest thing to use there.

The other option might be some average value based on recent
history, but figuring that out would seem like a lot more work.
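
[Editor's note: to make the helper's arithmetic concrete, here is a userspace model of the computation; `next_vblank_time()` and its error convention are stand-ins for the kernel helper, and under VRR `framedur_ns` corresponds to the maximum frame duration discussed above.]

```c
#include <stdint.h>

/*
 * Userspace model of drm_crtc_next_vblank_time(): the expected next
 * vblank timestamp is simply the previous vblank timestamp plus the
 * frame duration.  Under VRR, framedur_ns here ends up being the
 * maximum frame duration (i.e. the minimum refresh rate).
 */
static int next_vblank_time(uint64_t last_vblank_ns, uint64_t framedur_ns,
			    uint64_t *out_ns)
{
	if (!framedur_ns)
		return -1;	/* the kernel helper returns -EINVAL here */
	*out_ns = last_vblank_ns + framedur_ns;
	return 0;
}
```

For a 60 Hz mode framedur_ns is ~16666667, so each call simply steps the last timestamp one frame forward.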

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-21  8:37       ` Pekka Paalanen
@ 2023-02-21 16:01         ` Sebastian Wick
  2023-02-21 17:55           ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Sebastian Wick @ 2023-02-21 16:01 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Rob Clark, Rob Clark, Gustavo Padovan, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK

On Tue, Feb 21, 2023 at 9:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Mon, 20 Feb 2023 08:14:47 -0800
> Rob Clark <robdclark@gmail.com> wrote:
>
> > On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >
> > > On Sat, 18 Feb 2023 13:15:49 -0800
> > > Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > > From: Rob Clark <robdclark@chromium.org>
> > > >
> > > > Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> > > > wait (as opposed to a "housekeeping" wait to know when to cleanup after
> > > > some work has completed).  Usermode components of GPU driver stacks
> > > > often poll() on fence fd's to know when it is safe to do things like
> > > > free or reuse a buffer, but they can also poll() on a fence fd when
> > > > waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> > > > lets the kernel differentiate these two cases.
> > > >
> > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > >
> > > Hi,
> > >
> > > where would the UAPI documentation of this go?
> > > It seems to be missing.
> >
> > Good question, I am not sure.  The poll() man page has a description,
> > but my usage doesn't fit that _exactly_ (but OTOH the description is a
> > bit vague).
> >
> > > If a Wayland compositor is polling application fences to know which
> > > client buffer to use in its rendering, should the compositor poll with
> > > PRI or not? If a compositor polls with PRI, then all fences from all
> > > applications would always be PRI. Would that be harmful somehow or
> > > would it be beneficial?
> >
> > I think a compositor would rather use the deadline ioctl and then poll
> > without PRI.  Otherwise you are giving an urgency signal to the fence
> > signaller which might not necessarily be needed.
> >
> > The places where I expect PRI to be useful is more in mesa (things
> > like glFinish(), readpix, and other similar sorts of blocking APIs)
>
> Sounds good. Docs... ;-)
>
> Hmm, so a compositor should set the deadline when it processes the
> wl_surface.commit, and not when it actually starts repainting, to give
> time for the driver to react and the GPU to do some more work. The
> deadline would be the time when the compositor starts its repaint, so
> it knows if the buffer is ready or not.

Technically we don't know when the commit is supposed to be shown.
Just passing the next possible deadline, however, is probably a good
enough guess for this feature to be useful.

One thing that neither API allows us to do is tell the kernel in
advance when we're going to submit work and what its deadline is;
unfortunately, that work is the most timing sensitive.
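
[Editor's note: a sketch of that "good enough guess" as an illustrative compositor-side policy; the function name and the repaint-budget parameter are made up for illustration, not proposed UAPI.]

```c
#include <stdint.h>

/*
 * Illustrative compositor-side policy, not proposed UAPI: estimate
 * the deadline for a client fence as the point where the compositor
 * will begin its next repaint (predicted vblank minus the time the
 * compositor budgets for its own rendering), so the buffer is ready
 * when the compositor must decide whether to use it.
 */
static uint64_t estimate_fence_deadline_ns(uint64_t next_vblank_ns,
					   uint64_t repaint_budget_ns)
{
	if (repaint_budget_ns >= next_vblank_ns)
		return 0;	/* already behind: ask for ASAP */
	return next_vblank_ns - repaint_budget_ns;
}
```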

>
>
> Thanks,
> pq
>
>
> >
> > BR,
> > -R
> >
> > >
> > >
> > > Thanks,
> > > pq
> > >
> > > > ---
> > > >  drivers/dma-buf/sync_file.c | 8 ++++++++
> > > >  1 file changed, 8 insertions(+)
> > > >
> > > > diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> > > > index fb6ca1032885..c30b2085ee0a 100644
> > > > --- a/drivers/dma-buf/sync_file.c
> > > > +++ b/drivers/dma-buf/sync_file.c
> > > > @@ -192,6 +192,14 @@ static __poll_t sync_file_poll(struct file *file, poll_table *wait)
> > > >  {
> > > >       struct sync_file *sync_file = file->private_data;
> > > >
> > > > +     /*
> > > > +      * The POLLPRI/EPOLLPRI flag can be used to signal that
> > > > +      * userspace wants the fence to signal ASAP, express this
> > > > +      * as an immediate deadline.
> > > > +      */
> > > > +     if (poll_requested_events(wait) & EPOLLPRI)
> > > > +             dma_fence_set_deadline(sync_file->fence, ktime_get());
> > > > +
> > > >       poll_wait(file, &sync_file->wq, wait);
> > > >
> > > >       if (list_empty(&sync_file->cb.node) &&
> > >
>


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-20 16:14     ` Rob Clark
  2023-02-21  8:37       ` Pekka Paalanen
@ 2023-02-21 16:48       ` Luben Tuikov
  2023-02-21 17:53         ` Rob Clark
  1 sibling, 1 reply; 93+ messages in thread
From: Luben Tuikov @ 2023-02-21 16:48 UTC (permalink / raw)
  To: Rob Clark, Pekka Paalanen
  Cc: Rob Clark, Gustavo Padovan, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, dri-devel, Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK

On 2023-02-20 11:14, Rob Clark wrote:
> On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>>
>> On Sat, 18 Feb 2023 13:15:49 -0800
>> Rob Clark <robdclark@gmail.com> wrote:
>>
>>> From: Rob Clark <robdclark@chromium.org>
>>>
>>> Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
>>> wait (as opposed to a "housekeeping" wait to know when to cleanup after
>>> some work has completed).  Usermode components of GPU driver stacks
>>> often poll() on fence fd's to know when it is safe to do things like
>>> free or reuse a buffer, but they can also poll() on a fence fd when
>>> waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
>>> lets the kernel differentiate these two cases.
>>>
>>> Signed-off-by: Rob Clark <robdclark@chromium.org>
>>
>> Hi,
>>
>> where would the UAPI documentation of this go?
>> It seems to be missing.
> 
> Good question, I am not sure.  The poll() man page has a description,
> but my usage doesn't fit that _exactly_ (but OTOH the description is a
> bit vague).
> 
>> If a Wayland compositor is polling application fences to know which
>> client buffer to use in its rendering, should the compositor poll with
>> PRI or not? If a compositor polls with PRI, then all fences from all
>> applications would always be PRI. Would that be harmful somehow or
>> would it be beneficial?
> 
> I think a compositor would rather use the deadline ioctl and then poll
> without PRI.  Otherwise you are giving an urgency signal to the fence
> signaller which might not necessarily be needed.
> 
> The places where I expect PRI to be useful is more in mesa (things
> like glFinish(), readpix, and other similar sorts of blocking APIs)
Hi,

Hmm, but then user-space could do the opposite, namely, submit work as usual--never 
using the SET_DEADLINE ioctl, and then at the end, poll using (E)POLLPRI. That seems
like a possible usage pattern, unintended--maybe, but possible. Do we want to discourage
this? Wouldn't SET_DEADLINE be enough? I mean, one can call SET_DEADLINE with the current
time, and then wouldn't that be equivalent to (E)POLLPRI?
-- 
Regards,
Luben


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 13:01         ` Ville Syrjälä
  2023-02-21 13:11           ` Pekka Paalanen
@ 2023-02-21 17:50           ` Rob Clark
  2023-02-22  9:57             ` Pekka Paalanen
  2023-02-21 19:54           ` Rob Clark
  2 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-21 17:50 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Pekka Paalanen, Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
>
> On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > On Mon, 20 Feb 2023 07:55:41 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >
> > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > >
> > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > atomic update is waiting on.
> > > > >
> > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > ---
> > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > >  include/drm/drm_vblank.h     |  1 +
> > > > >  2 files changed, 33 insertions(+)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > >  }
> > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > >
> > > > > +/**
> > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > + *
> > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > + * vblank and frame duration
> > > >
> > > > Hi,
> > > >
> > > > for VRR this targets the highest frame rate possible for the current
> > > > VRR mode, right?
> > > >
> > >
> > > It is based on vblank->framedur_ns which is in turn based on
> > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> >
> > I don't know. :-)
>
> At least for i915 this will give you the maximum frame
> duration.

I suppose one could argue that maximum frame duration is the actual
deadline.  Anything less is just moar fps, but not going to involve
stalling until vblank N+1, AFAIU

> Also this does not calculate the start of vblank, it
> calculates the start of active video.

Probably something like the end of the previous frame's video.  It
might not be _exactly_ correct (because some buffering is involved),
but OTOH on the GPU side, I expect the driver to set a timer for a
few ms or so before the deadline.  So there is some wiggle room.

BR,
-R

> >
> > You need a number of clock cycles in addition to the clock frequency,
> > and that could still be minimum, maximum, the last realized one, ...
> >
> > VRR works by adjusting the front porch length IIRC.
> >
> >
> > Thanks,
> > pq
> >
> > > BR,
> > > -R
> > >
> > >
> > > >
> > > > Thanks,
> > > > pq
> > > >
> > > > > + */
> > > > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
> > > > > +{
> > > > > +     unsigned int pipe = drm_crtc_index(crtc);
> > > > > +     struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
> > > > > +     u64 count;
> > > > > +
> > > > > +     if (!vblank->framedur_ns)
> > > > > +             return -EINVAL;
> > > > > +
> > > > > +     count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
> > > > > +
> > > > > +     /*
> > > > > +      * If we don't get a valid count, then we probably also don't
> > > > > +      * have a valid time:
> > > > > +      */
> > > > > +     if (!count)
> > > > > +             return -EINVAL;
> > > > > +
> > > > > +     *vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));
> > > > > +
> > > > > +     return 0;
> > > > > +}
> > > > > +EXPORT_SYMBOL(drm_crtc_next_vblank_time);
> > > > > +
> > > > >  static void send_vblank_event(struct drm_device *dev,
> > > > >               struct drm_pending_vblank_event *e,
> > > > >               u64 seq, ktime_t now)
> > > > > diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
> > > > > index 733a3e2d1d10..a63bc2c92f3c 100644
> > > > > --- a/include/drm/drm_vblank.h
> > > > > +++ b/include/drm/drm_vblank.h
> > > > > @@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
> > > > >  u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
> > > > >  u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > >                                  ktime_t *vblanktime);
> > > > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
> > > > >  void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
> > > > >                              struct drm_pending_vblank_event *e);
> > > > >  void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,
> > > >
> >
>
>
>
> --
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-21 16:48       ` Luben Tuikov
@ 2023-02-21 17:53         ` Rob Clark
  2023-02-22  9:49           ` Pekka Paalanen
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-21 17:53 UTC (permalink / raw)
  To: Luben Tuikov
  Cc: Pekka Paalanen, Rob Clark, Gustavo Padovan, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK

On Tue, Feb 21, 2023 at 8:48 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
>
> On 2023-02-20 11:14, Rob Clark wrote:
> > On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >>
> >> On Sat, 18 Feb 2023 13:15:49 -0800
> >> Rob Clark <robdclark@gmail.com> wrote:
> >>
> >>> From: Rob Clark <robdclark@chromium.org>
> >>>
> >>> Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> >>> wait (as opposed to a "housekeeping" wait to know when to cleanup after
> >>> some work has completed).  Usermode components of GPU driver stacks
> >>> often poll() on fence fd's to know when it is safe to do things like
> >>> free or reuse a buffer, but they can also poll() on a fence fd when
> >>> waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> >>> lets the kernel differentiate these two cases.
> >>>
> >>> Signed-off-by: Rob Clark <robdclark@chromium.org>
> >>
> >> Hi,
> >>
> >> where would the UAPI documentation of this go?
> >> It seems to be missing.
> >
> > Good question, I am not sure.  The poll() man page has a description,
> > but my usage doesn't fit that _exactly_ (but OTOH the description is a
> > bit vague).
> >
> >> If a Wayland compositor is polling application fences to know which
> >> client buffer to use in its rendering, should the compositor poll with
> >> PRI or not? If a compositor polls with PRI, then all fences from all
> >> applications would always be PRI. Would that be harmful somehow or
> >> would it be beneficial?
> >
> > I think a compositor would rather use the deadline ioctl and then poll
> > without PRI.  Otherwise you are giving an urgency signal to the fence
> > signaller which might not necessarily be needed.
> >
> > The places where I expect PRI to be useful is more in mesa (things
> > like glFinish(), readpix, and other similar sorts of blocking APIs)
> Hi,
>
> Hmm, but then user-space could do the opposite, namely, submit work as usual--never
> using the SET_DEADLINE ioctl, and then at the end, poll using (E)POLLPRI. That seems
> like a possible usage pattern, unintended--maybe, but possible. Do we want to discourage
> this? Wouldn't SET_DEADLINE be enough? I mean, one can call SET_DEADLINE with the current
> time, and then wouldn't that be equivalent to (E)POLLPRI?

Yeah, (E)POLLPRI isn't strictly needed if we have SET_DEADLINE.  It is
slightly more convenient if you want an immediate deadline (single
syscall instead of two), but not strictly needed.  OTOH it piggy-backs
on existing UABI.
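
[Editor's note: a minimal sketch of the piggy-back in practice, assuming a sync_file fd obtained elsewhere; poll() itself is standard UABI, only the immediate-deadline side effect of POLLPRI comes from this series.]

```c
#include <errno.h>
#include <poll.h>

/*
 * Urgent wait on a sync_file fd: POLLIN is the normal "fence has
 * signaled" wait; adding POLLPRI tells the kernel (with this series
 * applied) to also set an immediate deadline on the fence.
 */
static int wait_fence_urgent(int fence_fd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd = fence_fd,
		.events = POLLIN | POLLPRI,
	};
	int ret;

	do {
		ret = poll(&pfd, 1, timeout_ms);
	} while (ret < 0 && errno == EINTR);

	return ret;	/* >0: signaled, 0: timeout, <0: error */
}
```

Housekeeping waits would keep polling with plain POLLIN, so no urgency hint is sent to the fence signaller.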

BR,
-R

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-21 16:01         ` Sebastian Wick
@ 2023-02-21 17:55           ` Rob Clark
  0 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-21 17:55 UTC (permalink / raw)
  To: Sebastian Wick
  Cc: Pekka Paalanen, Rob Clark, Gustavo Padovan, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK

On Tue, Feb 21, 2023 at 8:01 AM Sebastian Wick
<sebastian.wick@redhat.com> wrote:
>
> On Tue, Feb 21, 2023 at 9:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >
> > On Mon, 20 Feb 2023 08:14:47 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >
> > > On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > >
> > > > On Sat, 18 Feb 2023 13:15:49 -0800
> > > > Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> > > > > wait (as opposed to a "housekeeping" wait to know when to cleanup after
> > > > > some work has completed).  Usermode components of GPU driver stacks
> > > > > often poll() on fence fd's to know when it is safe to do things like
> > > > > free or reuse a buffer, but they can also poll() on a fence fd when
> > > > > waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> > > > > lets the kernel differentiate these two cases.
> > > > >
> > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > >
> > > > Hi,
> > > >
> > > > where would the UAPI documentation of this go?
> > > > It seems to be missing.
> > >
> > > Good question, I am not sure.  The poll() man page has a description,
> > > but my usage doesn't fit that _exactly_ (but OTOH the description is a
> > > bit vague).
> > >
> > > > If a Wayland compositor is polling application fences to know which
> > > > client buffer to use in its rendering, should the compositor poll with
> > > > PRI or not? If a compositor polls with PRI, then all fences from all
> > > > applications would always be PRI. Would that be harmful somehow or
> > > > would it be beneficial?
> > >
> > > I think a compositor would rather use the deadline ioctl and then poll
> > > without PRI.  Otherwise you are giving an urgency signal to the fence
> > > signaller which might not necessarily be needed.
> > >
> > > The places where I expect PRI to be useful is more in mesa (things
> > > like glFinish(), readpix, and other similar sorts of blocking APIs)
> >
> > Sounds good. Docs... ;-)
> >
> > Hmm, so a compositor should set the deadline when it processes the
> > wl_surface.commit, and not when it actually starts repainting, to give
> > time for the driver to react and the GPU to do some more work. The
> > deadline would be the time when the compositor starts its repaint, so
> > it knows if the buffer is ready or not.
>
> Technically we don't know when the commit is supposed to be shown.
> Just passing the next possible deadline, however, is probably a good
> enough guess for this feature to be useful.
>
> One thing that neither API allows us to do is tell the kernel in
> advance when we're going to submit work and what its deadline is;
> unfortunately, that work is the most timing sensitive.

Presumably you are talking about the final compositing step?
Elsewhere in this series that atomic wait-for-fences step sets the
deadline hint.

BR,
-R

> >
> >
> > Thanks,
> > pq
> >
> >
> > >
> > > BR,
> > > -R
> > >
> > > >
> > > >
> > > > Thanks,
> > > > pq
> > > >
> > > > > ---
> > > > >  drivers/dma-buf/sync_file.c | 8 ++++++++
> > > > >  1 file changed, 8 insertions(+)
> > > > >
> > > > > diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
> > > > > index fb6ca1032885..c30b2085ee0a 100644
> > > > > --- a/drivers/dma-buf/sync_file.c
> > > > > +++ b/drivers/dma-buf/sync_file.c
> > > > > @@ -192,6 +192,14 @@ static __poll_t sync_file_poll(struct file *file, poll_table *wait)
> > > > >  {
> > > > >       struct sync_file *sync_file = file->private_data;
> > > > >
> > > > > +     /*
> > > > > +      * The POLLPRI/EPOLLPRI flag can be used to signal that
> > > > > +      * userspace wants the fence to signal ASAP, express this
> > > > > +      * as an immediate deadline.
> > > > > +      */
> > > > > +     if (poll_requested_events(wait) & EPOLLPRI)
> > > > > +             dma_fence_set_deadline(sync_file->fence, ktime_get());
> > > > > +
> > > > >       poll_wait(file, &sync_file->wq, wait);
> > > > >
> > > > >       if (list_empty(&sync_file->cb.node) &&
> > > >
> >
>

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 08/14] drm/scheduler: Add fence deadline support
  2023-02-18 21:15 ` [PATCH v4 08/14] drm/scheduler: " Rob Clark
@ 2023-02-21 19:40   ` Luben Tuikov
  0 siblings, 0 replies; 93+ messages in thread
From: Luben Tuikov @ 2023-02-21 19:40 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: freedreno, Daniel Vetter, Christian König,
	Michel Dänzer, Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher,
	Pekka Paalanen, Simon Ser, Rob Clark, David Airlie, Sumit Semwal,
	Christian König, open list,
	open list:DMA BUFFER SHARING FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK

On 2023-02-18 16:15, Rob Clark wrote:
> As the finished fence is the one that is exposed to userspace, and
> therefore the one that other operations, like atomic update, would
> block on, we need to propagate the deadline from from the finished
> fence to the actual hw fence.
> 
> v2: Split into drm_sched_fence_set_parent() (ckoenig)
> v3: Ensure a thread calling drm_sched_fence_set_deadline_finished() sees
>     fence->parent set before drm_sched_fence_set_parent() does this
>     test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT).
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>

Looks good.

This patch is Acked-by: Luben Tuikov <luben.tuikov@amd.com>
-- 
Regards,
Luben
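
[Editor's note: the v3 ordering note in the commit message can be modeled in plain C11 atomics. The names are simplified stand-ins for the drm_sched code, not the kernel API, and the sketch uses sequentially consistent defaults where the kernel relies on smp_store_release()/smp_load_acquire() plus the fence spinlock. The point is that at least one of the two paths observes the other's write, so the deadline always reaches the parent fence; if both do, the duplicate is harmless because a later/equal deadline is filtered.]

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the drm_sched_fence fields, not kernel API. */
static _Atomic(void *) parent;		/* fence->parent */
static atomic_bool has_deadline;	/* DMA_FENCE_FLAG_HAS_DEADLINE_BIT */
static atomic_int propagated;		/* times the deadline reached the parent */

/* Models drm_sched_fence_set_parent(): publish parent, then re-check flag. */
static void set_parent(void *p)
{
	atomic_store(&parent, p);
	if (atomic_load(&has_deadline))
		atomic_fetch_add(&propagated, 1); /* dma_fence_set_deadline(p, ...) */
}

/* Models drm_sched_fence_set_deadline_finished(): set flag, then check parent. */
static void set_deadline(void)
{
	atomic_store(&has_deadline, true);
	void *p = atomic_load(&parent);
	if (p)
		atomic_fetch_add(&propagated, 1); /* dma_fence_set_deadline(p, ...) */
}
```

Run sequentially, either call order propagates the deadline exactly once; the race case can at worst propagate it twice, never zero times.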

> ---
>  drivers/gpu/drm/scheduler/sched_fence.c | 46 +++++++++++++++++++++++++
>  drivers/gpu/drm/scheduler/sched_main.c  |  2 +-
>  include/drm/gpu_scheduler.h             |  8 +++++
>  3 files changed, 55 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
> index 7fd869520ef2..43e2d4f5fe3b 100644
> --- a/drivers/gpu/drm/scheduler/sched_fence.c
> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
> @@ -123,6 +123,37 @@ static void drm_sched_fence_release_finished(struct dma_fence *f)
>  	dma_fence_put(&fence->scheduled);
>  }
>  
> +static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
> +						  ktime_t deadline)
> +{
> +	struct drm_sched_fence *fence = to_drm_sched_fence(f);
> +	struct dma_fence *parent;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&fence->lock, flags);
> +
> +	/* If we already have an earlier deadline, keep it: */
> +	if (test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
> +	    ktime_before(fence->deadline, deadline)) {
> +		spin_unlock_irqrestore(&fence->lock, flags);
> +		return;
> +	}
> +
> +	fence->deadline = deadline;
> +	set_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
> +
> +	spin_unlock_irqrestore(&fence->lock, flags);
> +
> +	/*
> +	 * smp_load_acquire() to ensure that if we are racing another
> +	 * thread calling drm_sched_fence_set_parent(), we see the
> +	 * parent set before it calls test_bit(HAS_DEADLINE_BIT)
> +	 */
> +	parent = smp_load_acquire(&fence->parent);
> +	if (parent)
> +		dma_fence_set_deadline(parent, deadline);
> +}
> +
>  static const struct dma_fence_ops drm_sched_fence_ops_scheduled = {
>  	.get_driver_name = drm_sched_fence_get_driver_name,
>  	.get_timeline_name = drm_sched_fence_get_timeline_name,
> @@ -133,6 +164,7 @@ static const struct dma_fence_ops drm_sched_fence_ops_finished = {
>  	.get_driver_name = drm_sched_fence_get_driver_name,
>  	.get_timeline_name = drm_sched_fence_get_timeline_name,
>  	.release = drm_sched_fence_release_finished,
> +	.set_deadline = drm_sched_fence_set_deadline_finished,
>  };
>  
>  struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
> @@ -147,6 +179,20 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
>  }
>  EXPORT_SYMBOL(to_drm_sched_fence);
>  
> +void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
> +				struct dma_fence *fence)
> +{
> +	/*
> +	 * smp_store_release() to ensure another thread racing us
> +	 * in drm_sched_fence_set_deadline_finished() sees the
> +	 * fence's parent set before test_bit()
> +	 */
> +	smp_store_release(&s_fence->parent, dma_fence_get(fence));
> +	if (test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
> +		     &s_fence->finished.flags))
> +		dma_fence_set_deadline(fence, s_fence->deadline);
> +}
> +
>  struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
>  					      void *owner)
>  {
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 4e6ad6e122bc..007f98c48f8d 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1019,7 +1019,7 @@ static int drm_sched_main(void *param)
>  		drm_sched_fence_scheduled(s_fence);
>  
>  		if (!IS_ERR_OR_NULL(fence)) {
> -			s_fence->parent = dma_fence_get(fence);
> +			drm_sched_fence_set_parent(s_fence, fence);
>  			/* Drop for original kref_init of the fence */
>  			dma_fence_put(fence);
>  
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 9db9e5e504ee..8b31a954a44d 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -280,6 +280,12 @@ struct drm_sched_fence {
>           */
>  	struct dma_fence		finished;
>  
> +	/**
> +	 * @deadline: deadline set on &drm_sched_fence.finished which
> +	 * potentially needs to be propagated to &drm_sched_fence.parent
> +	 */
> +	ktime_t				deadline;
> +
>          /**
>           * @parent: the fence returned by &drm_sched_backend_ops.run_job
>           * when scheduling the job on hardware. We signal the
> @@ -568,6 +574,8 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
>  				   enum drm_sched_priority priority);
>  bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
>  
> +void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
> +				struct dma_fence *fence);
>  struct drm_sched_fence *drm_sched_fence_alloc(
>  	struct drm_sched_entity *s_entity, void *owner);
>  void drm_sched_fence_init(struct drm_sched_fence *fence,


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 13:01         ` Ville Syrjälä
  2023-02-21 13:11           ` Pekka Paalanen
  2023-02-21 17:50           ` Rob Clark
@ 2023-02-21 19:54           ` Rob Clark
  2023-02-21 21:39             ` Ville Syrjälä
  2 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-21 19:54 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Pekka Paalanen, Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
>
> On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > On Mon, 20 Feb 2023 07:55:41 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >
> > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > >
> > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > atomic update is waiting on.
> > > > >
> > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > ---
> > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > >  include/drm/drm_vblank.h     |  1 +
> > > > >  2 files changed, 33 insertions(+)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > >  }
> > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > >
> > > > > +/**
> > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > + *
> > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > + * vblank and frame duration
> > > >
> > > > Hi,
> > > >
> > > > for VRR this targets the highest frame rate possible for the current
> > > > VRR mode, right?
> > > >
> > >
> > > It is based on vblank->framedur_ns which is in turn based on
> > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> >
> > I don't know. :-)
>
> At least for i915 this will give you the maximum frame
> duration.
>
> Also this does not calculate the start of vblank, it
> calculates the start of active video.

AFAIU, vsync_end/vsync_start are in units of line, so I could do something like:

  vsync_lines = vblank->hwmode.vsync_end - vblank->hwmode.vsync_start;
  vsyncdur = ns_to_ktime(vblank->linedur_ns * vsync_lines);
  framedur = ns_to_ktime(vblank->framedur_ns);
  *vblanktime = ktime_add(*vblanktime, ktime_sub(framedur, vsyncdur));

?

BR,
-R

>
> >
> > You need a number of clock cycles in addition to the clock frequency,
> > and that could still be minimum, maximum, the last realized one, ...
> >
> > VRR works by adjusting the front porch length IIRC.
> >
> >
> > Thanks,
> > pq
> >
> > > BR,
> > > -R
> > >
> > >
> > > >
> > > > Thanks,
> > > > pq
> > > >
> > > > > + */
> > > > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
> > > > > +{
> > > > > +     unsigned int pipe = drm_crtc_index(crtc);
> > > > > +     struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
> > > > > +     u64 count;
> > > > > +
> > > > > +     if (!vblank->framedur_ns)
> > > > > +             return -EINVAL;
> > > > > +
> > > > > +     count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
> > > > > +
> > > > > +     /*
> > > > > +      * If we don't get a valid count, then we probably also don't
> > > > > +      * have a valid time:
> > > > > +      */
> > > > > +     if (!count)
> > > > > +             return -EINVAL;
> > > > > +
> > > > > +     *vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));
> > > > > +
> > > > > +     return 0;
> > > > > +}
> > > > > +EXPORT_SYMBOL(drm_crtc_next_vblank_time);
> > > > > +
> > > > >  static void send_vblank_event(struct drm_device *dev,
> > > > >               struct drm_pending_vblank_event *e,
> > > > >               u64 seq, ktime_t now)
> > > > > diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
> > > > > index 733a3e2d1d10..a63bc2c92f3c 100644
> > > > > --- a/include/drm/drm_vblank.h
> > > > > +++ b/include/drm/drm_vblank.h
> > > > > @@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
> > > > >  u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
> > > > >  u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > >                                  ktime_t *vblanktime);
> > > > > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
> > > > >  void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
> > > > >                              struct drm_pending_vblank_event *e);
> > > > >  void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,
> > > >
> >
>
>
>
> --
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 19:54           ` Rob Clark
@ 2023-02-21 21:39             ` Ville Syrjälä
  2023-02-21 21:48               ` Ville Syrjälä
  0 siblings, 1 reply; 93+ messages in thread
From: Ville Syrjälä @ 2023-02-21 21:39 UTC (permalink / raw)
  To: Rob Clark
  Cc: Pekka Paalanen, Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 11:54:55AM -0800, Rob Clark wrote:
> On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
> <ville.syrjala@linux.intel.com> wrote:
> >
> > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > >
> > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > >
> > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > atomic update is waiting on.
> > > > > >
> > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > ---
> > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > >  2 files changed, 33 insertions(+)
> > > > > >
> > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > >  }
> > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > >
> > > > > > +/**
> > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > + *
> > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > + * vblank and frame duration
> > > > >
> > > > > Hi,
> > > > >
> > > > > for VRR this targets the highest frame rate possible for the current
> > > > > VRR mode, right?
> > > > >
> > > >
> > > > It is based on vblank->framedur_ns which is in turn based on
> > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> > >
> > > I don't know. :-)
> >
> > At least for i915 this will give you the maximum frame
> > duration.
> >
> > Also this does not calculate the start of vblank, it
> > calculates the start of active video.
> 
> AFAIU, vsync_end/vsync_start are in units of line, so I could do something like:
> 
>   vsync_lines = vblank->hwmode.vsync_end - vblank->hwmode.vsync_start;
>   vsyncdur = ns_to_ktime(vblank->linedur_ns * vsync_lines);
>   framedur = ns_to_ktime(vblank->framedur_ns);
>   *vblanktime = ktime_add(*vblanktime, ktime_sub(framedur, vsyncdur));

Something like this should work:
 vblank_start = framedur_ns * crtc_vblank_start / crtc_vtotal
 deadline = vblanktime + vblank_start

That would be the expected time when the next start of vblank
happens.

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 21:39             ` Ville Syrjälä
@ 2023-02-21 21:48               ` Ville Syrjälä
  2023-02-21 22:28                 ` [Freedreno] " Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Ville Syrjälä @ 2023-02-21 21:48 UTC (permalink / raw)
  To: Rob Clark
  Cc: Tvrtko Ursulin, Christian König, Michel Dänzer,
	dri-devel, open list, Pekka Paalanen, Thomas Zimmermann,
	Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 11:39:40PM +0200, Ville Syrjälä wrote:
> On Tue, Feb 21, 2023 at 11:54:55AM -0800, Rob Clark wrote:
> > On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
> > <ville.syrjala@linux.intel.com> wrote:
> > >
> > > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > > Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > >
> > > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > >
> > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > >
> > > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > > atomic update is waiting on.
> > > > > > >
> > > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > > ---
> > > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > > >  2 files changed, 33 insertions(+)
> > > > > > >
> > > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > > >  }
> > > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > > >
> > > > > > > +/**
> > > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > > + *
> > > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > > + * vblank and frame duration
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > for VRR this targets the highest frame rate possible for the current
> > > > > > VRR mode, right?
> > > > > >
> > > > >
> > > > > It is based on vblank->framedur_ns which is in turn based on
> > > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> > > >
> > > > I don't know. :-)
> > >
> > > At least for i915 this will give you the maximum frame
> > > duration.
> > >
> > > Also this does not calculate the start of vblank, it
> > > calculates the start of active video.
> > 
> > AFAIU, vsync_end/vsync_start are in units of line, so I could do something like:
> > 
> >   vsync_lines = vblank->hwmode.vsync_end - vblank->hwmode.vsync_start;
> >   vsyncdur = ns_to_ktime(vblank->linedur_ns * vsync_lines);
> >   framedur = ns_to_ktime(vblank->framedur_ns);
> >   *vblanktime = ktime_add(*vblanktime, ktime_sub(framedur, vsyncdur));
> 
> Something like this should work:
>  vblank_start = framedur_ns * crtc_vblank_start / crtc_vtotal
>  deadline = vblanktime + vblank_start
> 
> That would be the expected time when the next start of vblank
> happens.

Except that drm_vblank_count_and_time() will give you the last
sampled timestamp, which may be long ago in the past. Would
need to add an _accurate version of that if we want to be
guaranteed a fresh sample.

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [Freedreno] [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 21:48               ` Ville Syrjälä
@ 2023-02-21 22:28                 ` Rob Clark
  2023-02-21 22:46                   ` Ville Syrjälä
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-21 22:28 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Rob Clark, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, dri-devel, Pekka Paalanen,
	Thomas Zimmermann, Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 1:48 PM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
>
> On Tue, Feb 21, 2023 at 11:39:40PM +0200, Ville Syrjälä wrote:
> > On Tue, Feb 21, 2023 at 11:54:55AM -0800, Rob Clark wrote:
> > > On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
> > > <ville.syrjala@linux.intel.com> wrote:
> > > >
> > > > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > > > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > >
> > > > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >
> > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > >
> > > > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > > > atomic update is waiting on.
> > > > > > > >
> > > > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > > > ---
> > > > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > > > >  2 files changed, 33 insertions(+)
> > > > > > > >
> > > > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > > > >  }
> > > > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > > > >
> > > > > > > > +/**
> > > > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > > > + *
> > > > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > > > + * vblank and frame duration
> > > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > for VRR this targets the highest frame rate possible for the current
> > > > > > > VRR mode, right?
> > > > > > >
> > > > > >
> > > > > > It is based on vblank->framedur_ns which is in turn based on
> > > > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> > > > >
> > > > > I don't know. :-)
> > > >
> > > > At least for i915 this will give you the maximum frame
> > > > duration.
> > > >
> > > > Also this does not calculate the start of vblank, it
> > > > calculates the start of active video.
> > >
> > > AFAIU, vsync_end/vsync_start are in units of line, so I could do something like:
> > >
> > >   vsync_lines = vblank->hwmode.vsync_end - vblank->hwmode.vsync_start;
> > >   vsyncdur = ns_to_ktime(vblank->linedur_ns * vsync_lines);
> > >   framedur = ns_to_ktime(vblank->framedur_ns);
> > >   *vblanktime = ktime_add(*vblanktime, ktime_sub(framedur, vsyncdur));
> >
> > Something like this should work:
> >  vblank_start = framedur_ns * crtc_vblank_start / crtc_vtotal
> >  deadline = vblanktime + vblank_start
> >
> > That would be the expected time when the next start of vblank
> > happens.
>
> Except that drm_vblank_count_and_time() will give you the last
> sampled timestamp, which may be long ago in the past. Would
> need to add an _accurate version of that if we want to be
> guaranteed a fresh sample.

IIRC the only time we wouldn't have a fresh sample is if the screen
has been idle for some time?  In which case, I think that doesn't
matter.

BR,
-R

>
> --
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [Freedreno] [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 22:28                 ` [Freedreno] " Rob Clark
@ 2023-02-21 22:46                   ` Ville Syrjälä
  2023-02-21 23:20                     ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Ville Syrjälä @ 2023-02-21 22:46 UTC (permalink / raw)
  To: Rob Clark
  Cc: Rob Clark, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, dri-devel, Pekka Paalanen,
	Thomas Zimmermann, Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 02:28:10PM -0800, Rob Clark wrote:
> On Tue, Feb 21, 2023 at 1:48 PM Ville Syrjälä
> <ville.syrjala@linux.intel.com> wrote:
> >
> > On Tue, Feb 21, 2023 at 11:39:40PM +0200, Ville Syrjälä wrote:
> > > On Tue, Feb 21, 2023 at 11:54:55AM -0800, Rob Clark wrote:
> > > > On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
> > > > <ville.syrjala@linux.intel.com> wrote:
> > > > >
> > > > > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > > > > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > >
> > > > > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > > >
> > > > > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > >
> > > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > >
> > > > > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > > > > atomic update is waiting on.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > > > > ---
> > > > > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > > > > >  2 files changed, 33 insertions(+)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > > > > >  }
> > > > > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > > > > >
> > > > > > > > > +/**
> > > > > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > > > > + *
> > > > > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > > > > + * vblank and frame duration
> > > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > for VRR this targets the highest frame rate possible for the current
> > > > > > > > VRR mode, right?
> > > > > > > >
> > > > > > >
> > > > > > > It is based on vblank->framedur_ns which is in turn based on
> > > > > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> > > > > >
> > > > > > I don't know. :-)
> > > > >
> > > > > At least for i915 this will give you the maximum frame
> > > > > duration.
> > > > >
> > > > > Also this does not calculate the start of vblank, it
> > > > > calculates the start of active video.
> > > >
> > > > AFAIU, vsync_end/vsync_start are in units of line, so I could do something like:
> > > >
> > > >   vsync_lines = vblank->hwmode.vsync_end - vblank->hwmode.vsync_start;
> > > >   vsyncdur = ns_to_ktime(vblank->linedur_ns * vsync_lines);
> > > >   framedur = ns_to_ktime(vblank->framedur_ns);
> > > >   *vblanktime = ktime_add(*vblanktime, ktime_sub(framedur, vsyncdur));
> > >
> > > Something like this should work:
> > >  vblank_start = framedur_ns * crtc_vblank_start / crtc_vtotal
> > >  deadline = vblanktime + vblank_start
> > >
> > > That would be the expected time when the next start of vblank
> > > happens.
> >
> > Except that drm_vblank_count_and_time() will give you the last
> > sampled timestamp, which may be long ago in the past. Would
> > need to add an _accurate version of that if we want to be
> > guaranteed a fresh sample.
> 
> IIRC the only time we wouldn't have a fresh sample is if the screen
> has been idle for some time?

IIRC "some time" == 1 idle frame, for any driver that sets
vblank_disable_immediate.

> In which case, I think that doesn't
> matter.
> 
> BR,
> -R
> 
> >
> > --
> > Ville Syrjälä
> > Intel

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [Freedreno] [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 22:46                   ` Ville Syrjälä
@ 2023-02-21 23:20                     ` Rob Clark
  2023-02-21 23:25                       ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-21 23:20 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Rob Clark, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, dri-devel, Pekka Paalanen,
	Thomas Zimmermann, Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 2:46 PM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
>
> On Tue, Feb 21, 2023 at 02:28:10PM -0800, Rob Clark wrote:
> > On Tue, Feb 21, 2023 at 1:48 PM Ville Syrjälä
> > <ville.syrjala@linux.intel.com> wrote:
> > >
> > > On Tue, Feb 21, 2023 at 11:39:40PM +0200, Ville Syrjälä wrote:
> > > > On Tue, Feb 21, 2023 at 11:54:55AM -0800, Rob Clark wrote:
> > > > > On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
> > > > > <ville.syrjala@linux.intel.com> wrote:
> > > > > >
> > > > > > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > > > > > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >
> > > > > > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > > >
> > > > > > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > > > > > atomic update is waiting on.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > > > > > ---
> > > > > > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > > > > > >  2 files changed, 33 insertions(+)
> > > > > > > > > >
> > > > > > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > > > > > >  }
> > > > > > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > > > > > >
> > > > > > > > > > +/**
> > > > > > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > > > > > + *
> > > > > > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > > > > > + * vblank and frame duration
> > > > > > > > >
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > for VRR this targets the highest frame rate possible for the current
> > > > > > > > > VRR mode, right?
> > > > > > > > >
> > > > > > > >
> > > > > > > > It is based on vblank->framedur_ns which is in turn based on
> > > > > > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> > > > > > >
> > > > > > > I don't know. :-)
> > > > > >
> > > > > > At least for i915 this will give you the maximum frame
> > > > > > duration.
> > > > > >
> > > > > > Also this does not calculate the start of vblank, it
> > > > > > calculates the start of active video.
> > > > >
> > > > > AFAIU, vsync_end/vsync_start are in units of line, so I could do something like:
> > > > >
> > > > >   vsync_lines = vblank->hwmode.vsync_end - vblank->hwmode.vsync_start;
> > > > >   vsyncdur = ns_to_ktime(vblank->linedur_ns * vsync_lines);
> > > > >   framedur = ns_to_ktime(vblank->framedur_ns);
> > > > >   *vblanktime = ktime_add(*vblanktime, ktime_sub(framedur, vsyncdur));
> > > >
> > > > Something like this should work:
> > > >  vblank_start = framedur_ns * crtc_vblank_start / crtc_vtotal
> > > >  deadline = vblanktime + vblank_start
> > > >
> > > > That would be the expected time when the next start of vblank
> > > > happens.
> > >
> > > Except that drm_vblank_count_and_time() will give you the last
> > > sampled timestamp, which may be long ago in the past. Would
> > > need to add an _accurate version of that if we want to be
> > > guaranteed a fresh sample.
> >
> > IIRC the only time we wouldn't have a fresh sample is if the screen
> > has been idle for some time?
>
> IIRC "some time" == 1 idle frame, for any driver that sets
> vblank_disable_immediate.
>

hmm, ok so it should be still good down to 30fps ;-)

I thought we calculated based on frame # and line # on hw that
supported that?  But it's been a while since looking at vblank code

BR,
-R

> > In which case, I think that doesn't
> > matter.
> >
> > BR,
> > -R
> >
> > >
> > > --
> > > Ville Syrjälä
> > > Intel
>
> --
> Ville Syrjälä
> Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [Freedreno] [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 23:20                     ` Rob Clark
@ 2023-02-21 23:25                       ` Rob Clark
  0 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-21 23:25 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Rob Clark, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, dri-devel, Pekka Paalanen,
	Thomas Zimmermann, Rodrigo Vivi, Alex Deucher, freedreno

On Tue, Feb 21, 2023 at 3:20 PM Rob Clark <robdclark@gmail.com> wrote:
>
> On Tue, Feb 21, 2023 at 2:46 PM Ville Syrjälä
> <ville.syrjala@linux.intel.com> wrote:
> >
> > On Tue, Feb 21, 2023 at 02:28:10PM -0800, Rob Clark wrote:
> > > On Tue, Feb 21, 2023 at 1:48 PM Ville Syrjälä
> > > <ville.syrjala@linux.intel.com> wrote:
> > > >
> > > > On Tue, Feb 21, 2023 at 11:39:40PM +0200, Ville Syrjälä wrote:
> > > > > On Tue, Feb 21, 2023 at 11:54:55AM -0800, Rob Clark wrote:
> > > > > > On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
> > > > > > <ville.syrjala@linux.intel.com> wrote:
> > > > > > >
> > > > > > > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > > > > > > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > >
> > > > > > > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > > > >
> > > > > > > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > > > > > > atomic update is waiting on.
> > > > > > > > > > >
> > > > > > > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > > > > > > ---
> > > > > > > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > > > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > > > > > > >  2 files changed, 33 insertions(+)
> > > > > > > > > > >
> > > > > > > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > > > > > > >  }
> > > > > > > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > > > > > > >
> > > > > > > > > > > +/**
> > > > > > > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > > > > > > + *
> > > > > > > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > > > > > > + * vblank and frame duration
> > > > > > > > > >
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > for VRR this targets the highest frame rate possible for the current
> > > > > > > > > > VRR mode, right?
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > It is based on vblank->framedur_ns which is in turn based on
> > > > > > > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> > > > > > > >
> > > > > > > > I don't know. :-)
> > > > > > >
> > > > > > > At least for i915 this will give you the maximum frame
> > > > > > > duration.
> > > > > > >
> > > > > > > Also this does not calculate the start of vblank, it
> > > > > > > calculates the start of active video.
> > > > > >
> > > > > > AFAIU, vsync_end/vsync_start are in units of line, so I could do something like:
> > > > > >
> > > > > >   vsync_lines = vblank->hwmode.vsync_end - vblank->hwmode.vsync_start;
> > > > > >   vsyncdur = ns_to_ktime(vblank->linedur_ns * vsync_lines);
> > > > > >   framedur = ns_to_ktime(vblank->framedur_ns);
> > > > > >   *vblanktime = ktime_add(*vblanktime, ktime_sub(framedur, vsyncdur));
> > > > >
> > > > > Something like this should work:
> > > > >  vblank_start = framedur_ns * crtc_vblank_start / crtc_vtotal
> > > > >  deadline = vblanktime + vblank_start
> > > > >
> > > > > That would be the expected time when the next start of vblank
> > > > > happens.
> > > >
> > > > Except that drm_vblank_count_and_time() will give you the last
> > > > sampled timestamp, which may be long ago in the past. Would
> > > > need to add an _accurate version of that if we want to be
> > > > guaranteed a fresh sample.
> > >
> > > IIRC the only time we wouldn't have a fresh sample is if the screen
> > > has been idle for some time?
> >
> > IIRC "some time" == 1 idle frame, for any driver that sets
> > vblank_disable_immediate.
> >
>
> hmm, ok so it should still be good down to 30fps ;-)
>
> I thought we calculated based on frame # and line # on hw that
> supported that?  But it's been a while since looking at vblank code

looks like drm_get_last_vbltimestamp() is what I want..

> BR,
> -R
>
> > > In which case, I think that doesn't
> > > matter.
> > >
> > > BR,
> > > -R
> > >
> > > >
> > > > --
> > > > Ville Syrjälä
> > > > Intel
> >
> > --
> > Ville Syrjälä
> > Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-21 17:53         ` Rob Clark
@ 2023-02-22  9:49           ` Pekka Paalanen
  2023-02-22 10:26             ` Luben Tuikov
  2023-02-22 15:37             ` Rob Clark
  0 siblings, 2 replies; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-22  9:49 UTC (permalink / raw)
  To: Rob Clark
  Cc: Luben Tuikov, Rob Clark, Gustavo Padovan, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK

On Tue, 21 Feb 2023 09:53:56 -0800
Rob Clark <robdclark@gmail.com> wrote:

> On Tue, Feb 21, 2023 at 8:48 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> >
> > On 2023-02-20 11:14, Rob Clark wrote:  
> > > On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> > >>
> > >> On Sat, 18 Feb 2023 13:15:49 -0800
> > >> Rob Clark <robdclark@gmail.com> wrote:
> > >>  
> > >>> From: Rob Clark <robdclark@chromium.org>
> > >>>
> > >>> Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> > >>> wait (as opposed to a "housekeeping" wait to know when to cleanup after
> > >>> some work has completed).  Usermode components of GPU driver stacks
> > >>> often poll() on fence fd's to know when it is safe to do things like
> > >>> free or reuse a buffer, but they can also poll() on a fence fd when
> > >>> waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> > >>> lets the kernel differentiate these two cases.
> > >>>
> > >>> Signed-off-by: Rob Clark <robdclark@chromium.org>  
> > >>
> > >> Hi,
> > >>
> > >> where would the UAPI documentation of this go?
> > >> It seems to be missing.  
> > >
> > > Good question, I am not sure.  The poll() man page has a description,
> > > but my usage doesn't fit that _exactly_ (but OTOH the description is a
> > > bit vague).
> > >  
> > >> If a Wayland compositor is polling application fences to know which
> > >> client buffer to use in its rendering, should the compositor poll with
> > >> PRI or not? If a compositor polls with PRI, then all fences from all
> > >> applications would always be PRI. Would that be harmful somehow or
> > >> would it be beneficial?  
> > >
> > > I think a compositor would rather use the deadline ioctl and then poll
> > > without PRI.  Otherwise you are giving an urgency signal to the fence
> > > signaller which might not necessarily be needed.
> > >
> > > The places where I expect PRI to be useful is more in mesa (things
> > > like glFinish(), readpix, and other similar sorts of blocking APIs)  
> > Hi,
> >
> > Hmm, but then user-space could do the opposite, namely, submit work as usual--never
> > using the SET_DEADLINE ioctl, and then at the end, poll using (E)POLLPRI. That seems
> > like a possible usage pattern, unintended--maybe, but possible. Do we want to discourage
> > this? Wouldn't SET_DEADLINE be enough? I mean, one can call SET_DEADLINE with the current
> > time, and then wouldn't that be equivalent to (E)POLLPRI?  
> 
> Yeah, (E)POLLPRI isn't strictly needed if we have SET_DEADLINE.  It is
> slightly more convenient if you want an immediate deadline (single
> syscall instead of two), but not strictly needed.  OTOH it piggy-backs
> on existing UABI.

In that case, I would be conservative, and not add the POLLPRI
semantics. A UAPI addition that is not strictly needed, and where it is
somewhat unclear whether it violates any design principles, is best not
done until it is proven to be beneficial.

Besides, a Wayland compositor does not necessarily need to add the fd
to its main event loop for poll. It could just SET_DEADLINE, and then
when it renders simply check if the fence passed or not already. Not
polling means the compositor does not need to wake up at the moment the
fence signals to just record a flag.

On another matter, if the application uses SET_DEADLINE with one
timestamp, and the compositor uses SET_DEADLINE on the same thing with
another timestamp, what should happen?

Maybe it's a soft-realtime app whose primary goal is not display, and
it needs the result faster than the window server?

Maybe SET_DEADLINE should set the deadline only to an earlier timestamp
and never later?
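
[Editorial note: the "earlier timestamp wins" policy Pekka raises could be modeled as below. This is a toy sketch, not the actual dma_fence implementation; it only illustrates how a fence context might merge deadlines from multiple waiters (e.g. an application and a compositor).]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy fence with a single tracked deadline (names are illustrative). */
struct fake_fence {
	bool has_deadline;
	int64_t deadline_ns;
};

/* Keep only the earliest deadline ever requested; later ones are ignored. */
static void fake_fence_set_deadline(struct fake_fence *f, int64_t deadline_ns)
{
	if (!f->has_deadline || deadline_ns < f->deadline_ns) {
		f->has_deadline = true;
		f->deadline_ns = deadline_ns;
	}
}
```

With this policy, a soft-realtime app asking for an earlier deadline than the compositor wins, and a later request can never relax an urgency hint that was already given.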


Thanks,
pq

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-21 17:50           ` Rob Clark
@ 2023-02-22  9:57             ` Pekka Paalanen
  2023-02-22 15:44               ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-22  9:57 UTC (permalink / raw)
  To: Rob Clark
  Cc: Ville Syrjälä,
	Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Rodrigo Vivi, Alex Deucher, freedreno

On Tue, 21 Feb 2023 09:50:20 -0800
Rob Clark <robdclark@gmail.com> wrote:

> On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
> <ville.syrjala@linux.intel.com> wrote:
> >
> > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:  
> > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > Rob Clark <robdclark@gmail.com> wrote:
> > >  
> > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> > > > >
> > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > >  
> > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > >
> > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > atomic update is waiting on.
> > > > > >
> > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > ---
> > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > >  2 files changed, 33 insertions(+)
> > > > > >
> > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > >  }
> > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > >
> > > > > > +/**
> > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > + *
> > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > + * vblank and frame duration  
> > > > >
> > > > > Hi,
> > > > >
> > > > > for VRR this targets the highest frame rate possible for the current
> > > > > VRR mode, right?
> > > > >  
> > > >
> > > > It is based on vblank->framedur_ns which is in turn based on
> > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?  
> > >
> > > I don't know. :-)  
> >
> > At least for i915 this will give you the maximum frame
> > duration.  
> 
> I suppose one could argue that maximum frame duration is the actual
> deadline.  Anything less is just moar fps, but not going to involve
> stalling until vblank N+1, AFAIU
> 
> > Also this does not calculate the start of vblank, it
> > calculates the start of active video.  
> 
> Probably something like end of previous frame's video..  might not be
> _exactly_ correct (because some buffering involved), but OTOH on the
> GPU side, I expect the driver to set a timer for a few ms or so before
> the deadline.  So there is some wiggle room.

The vblank timestamp is defined to be the time of the first active
pixel of the frame in the video signal. At least that's the one that
UAPI carries (when not tearing?). It is not the start of the vblank period.

With VRR, the front porch before the first active pixel can be multiple
milliseconds. The difference between 144 Hz and 60 Hz is 9.7 ms for
example.
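
[Editorial note: the arithmetic behind Pekka's 9.7 ms figure, as a quick sketch (refresh rates are the ones named above):]

```c
#include <assert.h>
#include <stdint.h>

/* Nominal frame duration in nanoseconds for a given refresh rate. */
static int64_t framedur_ns(int64_t hz)
{
	return 1000000000 / hz;
}
```

framedur_ns(60) - framedur_ns(144) comes out to 9722222 ns, i.e. roughly 9.7 ms of extra front porch a VRR panel may insert compared to its fastest rate.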


Thanks,
pq

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness
  2023-02-18 21:15 ` [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness Rob Clark
@ 2023-02-22 10:23   ` Tvrtko Ursulin
  2023-02-22 15:28     ` Christian König
  2023-02-22 11:01   ` Luben Tuikov
  1 sibling, 1 reply; 93+ messages in thread
From: Tvrtko Ursulin @ 2023-02-22 10:23 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Rob Clark, Gustavo Padovan, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Rodrigo Vivi, Alex Deucher, freedreno, Christian König,
	open list:SYNC FILE FRAMEWORK


On 18/02/2023 21:15, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> Add a way to hint to the fence signaler of an upcoming deadline, such as
> vblank, which the fence waiter would prefer not to miss.  This is to aid
> the fence signaler in making power management decisions, like boosting
> frequency as the deadline approaches and awareness of missing deadlines
> so that can be factored in to the frequency scaling.
> 
> v2: Drop dma_fence::deadline and related logic to filter duplicate
>      deadlines, to avoid increasing dma_fence size.  The fence-context
>      implementation will need similar logic to track deadlines of all
>      the fences on the same timeline.  [ckoenig]
> v3: Clarify locking wrt. set_deadline callback
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/dma-buf/dma-fence.c | 20 ++++++++++++++++++++
>   include/linux/dma-fence.h   | 20 ++++++++++++++++++++
>   2 files changed, 40 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> index 0de0482cd36e..763b32627684 100644
> --- a/drivers/dma-buf/dma-fence.c
> +++ b/drivers/dma-buf/dma-fence.c
> @@ -912,6 +912,26 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
>   }
>   EXPORT_SYMBOL(dma_fence_wait_any_timeout);
>   
> +
> +/**
> + * dma_fence_set_deadline - set desired fence-wait deadline
> + * @fence:    the fence that is to be waited on
> + * @deadline: the time by which the waiter hopes for the fence to be
> + *            signaled
> + *
> + * Inform the fence signaler of an upcoming deadline, such as vblank, by
> + * which point the waiter would prefer the fence to be signaled by.  This
> + * is intended to give feedback to the fence signaler to aid in power
> + * management decisions, such as boosting GPU frequency if a periodic
> + * vblank deadline is approaching.
> + */
> +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
> +{
> +	if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
> +		fence->ops->set_deadline(fence, deadline);
> +}
> +EXPORT_SYMBOL(dma_fence_set_deadline);
> +
>   /**
>    * dma_fence_describe - Dump fence describtion into seq_file
> >   * @fence: the fence to describe
> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
> index 775cdc0b4f24..d77f6591c453 100644
> --- a/include/linux/dma-fence.h
> +++ b/include/linux/dma-fence.h
> @@ -99,6 +99,7 @@ enum dma_fence_flag_bits {
>   	DMA_FENCE_FLAG_SIGNALED_BIT,
>   	DMA_FENCE_FLAG_TIMESTAMP_BIT,
>   	DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
> +	DMA_FENCE_FLAG_HAS_DEADLINE_BIT,

Would this bit be better left out of the core implementation, given 
that the approach is that the component which implements dma-fence has 
to track the actual deadline and all?

Also taking a step back - are we all okay with starting to expand the 
relatively simple core synchronisation primitive with side channel data 
like this? What would be the criteria for what side channel data would 
be acceptable? Taking note the thing lives outside drivers/gpu/.

Regards,

Tvrtko

>   	DMA_FENCE_FLAG_USER_BITS, /* must always be last member */
>   };
>   
> @@ -257,6 +258,23 @@ struct dma_fence_ops {
>   	 */
>   	void (*timeline_value_str)(struct dma_fence *fence,
>   				   char *str, int size);
> +
> +	/**
> +	 * @set_deadline:
> +	 *
> +	 * Callback to allow a fence waiter to inform the fence signaler of
> +	 * an upcoming deadline, such as vblank, by which point the waiter
> +	 * would prefer the fence to be signaled by.  This is intended to
> +	 * give feedback to the fence signaler to aid in power management
> +	 * decisions, such as boosting GPU frequency.
> +	 *
> +	 * This is called without &dma_fence.lock held, it can be called
> +	 * multiple times and from any context.  Locking is up to the callee
> +	 * if it has some state to manage.
> +	 *
> +	 * This callback is optional.
> +	 */
> +	void (*set_deadline)(struct dma_fence *fence, ktime_t deadline);
>   };
>   
>   void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
> @@ -583,6 +601,8 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
>   	return ret < 0 ? ret : 0;
>   }
>   
> +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline);
> +
>   struct dma_fence *dma_fence_get_stub(void);
>   struct dma_fence *dma_fence_allocate_private_stub(void);
>   u64 dma_fence_context_alloc(unsigned num);

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-22  9:49           ` Pekka Paalanen
@ 2023-02-22 10:26             ` Luben Tuikov
  2023-02-22 15:37             ` Rob Clark
  1 sibling, 0 replies; 93+ messages in thread
From: Luben Tuikov @ 2023-02-22 10:26 UTC (permalink / raw)
  To: Pekka Paalanen, Rob Clark
  Cc: Rob Clark, Gustavo Padovan, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, dri-devel, Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK

On 2023-02-22 04:49, Pekka Paalanen wrote:
> On Tue, 21 Feb 2023 09:53:56 -0800
> Rob Clark <robdclark@gmail.com> wrote:
> 
>> On Tue, Feb 21, 2023 at 8:48 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
>>>
>>> On 2023-02-20 11:14, Rob Clark wrote:  
>>>> On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
>>>>>
>>>>> On Sat, 18 Feb 2023 13:15:49 -0800
>>>>> Rob Clark <robdclark@gmail.com> wrote:
>>>>>  
>>>>>> From: Rob Clark <robdclark@chromium.org>
>>>>>>
>>>>>> Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
>>>>>> wait (as opposed to a "housekeeping" wait to know when to cleanup after
>>>>>> some work has completed).  Usermode components of GPU driver stacks
>>>>>> often poll() on fence fd's to know when it is safe to do things like
>>>>>> free or reuse a buffer, but they can also poll() on a fence fd when
>>>>>> waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
>>>>>> lets the kernel differentiate these two cases.
>>>>>>
>>>>>> Signed-off-by: Rob Clark <robdclark@chromium.org>  
>>>>>
>>>>> Hi,
>>>>>
>>>>> where would the UAPI documentation of this go?
>>>>> It seems to be missing.  
>>>>
>>>> Good question, I am not sure.  The poll() man page has a description,
>>>> but my usage doesn't fit that _exactly_ (but OTOH the description is a
>>>> bit vague).
>>>>  
>>>>> If a Wayland compositor is polling application fences to know which
>>>>> client buffer to use in its rendering, should the compositor poll with
>>>>> PRI or not? If a compositor polls with PRI, then all fences from all
>>>>> applications would always be PRI. Would that be harmful somehow or
>>>>> would it be beneficial?  
>>>>
>>>> I think a compositor would rather use the deadline ioctl and then poll
>>>> without PRI.  Otherwise you are giving an urgency signal to the fence
>>>> signaller which might not necessarily be needed.
>>>>
>>>> The places where I expect PRI to be useful is more in mesa (things
>>>> like glFinish(), readpix, and other similar sorts of blocking APIs)  
>>> Hi,
>>>
>>> Hmm, but then user-space could do the opposite, namely, submit work as usual--never
>>> using the SET_DEADLINE ioctl, and then at the end, poll using (E)POLLPRI. That seems
>>> like a possible usage pattern, unintended--maybe, but possible. Do we want to discourage
>>> this? Wouldn't SET_DEADLINE be enough? I mean, one can call SET_DEADLINE with the current
>>> time, and then wouldn't that be equivalent to (E)POLLPRI?  
>>
>> Yeah, (E)POLLPRI isn't strictly needed if we have SET_DEADLINE.  It is
>> slightly more convenient if you want an immediate deadline (single
>> syscall instead of two), but not strictly needed.  OTOH it piggy-backs
>> on existing UABI.
> 
> In that case, I would be conservative, and not add the POLLPRI
> semantics. A UAPI addition that is not strictly needed, and where it is
> somewhat unclear whether it violates any design principles, is best not
> done until it is proven to be beneficial.

That is my sentiment as well. Moreover, on hard-realtime systems,
one would want to set the deadline at the outset and not at poll time.
-- 
Regards,
Luben


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 03/14] dma-buf/fence-chain: Add fence deadline support
  2023-02-18 21:15 ` [PATCH v4 03/14] dma-buf/fence-chain: " Rob Clark
@ 2023-02-22 10:27   ` Tvrtko Ursulin
  2023-02-22 15:55     ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Tvrtko Ursulin @ 2023-02-22 10:27 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Rob Clark, Gustavo Padovan, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Rodrigo Vivi, Alex Deucher, freedreno, Christian König,
	open list:SYNC FILE FRAMEWORK


On 18/02/2023 21:15, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> Propagate the deadline to all the fences in the chain.
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> Reviewed-by: Christian König <christian.koenig@amd.com> for this one.
> ---
>   drivers/dma-buf/dma-fence-chain.c | 13 +++++++++++++
>   1 file changed, 13 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
> index a0d920576ba6..4684874af612 100644
> --- a/drivers/dma-buf/dma-fence-chain.c
> +++ b/drivers/dma-buf/dma-fence-chain.c
> @@ -206,6 +206,18 @@ static void dma_fence_chain_release(struct dma_fence *fence)
>   	dma_fence_free(fence);
>   }
>   
> +
> +static void dma_fence_chain_set_deadline(struct dma_fence *fence,
> +					 ktime_t deadline)
> +{
> +	dma_fence_chain_for_each(fence, fence) {
> +		struct dma_fence_chain *chain = to_dma_fence_chain(fence);
> +		struct dma_fence *f = chain ? chain->fence : fence;

Low level comment - above two lines could be replaced with:

	struct dma_fence *f = dma_fence_chain_contained(fence);

Although to be fair I am not sure that wouldn't make it less readable, 
from the point of view that the fence might not be a chain, so 
dma_fence_chain_contained() reads a bit dodgy as an API.

Regards,

Tvrtko

> +
> +		dma_fence_set_deadline(f, deadline);
> +	}
> +}
> +
>   const struct dma_fence_ops dma_fence_chain_ops = {
>   	.use_64bit_seqno = true,
>   	.get_driver_name = dma_fence_chain_get_driver_name,
> @@ -213,6 +225,7 @@ const struct dma_fence_ops dma_fence_chain_ops = {
>   	.enable_signaling = dma_fence_chain_enable_signaling,
>   	.signaled = dma_fence_chain_signaled,
>   	.release = dma_fence_chain_release,
> +	.set_deadline = dma_fence_chain_set_deadline,
>   };
>   EXPORT_SYMBOL(dma_fence_chain_ops);
>   

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-18 21:15 ` [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time Rob Clark
  2023-02-20  9:08   ` Pekka Paalanen
@ 2023-02-22 10:37   ` Luben Tuikov
  2023-02-22 15:48     ` Rob Clark
  1 sibling, 1 reply; 93+ messages in thread
From: Luben Tuikov @ 2023-02-22 10:37 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list,
	Pekka Paalanen, Rodrigo Vivi, Alex Deucher, freedreno

On 2023-02-18 16:15, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> Will be used in the next commit to set a deadline on fences that an
> atomic update is waiting on.
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
>  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
>  include/drm/drm_vblank.h     |  1 +
>  2 files changed, 33 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 2ff31717a3de..caf25ebb34c5 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
>  }
>  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
>  
> +/**
> + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> + * @crtc: the crtc for which to calculate next vblank time
> + * @vblanktime: pointer to time to receive the next vblank timestamp.
> + *
> + * Calculate the expected time of the next vblank based on time of previous
> + * vblank and frame duration
> + */
> +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
> +{
> +	unsigned int pipe = drm_crtc_index(crtc);
> +	struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
> +	u64 count;
> +
> +	if (!vblank->framedur_ns)
> +		return -EINVAL;
> +
> +	count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
> +
> +	/*
> +	 * If we don't get a valid count, then we probably also don't
> +	 * have a valid time:
> +	 */
> +	if (!count)
> +		return -EINVAL;
> +
> +	*vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));

I'd rather this not do any arithmetic, i.e. the add, and simply return
the calculated remaining time--so instead of adding, it would simply
assign the remaining time, and possibly rename "vblanktime" to something like "time_to_vblank".

Changing the top comment to "calculate the time remaining to the next vblank".
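
[Editorial note: the two contracts being compared could be sketched as follows. This is a toy model, not the real drm_vblank code; the function names are hypothetical.]

```c
#include <assert.h>
#include <stdint.h>

/* Absolute-timestamp variant (what the patch does): previous vblank
 * timestamp plus one frame duration. */
static int64_t next_vblank_time_ns(int64_t last_vblank_ns,
				   int64_t framedur_ns)
{
	return last_vblank_ns + framedur_ns;
}

/* Remaining-time variant (what is suggested here): the caller supplies
 * "now" and gets back only the time left until the next vblank. */
static int64_t time_to_vblank_ns(int64_t now_ns, int64_t last_vblank_ns,
				 int64_t framedur_ns)
{
	return next_vblank_time_ns(last_vblank_ns, framedur_ns) - now_ns;
}
```

The trade-off is which side does the arithmetic: with the remaining-time form the helper never touches the caller's timestamp storage, which is the concern raised in the review of patch 11.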
-- 
Regards,
Luben

> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_crtc_next_vblank_time);
> +
>  static void send_vblank_event(struct drm_device *dev,
>  		struct drm_pending_vblank_event *e,
>  		u64 seq, ktime_t now)
> diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
> index 733a3e2d1d10..a63bc2c92f3c 100644
> --- a/include/drm/drm_vblank.h
> +++ b/include/drm/drm_vblank.h
> @@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
>  u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
>  u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
>  				   ktime_t *vblanktime);
> +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
>  void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
>  			       struct drm_pending_vblank_event *e);
>  void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 11/14] drm/atomic-helper: Set fence deadline for vblank
  2023-02-18 21:15 ` [PATCH v4 11/14] drm/atomic-helper: Set fence deadline for vblank Rob Clark
@ 2023-02-22 10:46   ` Luben Tuikov
  2023-02-22 15:50     ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Luben Tuikov @ 2023-02-22 10:46 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list,
	Daniel Vetter, Pekka Paalanen, Rodrigo Vivi, Alex Deucher,
	freedreno

On 2023-02-18 16:15, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> For an atomic commit updating a single CRTC (ie. a pageflip) calculate
> the next vblank time, and inform the fence(s) of that deadline.
> 
> v2: Comment typo fix (danvet)
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
>  drivers/gpu/drm/drm_atomic_helper.c | 36 +++++++++++++++++++++++++++++
>  1 file changed, 36 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
> index d579fd8f7cb8..35a4dc714920 100644
> --- a/drivers/gpu/drm/drm_atomic_helper.c
> +++ b/drivers/gpu/drm/drm_atomic_helper.c
> @@ -1511,6 +1511,40 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
>  }
>  EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_enables);
>  
> +/*
> + * For atomic updates which touch just a single CRTC, calculate the time of the
> + * next vblank, and inform all the fences of the deadline.
> + */
> +static void set_fence_deadline(struct drm_device *dev,
> +			       struct drm_atomic_state *state)
> +{
> +	struct drm_crtc *crtc, *wait_crtc = NULL;
> +	struct drm_crtc_state *new_crtc_state;
> +	struct drm_plane *plane;
> +	struct drm_plane_state *new_plane_state;
> +	ktime_t vbltime;

I've not looked at the latest language spec, but AFAIR "vbltime"
would be uninitialized here. Has this changed?

> +	int i;
> +
> +	for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) {
> +		if (wait_crtc)
> +			return;
> +		wait_crtc = crtc;
> +	}
> +
> +	/* If no CRTCs updated, then nothing to do: */
> +	if (!wait_crtc)
> +		return;
> +
> +	if (drm_crtc_next_vblank_time(wait_crtc, &vbltime))
> +		return;

We have a problem here in that we're adding the time remaining to the next
vblank event to an uninitialized local variable. As per my comment on patch 10,
I'd rather drm_crtc_next_vblank_time() yield the time remaining to the vblank event,
and we can do the arithmetic locally here in this function.
-- 
Regards,
Luben

> +
> +	for_each_new_plane_in_state (state, plane, new_plane_state, i) {
> +		if (!new_plane_state->fence)
> +			continue;
> +		dma_fence_set_deadline(new_plane_state->fence, vbltime);
> +	}
> +}
> +
>  /**
>   * drm_atomic_helper_wait_for_fences - wait for fences stashed in plane state
>   * @dev: DRM device
> @@ -1540,6 +1574,8 @@ int drm_atomic_helper_wait_for_fences(struct drm_device *dev,
>  	struct drm_plane_state *new_plane_state;
>  	int i, ret;
>  
> +	set_fence_deadline(dev, state);
> +
>  	for_each_new_plane_in_state(state, plane, new_plane_state, i) {
>  		if (!new_plane_state->fence)
>  			continue;


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness
  2023-02-18 21:15 ` [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness Rob Clark
  2023-02-22 10:23   ` Tvrtko Ursulin
@ 2023-02-22 11:01   ` Luben Tuikov
  1 sibling, 0 replies; 93+ messages in thread
From: Luben Tuikov @ 2023-02-22 11:01 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Rob Clark, Gustavo Padovan, Tvrtko Ursulin, Christian König,
	Michel Dänzer, open list, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Rodrigo Vivi, Alex Deucher, freedreno, Christian König,
	open list:SYNC FILE FRAMEWORK

On 2023-02-18 16:15, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> Add a way to hint to the fence signaler of an upcoming deadline, such as
> vblank, which the fence waiter would prefer not to miss.  This is to aid
> the fence signaler in making power management decisions, like boosting
> frequency as the deadline approaches and awareness of missing deadlines
> so that can be factored in to the frequency scaling.
> 
> v2: Drop dma_fence::deadline and related logic to filter duplicate
>     deadlines, to avoid increasing dma_fence size.  The fence-context
>     implementation will need similar logic to track deadlines of all
>     the fences on the same timeline.  [ckoenig]
> v3: Clarify locking wrt. set_deadline callback
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
>  drivers/dma-buf/dma-fence.c | 20 ++++++++++++++++++++
>  include/linux/dma-fence.h   | 20 ++++++++++++++++++++
>  2 files changed, 40 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> index 0de0482cd36e..763b32627684 100644
> --- a/drivers/dma-buf/dma-fence.c
> +++ b/drivers/dma-buf/dma-fence.c
> @@ -912,6 +912,26 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
>  }
>  EXPORT_SYMBOL(dma_fence_wait_any_timeout);
>  
> +
> +/**

The added empty line above creates a problem for scripts/checkpatch.pl--and
there are a few others here and there. It'd be a good idea to run this series
through checkpatch.pl, if at least informatively.

I wasn't able to apply patch 13 to drm-misc-next or any other branch
known to me, and I didn't see base tree information in the cover letter. I skipped
it and it compiled okay without it.
-- 
Regards,
Luben

> + * dma_fence_set_deadline - set desired fence-wait deadline
> + * @fence:    the fence that is to be waited on
> + * @deadline: the time by which the waiter hopes for the fence to be
> + *            signaled
> + *
> + * Inform the fence signaler of an upcoming deadline, such as vblank, by
> + * which point the waiter would prefer the fence to be signaled by.  This
> + * is intended to give feedback to the fence signaler to aid in power
> + * management decisions, such as boosting GPU frequency if a periodic
> + * vblank deadline is approaching.
> + */
> +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
> +{
> +	if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
> +		fence->ops->set_deadline(fence, deadline);
> +}
> +EXPORT_SYMBOL(dma_fence_set_deadline);
> +
>  /**
>   * dma_fence_describe - Dump fence describtion into seq_file
>   * @fence: the fence to describe
> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
> index 775cdc0b4f24..d77f6591c453 100644
> --- a/include/linux/dma-fence.h
> +++ b/include/linux/dma-fence.h
> @@ -99,6 +99,7 @@ enum dma_fence_flag_bits {
>  	DMA_FENCE_FLAG_SIGNALED_BIT,
>  	DMA_FENCE_FLAG_TIMESTAMP_BIT,
>  	DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
> +	DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
>  	DMA_FENCE_FLAG_USER_BITS, /* must always be last member */
>  };
>  
> @@ -257,6 +258,23 @@ struct dma_fence_ops {
>  	 */
>  	void (*timeline_value_str)(struct dma_fence *fence,
>  				   char *str, int size);
> +
> +	/**
> +	 * @set_deadline:
> +	 *
> +	 * Callback to allow a fence waiter to inform the fence signaler of
> +	 * an upcoming deadline, such as vblank, by which point the waiter
> +	 * would prefer the fence to be signaled by.  This is intended to
> +	 * give feedback to the fence signaler to aid in power management
> +	 * decisions, such as boosting GPU frequency.
> +	 *
> +	 * This is called without &dma_fence.lock held, it can be called
> +	 * multiple times and from any context.  Locking is up to the callee
> +	 * if it has some state to manage.
> +	 *
> +	 * This callback is optional.
> +	 */
> +	void (*set_deadline)(struct dma_fence *fence, ktime_t deadline);
>  };
>  
>  void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
> @@ -583,6 +601,8 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
>  	return ret < 0 ? ret : 0;
>  }
>  
> +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline);
> +
>  struct dma_fence *dma_fence_get_stub(void);
>  struct dma_fence *dma_fence_allocate_private_stub(void);
>  u64 dma_fence_context_alloc(unsigned num);


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness
  2023-02-22 10:23   ` Tvrtko Ursulin
@ 2023-02-22 15:28     ` Christian König
  2023-02-22 17:04       ` Tvrtko Ursulin
  0 siblings, 1 reply; 93+ messages in thread
From: Christian König @ 2023-02-22 15:28 UTC (permalink / raw)
  To: Tvrtko Ursulin, Rob Clark, dri-devel
  Cc: Rob Clark, Gustavo Padovan, Tvrtko Ursulin, Michel Dänzer,
	open list, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Rodrigo Vivi, Alex Deucher, freedreno, Christian König,
	open list:SYNC FILE FRAMEWORK

Am 22.02.23 um 11:23 schrieb Tvrtko Ursulin:
>
> On 18/02/2023 21:15, Rob Clark wrote:
>> From: Rob Clark <robdclark@chromium.org>
>>
>> Add a way to hint to the fence signaler of an upcoming deadline, such as
>> vblank, which the fence waiter would prefer not to miss.  This is to aid
>> the fence signaler in making power management decisions, like boosting
>> frequency as the deadline approaches and awareness of missing deadlines
>> so that this can be factored into the frequency scaling.
>>
>> v2: Drop dma_fence::deadline and related logic to filter duplicate
>>      deadlines, to avoid increasing dma_fence size.  The fence-context
>>      implementation will need similar logic to track deadlines of all
>>      the fences on the same timeline.  [ckoenig]
>> v3: Clarify locking wrt. set_deadline callback
>>
>> Signed-off-by: Rob Clark <robdclark@chromium.org>
>> Reviewed-by: Christian König <christian.koenig@amd.com>
>> ---
>>   drivers/dma-buf/dma-fence.c | 20 ++++++++++++++++++++
>>   include/linux/dma-fence.h   | 20 ++++++++++++++++++++
>>   2 files changed, 40 insertions(+)
>>
>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>> index 0de0482cd36e..763b32627684 100644
>> --- a/drivers/dma-buf/dma-fence.c
>> +++ b/drivers/dma-buf/dma-fence.c
>> @@ -912,6 +912,26 @@ dma_fence_wait_any_timeout(struct dma_fence 
>> **fences, uint32_t count,
>>   }
>>   EXPORT_SYMBOL(dma_fence_wait_any_timeout);
>>   +
>> +/**
>> + * dma_fence_set_deadline - set desired fence-wait deadline
>> + * @fence:    the fence that is to be waited on
>> + * @deadline: the time by which the waiter hopes for the fence to be
>> + *            signaled
>> + *
>> + * Inform the fence signaler of an upcoming deadline, such as 
>> vblank, by
>> + * which point the waiter would prefer the fence to be signaled by.  
>> This
>> + * is intended to give feedback to the fence signaler to aid in power
>> + * management decisions, such as boosting GPU frequency if a periodic
>> + * vblank deadline is approaching.
>> + */
>> +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
>> +{
>> +    if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
>> +        fence->ops->set_deadline(fence, deadline);
>> +}
>> +EXPORT_SYMBOL(dma_fence_set_deadline);
>> +
>>   /**
>>    * dma_fence_describe - Dump fence describtion into seq_file
>>   * @fence: the fence to describe
>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>> index 775cdc0b4f24..d77f6591c453 100644
>> --- a/include/linux/dma-fence.h
>> +++ b/include/linux/dma-fence.h
>> @@ -99,6 +99,7 @@ enum dma_fence_flag_bits {
>>       DMA_FENCE_FLAG_SIGNALED_BIT,
>>       DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>       DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
>> +    DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
>
> Would this bit be better left out from core implementation, given how 
> the approach is the component which implements dma-fence has to track 
> the actual deadline and all?
>
> Also taking a step back - are we all okay with starting to expand the 
> relatively simple core synchronisation primitive with side channel 
> data like this? What would be the criteria for what side channel data 
> would be acceptable? Taking note the thing lives outside drivers/gpu/.

I had similar concerns, and it took me a moment as well to understand the 
background of why this is necessary. I essentially don't see any other 
approach we could take.

Yes, this is GPU/CRTC specific, but we somehow need a common interface 
for communicating it between drivers and that's the dma_fence object as 
far as I can see.

Regards,
Christian.

>
> Regards,
>
> Tvrtko
>
>>       DMA_FENCE_FLAG_USER_BITS, /* must always be last member */
>>   };
>>   @@ -257,6 +258,23 @@ struct dma_fence_ops {
>>        */
>>       void (*timeline_value_str)(struct dma_fence *fence,
>>                      char *str, int size);
>> +
>> +    /**
>> +     * @set_deadline:
>> +     *
>> +     * Callback to allow a fence waiter to inform the fence signaler of
>> +     * an upcoming deadline, such as vblank, by which point the waiter
>> +     * would prefer the fence to be signaled by.  This is intended to
>> +     * give feedback to the fence signaler to aid in power management
>> +     * decisions, such as boosting GPU frequency.
>> +     *
>> +     * This is called without &dma_fence.lock held, it can be called
>> +     * multiple times and from any context.  Locking is up to the 
>> callee
>> +     * if it has some state to manage.
>> +     *
>> +     * This callback is optional.
>> +     */
>> +    void (*set_deadline)(struct dma_fence *fence, ktime_t deadline);
>>   };
>>     void dma_fence_init(struct dma_fence *fence, const struct 
>> dma_fence_ops *ops,
>> @@ -583,6 +601,8 @@ static inline signed long dma_fence_wait(struct 
>> dma_fence *fence, bool intr)
>>       return ret < 0 ? ret : 0;
>>   }
>>   +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t 
>> deadline);
>> +
>>   struct dma_fence *dma_fence_get_stub(void);
>>   struct dma_fence *dma_fence_allocate_private_stub(void);
>>   u64 dma_fence_context_alloc(unsigned num);


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-22  9:49           ` Pekka Paalanen
  2023-02-22 10:26             ` Luben Tuikov
@ 2023-02-22 15:37             ` Rob Clark
  2023-02-23  9:38               ` Pekka Paalanen
  1 sibling, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-22 15:37 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Luben Tuikov, Rob Clark, Gustavo Padovan, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK

On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Tue, 21 Feb 2023 09:53:56 -0800
> Rob Clark <robdclark@gmail.com> wrote:
>
> > On Tue, Feb 21, 2023 at 8:48 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > >
> > > On 2023-02-20 11:14, Rob Clark wrote:
> > > > On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > >>
> > > >> On Sat, 18 Feb 2023 13:15:49 -0800
> > > >> Rob Clark <robdclark@gmail.com> wrote:
> > > >>
> > > >>> From: Rob Clark <robdclark@chromium.org>
> > > >>>
> > > >>> Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> > > >>> wait (as opposed to a "housekeeping" wait to know when to cleanup after
> > > >>> some work has completed).  Usermode components of GPU driver stacks
> > > >>> often poll() on fence fd's to know when it is safe to do things like
> > > >>> free or reuse a buffer, but they can also poll() on a fence fd when
> > > >>> waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> > > >>> lets the kernel differentiate these two cases.
> > > >>>
> > > >>> Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > >>
> > > >> Hi,
> > > >>
> > > >> where would the UAPI documentation of this go?
> > > >> It seems to be missing.
> > > >
> > > > Good question, I am not sure.  The poll() man page has a description,
> > > > but my usage doesn't fit that _exactly_ (but OTOH the description is a
> > > > bit vague).
> > > >
> > > >> If a Wayland compositor is polling application fences to know which
> > > >> client buffer to use in its rendering, should the compositor poll with
> > > >> PRI or not? If a compositor polls with PRI, then all fences from all
> > > >> applications would always be PRI. Would that be harmful somehow or
> > > >> would it be beneficial?
> > > >
> > > > I think a compositor would rather use the deadline ioctl and then poll
> > > > without PRI.  Otherwise you are giving an urgency signal to the fence
> > > > signaller which might not necessarily be needed.
> > > >
> > > > The places where I expect PRI to be useful is more in mesa (things
> > > > like glFinish(), readpix, and other similar sorts of blocking APIs)
> > > Hi,
> > >
> > > Hmm, but then user-space could do the opposite, namely, submit work as usual--never
> > > using the SET_DEADLINE ioctl, and then at the end, poll using (E)POLLPRI. That seems
> > > like a possible usage pattern, unintended--maybe, but possible. Do we want to discourage
> > > this? Wouldn't SET_DEADLINE be enough? I mean, one can call SET_DEADLINE with the current
> > > time, and then wouldn't that be equivalent to (E)POLLPRI?
> >
> > Yeah, (E)POLLPRI isn't strictly needed if we have SET_DEADLINE.  It is
> > slightly more convenient if you want an immediate deadline (single
> > syscall instead of two), but not strictly needed.  OTOH it piggy-backs
> > on existing UABI.
>
> In that case, I would be conservative, and not add the POLLPRI
> semantics. An UAPI addition that is not strictly needed and somewhat
> unclear if it violates any design principles is best not done, until it
> is proven to be beneficial.
>
> Besides, a Wayland compositor does not necessarily need to add the fd
> to its main event loop for poll. It could just SET_DEADLINE, and then
> when it renders simply check if the fence passed or not already. Not
> polling means the compositor does not need to wake up at the moment the
> fence signals to just record a flag.

poll(POLLPRI) isn't intended for Wayland, but it is a thing I want in
mesa for fence waits.  I _could_ use SET_DEADLINE, but it is two
syscalls and correspondingly more code ;-)
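As a rough illustration of the distinction (a hypothetical userspace sketch,
not from the series; the helper name and timeout handling are made up, only
the POLLPRI flag usage is the point):

```c
#include <poll.h>

/* Hypothetical sketch: an "urgent" wait on a sync_file fd.  Polling with
 * POLLPRI would tell the kernel this wait is latency sensitive (so the
 * fence signaller may e.g. boost GPU frequency), whereas a plain POLLIN
 * wait would be the "housekeeping" variant. */
static int wait_fence_urgent(int fence_fd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd = fence_fd,
		.events = POLLPRI,
	};

	/* >0: fence signaled, 0: timed out, <0: error */
	return poll(&pfd, 1, timeout_ms);
}
```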

> On another matter, if the application uses SET_DEADLINE with one
> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> another timestamp, what should happen?

The expectation is that many deadline hints can be set on a fence.
The fence signaller should track the soonest deadline.
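The "track the soonest deadline" part can be sketched in a few lines (a plain
C model, not kernel code; ktime_t is modeled here as a signed ns count, and
the struct/function names are made up -- in a real fence context this would
run under the context's lock and use ktime_before()):

```c
#include <stdint.h>

/* Minimal model of the per-context deadline tracking a fence signaller
 * might keep when multiple waiters set deadline hints on the same fence
 * or timeline. */
struct ctx_deadline {
	int64_t deadline_ns;
	int has_deadline;
};

static void ctx_set_deadline(struct ctx_deadline *ctx, int64_t deadline_ns)
{
	/* Keep only the earliest requested deadline; later hints from
	 * less urgent waiters must not push the target out. */
	if (!ctx->has_deadline || deadline_ns < ctx->deadline_ns) {
		ctx->deadline_ns = deadline_ns;
		ctx->has_deadline = 1;
	}
}
```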

BR,
-R

> Maybe it's a soft-realtime app whose primary goal is not display, and
> it needs the result faster than the window server?
>
> Maybe SET_DEADLINE should set the deadline only to an earlier timestamp
> and never later?
>
>
> Thanks,
> pq

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-22  9:57             ` Pekka Paalanen
@ 2023-02-22 15:44               ` Rob Clark
  2023-02-22 15:55                 ` Ville Syrjälä
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-22 15:44 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Ville Syrjälä,
	Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Rodrigo Vivi, Alex Deucher, freedreno

On Wed, Feb 22, 2023 at 1:57 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Tue, 21 Feb 2023 09:50:20 -0800
> Rob Clark <robdclark@gmail.com> wrote:
>
> > On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
> > <ville.syrjala@linux.intel.com> wrote:
> > >
> > > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > > Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > >
> > > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > >
> > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > >
> > > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > > atomic update is waiting on.
> > > > > > >
> > > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > > ---
> > > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > > >  2 files changed, 33 insertions(+)
> > > > > > >
> > > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > > >  }
> > > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > > >
> > > > > > > +/**
> > > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > > + *
> > > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > > + * vblank and frame duration
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > for VRR this targets the highest frame rate possible for the current
> > > > > > VRR mode, right?
> > > > > >
> > > > >
> > > > > It is based on vblank->framedur_ns which is in turn based on
> > > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> > > >
> > > > I don't know. :-)
> > >
> > > At least for i915 this will give you the maximum frame
> > > duration.
> >
> > I suppose one could argue that maximum frame duration is the actual
> > deadline.  Anything less is just moar fps, but not going to involve
> > stalling until vblank N+1, AFAIU
> >
> > > Also this does not calculate the start of vblank, it
> > > calculates the start of active video.
> >
> > Probably something like end of previous frame's video..  might not be
> > _exactly_ correct (because some buffering involved), but OTOH on the
> > GPU side, I expect the driver to set a timer for a few ms or so before
> > the deadline.  So there is some wiggle room.
>
> The vblank timestamp is defined to be the time of the first active
> pixel of the frame in the video signal. At least that's the one that
> UAPI carries (when not tearing?). It is not the start of vblank period.
>
> With VRR, the front porch before the first active pixel can be multiple
> milliseconds. The difference between 144 Hz and 60 Hz is 9.7 ms for
> example.

What we really want is the deadline for the hw to latch for the next
frame.. which as Ville pointed out is definitely before the end of
vblank.

Honestly this sort of feature is a lot more critical for the non-VRR
case, and VRR is kind of a minority edge case.  So I'd prefer not to
get too hung up on VRR.  If there is an easy way for the helpers to
detect VRR, I'd be perfectly fine not setting a deadline hint in that
case, and let someone who actually has a VRR display figure out how to
handle that case.
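If helpers did want to skip the hint on VRR, the atomic CRTC state already
carries a flag that could be checked; something like the following early-out
in the proposed set_fence_deadline() (just a sketch, assuming vrr_enabled in
drm_crtc_state is the right condition):

```c
	/* Hypothetical early-out: skip the deadline hint when variable
	 * refresh is enabled on the CRTC being updated, since "previous
	 * vblank + frame duration" is only an upper bound for when the
	 * next frame can latch. */
	if (new_crtc_state->vrr_enabled)
		return;
```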

BR,
-R

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-22 10:37   ` Luben Tuikov
@ 2023-02-22 15:48     ` Rob Clark
  0 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-22 15:48 UTC (permalink / raw)
  To: Luben Tuikov
  Cc: dri-devel, Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list,
	Pekka Paalanen, Rodrigo Vivi, Alex Deucher, freedreno

On Wed, Feb 22, 2023 at 2:37 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
>
> On 2023-02-18 16:15, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Will be used in the next commit to set a deadline on fences that an
> > atomic update is waiting on.
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > ---
> >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> >  include/drm/drm_vblank.h     |  1 +
> >  2 files changed, 33 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > index 2ff31717a3de..caf25ebb34c5 100644
> > --- a/drivers/gpu/drm/drm_vblank.c
> > +++ b/drivers/gpu/drm/drm_vblank.c
> > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> >  }
> >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> >
> > +/**
> > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > + * @crtc: the crtc for which to calculate next vblank time
> > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > + *
> > + * Calculate the expected time of the next vblank based on time of previous
> > + * vblank and frame duration
> > + */
> > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime)
> > +{
> > +     unsigned int pipe = drm_crtc_index(crtc);
> > +     struct drm_vblank_crtc *vblank = &crtc->dev->vblank[pipe];
> > +     u64 count;
> > +
> > +     if (!vblank->framedur_ns)
> > +             return -EINVAL;
> > +
> > +     count = drm_vblank_count_and_time(crtc->dev, pipe, vblanktime);
> > +
> > +     /*
> > +      * If we don't get a valid count, then we probably also don't
> > +      * have a valid time:
> > +      */
> > +     if (!count)
> > +             return -EINVAL;
> > +
> > +     *vblanktime = ktime_add(*vblanktime, ns_to_ktime(vblank->framedur_ns));
>
> I'd rather this not do any arithmetic, i.e. add, and simply return the calculated
> remaining time, i.e. time left--so instead of add, it would simply assign
> the remaining time, and possibly rename the vblanktime to something like "time_to_vblank."
>

Note that since I sent the last iteration, I've renamed it to
drm_crtc_next_vblank_start().

I would prefer to keep the arithmetic, because I have another use for
this helper in drm/msm (for async/cursor updates, where we want to set
an hrtimer for start of vblank).  It is a bit off the topic of this
series, but I can include the patch when I repost.
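For illustration, that msm use might look roughly like this (a sketch only,
assuming the renamed drm_crtc_next_vblank_start() keeps the absolute-time
semantics; arm_vblank_timer() is a made-up name):

```c
/* Hypothetical helper: arm an hrtimer at the expected start of the next
 * vblank, the sort of thing wanted for msm async/cursor updates.  Relies
 * on the helper returning an absolute ktime_t, i.e. previous vblank
 * timestamp plus frame duration. */
static void arm_vblank_timer(struct drm_crtc *crtc, struct hrtimer *timer)
{
	ktime_t vbltime;

	if (drm_crtc_next_vblank_start(crtc, &vbltime))
		return;		/* no valid vblank count/timestamp yet */

	hrtimer_start(timer, vbltime, HRTIMER_MODE_ABS);
}
```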

BR,
-R

> Changing the top comment to "calculate the time remaining to the next vblank".
> --
> Regards,
> Luben
>
> > +
> > +     return 0;
> > +}
> > +EXPORT_SYMBOL(drm_crtc_next_vblank_time);
> > +
> >  static void send_vblank_event(struct drm_device *dev,
> >               struct drm_pending_vblank_event *e,
> >               u64 seq, ktime_t now)
> > diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
> > index 733a3e2d1d10..a63bc2c92f3c 100644
> > --- a/include/drm/drm_vblank.h
> > +++ b/include/drm/drm_vblank.h
> > @@ -230,6 +230,7 @@ bool drm_dev_has_vblank(const struct drm_device *dev);
> >  u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
> >  u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> >                                  ktime_t *vblanktime);
> > +int drm_crtc_next_vblank_time(struct drm_crtc *crtc, ktime_t *vblanktime);
> >  void drm_crtc_send_vblank_event(struct drm_crtc *crtc,
> >                              struct drm_pending_vblank_event *e);
> >  void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,
>

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 11/14] drm/atomic-helper: Set fence deadline for vblank
  2023-02-22 10:46   ` Luben Tuikov
@ 2023-02-22 15:50     ` Rob Clark
  0 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-22 15:50 UTC (permalink / raw)
  To: Luben Tuikov
  Cc: dri-devel, Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list,
	Daniel Vetter, Pekka Paalanen, Rodrigo Vivi, Alex Deucher,
	freedreno

On Wed, Feb 22, 2023 at 2:46 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
>
> On 2023-02-18 16:15, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > For an atomic commit updating a single CRTC (ie. a pageflip) calculate
> > the next vblank time, and inform the fence(s) of that deadline.
> >
> > v2: Comment typo fix (danvet)
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > ---
> >  drivers/gpu/drm/drm_atomic_helper.c | 36 +++++++++++++++++++++++++++++
> >  1 file changed, 36 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
> > index d579fd8f7cb8..35a4dc714920 100644
> > --- a/drivers/gpu/drm/drm_atomic_helper.c
> > +++ b/drivers/gpu/drm/drm_atomic_helper.c
> > @@ -1511,6 +1511,40 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
> >  }
> >  EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_enables);
> >
> > +/*
> > + * For atomic updates which touch just a single CRTC, calculate the time of the
> > + * next vblank, and inform all the fences of the deadline.
> > + */
> > +static void set_fence_deadline(struct drm_device *dev,
> > +                            struct drm_atomic_state *state)
> > +{
> > +     struct drm_crtc *crtc, *wait_crtc = NULL;
> > +     struct drm_crtc_state *new_crtc_state;
> > +     struct drm_plane *plane;
> > +     struct drm_plane_state *new_plane_state;
> > +     ktime_t vbltime;
>
> I've not looked at the latest language spec, but AFAIR "vbltime"
> would be uninitialized here. Has this changed?
>
> > +     int i;
> > +
> > +     for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) {
> > +             if (wait_crtc)
> > +                     return;
> > +             wait_crtc = crtc;
> > +     }
> > +
> > +     /* If no CRTCs updated, then nothing to do: */
> > +     if (!wait_crtc)
> > +             return;
> > +
> > +     if (drm_crtc_next_vblank_time(wait_crtc, &vbltime))
> > +             return;
>
> We have a problem here in that we're adding the time remaining to the next
> vblank event to an uninitialized local variable. As per my comment on patch 10,
> I'd rather drm_crtc_next_vblank_time() yield the time remaining to the vblank event,
> and we can do the arithmetic locally here in this function.

if drm_crtc_next_vblank_time() returns 0 then it has initialized
vbltime, so no problem here

BR,
-R

> --
> Regards,
> Luben
>
> > +
> > +     for_each_new_plane_in_state (state, plane, new_plane_state, i) {
> > +             if (!new_plane_state->fence)
> > +                     continue;
> > +             dma_fence_set_deadline(new_plane_state->fence, vbltime);
> > +     }
> > +}
> > +
> >  /**
> >   * drm_atomic_helper_wait_for_fences - wait for fences stashed in plane state
> >   * @dev: DRM device
> > @@ -1540,6 +1574,8 @@ int drm_atomic_helper_wait_for_fences(struct drm_device *dev,
> >       struct drm_plane_state *new_plane_state;
> >       int i, ret;
> >
> > +     set_fence_deadline(dev, state);
> > +
> >       for_each_new_plane_in_state(state, plane, new_plane_state, i) {
> >               if (!new_plane_state->fence)
> >                       continue;
>

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 03/14] dma-buf/fence-chain: Add fence deadline support
  2023-02-22 10:27   ` Tvrtko Ursulin
@ 2023-02-22 15:55     ` Rob Clark
  0 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-22 15:55 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: dri-devel, Rob Clark, Gustavo Padovan, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Pekka Paalanen, Rodrigo Vivi, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

On Wed, Feb 22, 2023 at 2:27 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 18/02/2023 21:15, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Propagate the deadline to all the fences in the chain.
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > Reviewed-by: Christian König <christian.koenig@amd.com> for this one.
> > ---
> >   drivers/dma-buf/dma-fence-chain.c | 13 +++++++++++++
> >   1 file changed, 13 insertions(+)
> >
> > diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
> > index a0d920576ba6..4684874af612 100644
> > --- a/drivers/dma-buf/dma-fence-chain.c
> > +++ b/drivers/dma-buf/dma-fence-chain.c
> > @@ -206,6 +206,18 @@ static void dma_fence_chain_release(struct dma_fence *fence)
> >       dma_fence_free(fence);
> >   }
> >
> > +
> > +static void dma_fence_chain_set_deadline(struct dma_fence *fence,
> > +                                      ktime_t deadline)
> > +{
> > +     dma_fence_chain_for_each(fence, fence) {
> > +             struct dma_fence_chain *chain = to_dma_fence_chain(fence);
> > +             struct dma_fence *f = chain ? chain->fence : fence;
>
> Low level comment - above two lines could be replaced with:
>
>         struct dma_fence *f = dma_fence_chain_contained(fence);
>
> Although to be fair I am not sure that wouldn't be making it less
> readable. From the point of view that fence might not be a chain, so
> dma_fence_chain_contained() reads a bit dodgy as an API.

It does seem to be the canonical way to do it since 18f5fad275ef
("dma-buf: add dma_fence_chain_contained helper").. looks like I
missed that when I rebased.  I guess for consistency it's best that I
use the helper.
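With that change, the loop would collapse to something like this (just a
sketch of what the rebased version could look like):

```c
static void dma_fence_chain_set_deadline(struct dma_fence *fence,
					 ktime_t deadline)
{
	/* dma_fence_chain_contained() returns chain->fence for a chain
	 * node, or the fence itself otherwise -- equivalent to the
	 * open-coded chain check in v4 of the patch. */
	dma_fence_chain_for_each(fence, fence)
		dma_fence_set_deadline(dma_fence_chain_contained(fence),
				       deadline);
}
```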

BR,
-R

> Regards,
>
> Tvrtko
>
> > +
> > +             dma_fence_set_deadline(f, deadline);
> > +     }
> > +}
> > +
> >   const struct dma_fence_ops dma_fence_chain_ops = {
> >       .use_64bit_seqno = true,
> >       .get_driver_name = dma_fence_chain_get_driver_name,
> > @@ -213,6 +225,7 @@ const struct dma_fence_ops dma_fence_chain_ops = {
> >       .enable_signaling = dma_fence_chain_enable_signaling,
> >       .signaled = dma_fence_chain_signaled,
> >       .release = dma_fence_chain_release,
> > +     .set_deadline = dma_fence_chain_set_deadline,
> >   };
> >   EXPORT_SYMBOL(dma_fence_chain_ops);
> >

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time
  2023-02-22 15:44               ` Rob Clark
@ 2023-02-22 15:55                 ` Ville Syrjälä
  0 siblings, 0 replies; 93+ messages in thread
From: Ville Syrjälä @ 2023-02-22 15:55 UTC (permalink / raw)
  To: Rob Clark
  Cc: Pekka Paalanen, Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Rodrigo Vivi, Alex Deucher, freedreno

On Wed, Feb 22, 2023 at 07:44:42AM -0800, Rob Clark wrote:
> On Wed, Feb 22, 2023 at 1:57 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >
> > On Tue, 21 Feb 2023 09:50:20 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >
> > > On Tue, Feb 21, 2023 at 5:01 AM Ville Syrjälä
> > > <ville.syrjala@linux.intel.com> wrote:
> > > >
> > > > On Tue, Feb 21, 2023 at 10:45:51AM +0200, Pekka Paalanen wrote:
> > > > > On Mon, 20 Feb 2023 07:55:41 -0800
> > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > > On Mon, Feb 20, 2023 at 1:08 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > >
> > > > > > > On Sat, 18 Feb 2023 13:15:53 -0800
> > > > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >
> > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > >
> > > > > > > > Will be used in the next commit to set a deadline on fences that an
> > > > > > > > atomic update is waiting on.
> > > > > > > >
> > > > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > > > > ---
> > > > > > > >  drivers/gpu/drm/drm_vblank.c | 32 ++++++++++++++++++++++++++++++++
> > > > > > > >  include/drm/drm_vblank.h     |  1 +
> > > > > > > >  2 files changed, 33 insertions(+)
> > > > > > > >
> > > > > > > > diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > index 2ff31717a3de..caf25ebb34c5 100644
> > > > > > > > --- a/drivers/gpu/drm/drm_vblank.c
> > > > > > > > +++ b/drivers/gpu/drm/drm_vblank.c
> > > > > > > > @@ -980,6 +980,38 @@ u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
> > > > > > > >  }
> > > > > > > >  EXPORT_SYMBOL(drm_crtc_vblank_count_and_time);
> > > > > > > >
> > > > > > > > +/**
> > > > > > > > + * drm_crtc_next_vblank_time - calculate the time of the next vblank
> > > > > > > > + * @crtc: the crtc for which to calculate next vblank time
> > > > > > > > + * @vblanktime: pointer to time to receive the next vblank timestamp.
> > > > > > > > + *
> > > > > > > > + * Calculate the expected time of the next vblank based on time of previous
> > > > > > > > + * vblank and frame duration
> > > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > for VRR this targets the highest frame rate possible for the current
> > > > > > > VRR mode, right?
> > > > > > >
> > > > > >
> > > > > > It is based on vblank->framedur_ns which is in turn based on
> > > > > > mode->crtc_clock.  Presumably for VRR that ends up being a maximum?
> > > > >
> > > > > I don't know. :-)
> > > >
> > > > At least for i915 this will give you the maximum frame
> > > > duration.
> > >
> > > I suppose one could argue that maximum frame duration is the actual
> > > deadline.  Anything less is just moar fps, but not going to involve
> > > stalling until vblank N+1, AFAIU
> > >
> > > > Also this does not calculate the start of vblank, it
> > > > calculates the start of active video.
> > >
> > > Probably something like end of previous frame's video..  might not be
> > > _exactly_ correct (because some buffering involved), but OTOH on the
> > > GPU side, I expect the driver to set a timer for a few ms or so before
> > > the deadline.  So there is some wiggle room.
> >
> > The vblank timestamp is defined to be the time of the first active
> > pixel of the frame in the video signal. At least that's the one that
> > UAPI carries (when not tearing?). It is not the start of vblank period.
> >
> > With VRR, the front porch before the first active pixel can be multiple
> > milliseconds. The difference between 144 Hz and 60 Hz is 9.7 ms for
> > example.
> 
> What we really want is the deadline for the hw to latch for the next
> frame.. which as Ville pointed out is definitely before the end of
> vblank.
> 
> Honestly this sort of feature is a lot more critical for the non-VRR
> case, and VRR is kind of a minority edge case.  So I'd prefer not to
> get too hung up on VRR.  If there is an easy way for the helpers to
> detect VRR, I'd be perfectly fine not setting a deadline hint in that
> case, and let someone who actually has a VRR display figure out how to
> handle that case.

The formula I gave you earlier works for both VRR and non-VRR.

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness
  2023-02-22 15:28     ` Christian König
@ 2023-02-22 17:04       ` Tvrtko Ursulin
  2023-02-22 17:16         ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Tvrtko Ursulin @ 2023-02-22 17:04 UTC (permalink / raw)
  To: Christian König, Rob Clark, dri-devel
  Cc: Rob Clark, Gustavo Padovan, Tvrtko Ursulin, Michel Dänzer,
	open list, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Rodrigo Vivi, Alex Deucher, freedreno,
	open list:SYNC FILE FRAMEWORK


On 22/02/2023 15:28, Christian König wrote:
> Am 22.02.23 um 11:23 schrieb Tvrtko Ursulin:
>>
>> On 18/02/2023 21:15, Rob Clark wrote:
>>> From: Rob Clark <robdclark@chromium.org>
>>>
>>> Add a way to hint to the fence signaler of an upcoming deadline, such as
>>> vblank, which the fence waiter would prefer not to miss.  This is to aid
>>> the fence signaler in making power management decisions, like boosting
>>> frequency as the deadline approaches and awareness of missing deadlines
>>> so that it can be factored into the frequency scaling.
>>>
>>> v2: Drop dma_fence::deadline and related logic to filter duplicate
>>>      deadlines, to avoid increasing dma_fence size.  The fence-context
>>>      implementation will need similar logic to track deadlines of all
>>>      the fences on the same timeline.  [ckoenig]
>>> v3: Clarify locking wrt. set_deadline callback
>>>
>>> Signed-off-by: Rob Clark <robdclark@chromium.org>
>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>> ---
>>>   drivers/dma-buf/dma-fence.c | 20 ++++++++++++++++++++
>>>   include/linux/dma-fence.h   | 20 ++++++++++++++++++++
>>>   2 files changed, 40 insertions(+)
>>>
>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>> index 0de0482cd36e..763b32627684 100644
>>> --- a/drivers/dma-buf/dma-fence.c
>>> +++ b/drivers/dma-buf/dma-fence.c
>>> @@ -912,6 +912,26 @@ dma_fence_wait_any_timeout(struct dma_fence 
>>> **fences, uint32_t count,
>>>   }
>>>   EXPORT_SYMBOL(dma_fence_wait_any_timeout);
>>>   +
>>> +/**
>>> + * dma_fence_set_deadline - set desired fence-wait deadline
>>> + * @fence:    the fence that is to be waited on
>>> + * @deadline: the time by which the waiter hopes for the fence to be
>>> + *            signaled
>>> + *
>>> + * Inform the fence signaler of an upcoming deadline, such as 
>>> vblank, by
>>> + * which point the waiter would prefer the fence to be signaled by. 
>>> This
>>> + * is intended to give feedback to the fence signaler to aid in power
>>> + * management decisions, such as boosting GPU frequency if a periodic
>>> + * vblank deadline is approaching.
>>> + */
>>> +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
>>> +{
>>> +    if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
>>> +        fence->ops->set_deadline(fence, deadline);
>>> +}
>>> +EXPORT_SYMBOL(dma_fence_set_deadline);
>>> +
>>>   /**
>>>    * dma_fence_describe - Dump fence describtion into seq_file
>>>    * @fence: the fence to describe
>>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>>> index 775cdc0b4f24..d77f6591c453 100644
>>> --- a/include/linux/dma-fence.h
>>> +++ b/include/linux/dma-fence.h
>>> @@ -99,6 +99,7 @@ enum dma_fence_flag_bits {
>>>       DMA_FENCE_FLAG_SIGNALED_BIT,
>>>       DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>>       DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
>>> +    DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
>>
>> Would this bit be better left out from core implementation, given how 
>> the approach is the component which implements dma-fence has to track 
>> the actual deadline and all?
>>
>> Also taking a step back - are we all okay with starting to expand the 
>> relatively simple core synchronisation primitive with side channel 
>> data like this? What would be the criteria for what side channel data 
>> would be acceptable? Taking note the thing lives outside drivers/gpu/.
> 
> I had similar concerns and it took me a moment as well to understand the 
> background why this is necessary. I essentially don't see much other 
> approach we could do.
> 
> Yes, this is GPU/CRTC specific, but we somehow need a common interface 
> for communicating it between drivers and that's the dma_fence object as 
> far as I can see.

Yeah I also don't see any other easy options. Just wanted to raise this 
as something which probably needs some wider acks.

Also what about the "low level" part of my question about the reason, or 
benefits, of defining the deadline bit in the common layer?

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness
  2023-02-22 17:04       ` Tvrtko Ursulin
@ 2023-02-22 17:16         ` Rob Clark
  2023-02-22 17:33           ` Tvrtko Ursulin
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-22 17:16 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Christian König, Rob Clark, dri-devel, Gustavo Padovan,
	Tvrtko Ursulin, Michel Dänzer, open list, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Rodrigo Vivi, Alex Deucher, freedreno,
	open list:SYNC FILE FRAMEWORK

On Wed, Feb 22, 2023 at 9:05 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 22/02/2023 15:28, Christian König wrote:
> > Am 22.02.23 um 11:23 schrieb Tvrtko Ursulin:
> >>
> >> On 18/02/2023 21:15, Rob Clark wrote:
> >>> From: Rob Clark <robdclark@chromium.org>
> >>>
> >>> Add a way to hint to the fence signaler of an upcoming deadline, such as
> >>> vblank, which the fence waiter would prefer not to miss.  This is to aid
> >>> the fence signaler in making power management decisions, like boosting
> >>> frequency as the deadline approaches and awareness of missing deadlines
> >>> so that it can be factored into the frequency scaling.
> >>>
> >>> v2: Drop dma_fence::deadline and related logic to filter duplicate
> >>>      deadlines, to avoid increasing dma_fence size.  The fence-context
> >>>      implementation will need similar logic to track deadlines of all
> >>>      the fences on the same timeline.  [ckoenig]
> >>> v3: Clarify locking wrt. set_deadline callback
> >>>
> >>> Signed-off-by: Rob Clark <robdclark@chromium.org>
> >>> Reviewed-by: Christian König <christian.koenig@amd.com>
> >>> ---
> >>>   drivers/dma-buf/dma-fence.c | 20 ++++++++++++++++++++
> >>>   include/linux/dma-fence.h   | 20 ++++++++++++++++++++
> >>>   2 files changed, 40 insertions(+)
> >>>
> >>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> >>> index 0de0482cd36e..763b32627684 100644
> >>> --- a/drivers/dma-buf/dma-fence.c
> >>> +++ b/drivers/dma-buf/dma-fence.c
> >>> @@ -912,6 +912,26 @@ dma_fence_wait_any_timeout(struct dma_fence
> >>> **fences, uint32_t count,
> >>>   }
> >>>   EXPORT_SYMBOL(dma_fence_wait_any_timeout);
> >>>   +
> >>> +/**
> >>> + * dma_fence_set_deadline - set desired fence-wait deadline
> >>> + * @fence:    the fence that is to be waited on
> >>> + * @deadline: the time by which the waiter hopes for the fence to be
> >>> + *            signaled
> >>> + *
> >>> + * Inform the fence signaler of an upcoming deadline, such as
> >>> vblank, by
> >>> + * which point the waiter would prefer the fence to be signaled by.
> >>> This
> >>> + * is intended to give feedback to the fence signaler to aid in power
> >>> + * management decisions, such as boosting GPU frequency if a periodic
> >>> + * vblank deadline is approaching.
> >>> + */
> >>> +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
> >>> +{
> >>> +    if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
> >>> +        fence->ops->set_deadline(fence, deadline);
> >>> +}
> >>> +EXPORT_SYMBOL(dma_fence_set_deadline);
> >>> +
> >>>   /**
> >>>    * dma_fence_describe - Dump fence describtion into seq_file
> >>>    * @fence: the fence to describe
> >>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
> >>> index 775cdc0b4f24..d77f6591c453 100644
> >>> --- a/include/linux/dma-fence.h
> >>> +++ b/include/linux/dma-fence.h
> >>> @@ -99,6 +99,7 @@ enum dma_fence_flag_bits {
> >>>       DMA_FENCE_FLAG_SIGNALED_BIT,
> >>>       DMA_FENCE_FLAG_TIMESTAMP_BIT,
> >>>       DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
> >>> +    DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
> >>
> >> Would this bit be better left out from core implementation, given how
> >> the approach is the component which implements dma-fence has to track
> >> the actual deadline and all?
> >>
> >> Also taking a step back - are we all okay with starting to expand the
> >> relatively simple core synchronisation primitive with side channel
> >> data like this? What would be the criteria for what side channel data
> >> would be acceptable? Taking note the thing lives outside drivers/gpu/.
> >
> > I had similar concerns and it took me a moment as well to understand the
> > background why this is necessary. I essentially don't see much other
> > approach we could do.
> >
> > Yes, this is GPU/CRTC specific, but we somehow need a common interface
> > for communicating it between drivers and that's the dma_fence object as
> > far as I can see.
>
> Yeah I also don't see any other easy options. Just wanted to raise this
> as something which probably needs some wider acks.
>
> Also what about the "low level" part of my question about the reason, or
> benefits, of defining the deadline bit in the common layer?

We could leave DMA_FENCE_FLAG_HAS_DEADLINE_BIT out, but OTOH managing
a bitmask that is partially defined in core enum and partially in
backend-driver has its own drawbacks, and it isn't like we are
running out of bits.. :shrug:

BR,
-R

> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness
  2023-02-22 17:16         ` Rob Clark
@ 2023-02-22 17:33           ` Tvrtko Ursulin
  2023-02-22 18:57             ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Tvrtko Ursulin @ 2023-02-22 17:33 UTC (permalink / raw)
  To: Rob Clark
  Cc: Christian König, Rob Clark, dri-devel, Gustavo Padovan,
	Tvrtko Ursulin, Michel Dänzer, open list, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Rodrigo Vivi, Alex Deucher, freedreno,
	open list:SYNC FILE FRAMEWORK


On 22/02/2023 17:16, Rob Clark wrote:
> On Wed, Feb 22, 2023 at 9:05 AM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>
>> On 22/02/2023 15:28, Christian König wrote:
>>> Am 22.02.23 um 11:23 schrieb Tvrtko Ursulin:
>>>>
>>>> On 18/02/2023 21:15, Rob Clark wrote:
>>>>> From: Rob Clark <robdclark@chromium.org>
>>>>>
>>>>> Add a way to hint to the fence signaler of an upcoming deadline, such as
>>>>> vblank, which the fence waiter would prefer not to miss.  This is to aid
>>>>> the fence signaler in making power management decisions, like boosting
>>>>> frequency as the deadline approaches and awareness of missing deadlines
>>>>> so that it can be factored into the frequency scaling.
>>>>>
>>>>> v2: Drop dma_fence::deadline and related logic to filter duplicate
>>>>>       deadlines, to avoid increasing dma_fence size.  The fence-context
>>>>>       implementation will need similar logic to track deadlines of all
>>>>>       the fences on the same timeline.  [ckoenig]
>>>>> v3: Clarify locking wrt. set_deadline callback
>>>>>
>>>>> Signed-off-by: Rob Clark <robdclark@chromium.org>
>>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>>>> ---
>>>>>    drivers/dma-buf/dma-fence.c | 20 ++++++++++++++++++++
>>>>>    include/linux/dma-fence.h   | 20 ++++++++++++++++++++
>>>>>    2 files changed, 40 insertions(+)
>>>>>
>>>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>>>> index 0de0482cd36e..763b32627684 100644
>>>>> --- a/drivers/dma-buf/dma-fence.c
>>>>> +++ b/drivers/dma-buf/dma-fence.c
>>>>> @@ -912,6 +912,26 @@ dma_fence_wait_any_timeout(struct dma_fence
>>>>> **fences, uint32_t count,
>>>>>    }
>>>>>    EXPORT_SYMBOL(dma_fence_wait_any_timeout);
>>>>>    +
>>>>> +/**
>>>>> + * dma_fence_set_deadline - set desired fence-wait deadline
>>>>> + * @fence:    the fence that is to be waited on
>>>>> + * @deadline: the time by which the waiter hopes for the fence to be
>>>>> + *            signaled
>>>>> + *
>>>>> + * Inform the fence signaler of an upcoming deadline, such as
>>>>> vblank, by
>>>>> + * which point the waiter would prefer the fence to be signaled by.
>>>>> This
>>>>> + * is intended to give feedback to the fence signaler to aid in power
>>>>> + * management decisions, such as boosting GPU frequency if a periodic
>>>>> + * vblank deadline is approaching.
>>>>> + */
>>>>> +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
>>>>> +{
>>>>> +    if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
>>>>> +        fence->ops->set_deadline(fence, deadline);
>>>>> +}
>>>>> +EXPORT_SYMBOL(dma_fence_set_deadline);
>>>>> +
>>>>>    /**
>>>>>     * dma_fence_describe - Dump fence describtion into seq_file
>>>>>     * @fence: the fence to describe
>>>>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>>>>> index 775cdc0b4f24..d77f6591c453 100644
>>>>> --- a/include/linux/dma-fence.h
>>>>> +++ b/include/linux/dma-fence.h
>>>>> @@ -99,6 +99,7 @@ enum dma_fence_flag_bits {
>>>>>        DMA_FENCE_FLAG_SIGNALED_BIT,
>>>>>        DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>>>>        DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
>>>>> +    DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
>>>>
>>>> Would this bit be better left out from core implementation, given how
>>>> the approach is the component which implements dma-fence has to track
>>>> the actual deadline and all?
>>>>
>>>> Also taking a step back - are we all okay with starting to expand the
>>>> relatively simple core synchronisation primitive with side channel
>>>> data like this? What would be the criteria for what side channel data
>>>> would be acceptable? Taking note the thing lives outside drivers/gpu/.
>>>
>>> I had similar concerns and it took me a moment as well to understand the
>>> background why this is necessary. I essentially don't see much other
>>> approach we could do.
>>>
>>> Yes, this is GPU/CRTC specific, but we somehow need a common interface
>>> for communicating it between drivers and that's the dma_fence object as
>>> far as I can see.
>>
>> Yeah I also don't see any other easy options. Just wanted to raise this
>> as something which probably needs some wider acks.
>>
>> Also what about the "low level" part of my question about the reason, or
>> benefits, of defining the deadline bit in the common layer?
> 
> We could leave DMA_FENCE_FLAG_HAS_DEADLINE_BIT out, but OTOH managing
> a bitmask that is partially defined in core enum and partially in
> > backend-driver has its own drawbacks, and it isn't like we are
> running out of bits.. :shrug:

There is DMA_FENCE_FLAG_USER_BITS onwards which implementations could 
use to store their stuff?

And if we skip forward to "drm/scheduler: Add fence deadline support" 
that's the only place bit is used, right? Would it simply work to look 
at drm_sched_fence->deadline == 0 as bit not set? Or you see a need to 
interoperate with other fence implementations via that bit somehow?

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness
  2023-02-22 17:33           ` Tvrtko Ursulin
@ 2023-02-22 18:57             ` Rob Clark
  0 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-22 18:57 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Rob Clark, Christian König, dri-devel, Gustavo Padovan,
	Tvrtko Ursulin, Michel Dänzer, open list, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Rodrigo Vivi, Alex Deucher, freedreno,
	open list:SYNC FILE FRAMEWORK

On Wed, Feb 22, 2023 at 9:33 AM Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 22/02/2023 17:16, Rob Clark wrote:
> > On Wed, Feb 22, 2023 at 9:05 AM Tvrtko Ursulin
> > <tvrtko.ursulin@linux.intel.com> wrote:
> >>
> >>
> >> On 22/02/2023 15:28, Christian König wrote:
> >>> Am 22.02.23 um 11:23 schrieb Tvrtko Ursulin:
> >>>>
> >>>> On 18/02/2023 21:15, Rob Clark wrote:
> >>>>> From: Rob Clark <robdclark@chromium.org>
> >>>>>
> >>>>> Add a way to hint to the fence signaler of an upcoming deadline, such as
> >>>>> vblank, which the fence waiter would prefer not to miss.  This is to aid
> >>>>> the fence signaler in making power management decisions, like boosting
> >>>>> frequency as the deadline approaches and awareness of missing deadlines
> >>>>> so that it can be factored into the frequency scaling.
> >>>>>
> >>>>> v2: Drop dma_fence::deadline and related logic to filter duplicate
> >>>>>       deadlines, to avoid increasing dma_fence size.  The fence-context
> >>>>>       implementation will need similar logic to track deadlines of all
> >>>>>       the fences on the same timeline.  [ckoenig]
> >>>>> v3: Clarify locking wrt. set_deadline callback
> >>>>>
> >>>>> Signed-off-by: Rob Clark <robdclark@chromium.org>
> >>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
> >>>>> ---
> >>>>>    drivers/dma-buf/dma-fence.c | 20 ++++++++++++++++++++
> >>>>>    include/linux/dma-fence.h   | 20 ++++++++++++++++++++
> >>>>>    2 files changed, 40 insertions(+)
> >>>>>
> >>>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> >>>>> index 0de0482cd36e..763b32627684 100644
> >>>>> --- a/drivers/dma-buf/dma-fence.c
> >>>>> +++ b/drivers/dma-buf/dma-fence.c
> >>>>> @@ -912,6 +912,26 @@ dma_fence_wait_any_timeout(struct dma_fence
> >>>>> **fences, uint32_t count,
> >>>>>    }
> >>>>>    EXPORT_SYMBOL(dma_fence_wait_any_timeout);
> >>>>>    +
> >>>>> +/**
> >>>>> + * dma_fence_set_deadline - set desired fence-wait deadline
> >>>>> + * @fence:    the fence that is to be waited on
> >>>>> + * @deadline: the time by which the waiter hopes for the fence to be
> >>>>> + *            signaled
> >>>>> + *
> >>>>> + * Inform the fence signaler of an upcoming deadline, such as
> >>>>> vblank, by
> >>>>> + * which point the waiter would prefer the fence to be signaled by.
> >>>>> This
> >>>>> + * is intended to give feedback to the fence signaler to aid in power
> >>>>> + * management decisions, such as boosting GPU frequency if a periodic
> >>>>> + * vblank deadline is approaching.
> >>>>> + */
> >>>>> +void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
> >>>>> +{
> >>>>> +    if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
> >>>>> +        fence->ops->set_deadline(fence, deadline);
> >>>>> +}
> >>>>> +EXPORT_SYMBOL(dma_fence_set_deadline);
> >>>>> +
> >>>>>    /**
> >>>>>     * dma_fence_describe - Dump fence describtion into seq_file
> >>>>>     * @fence: the fence to describe
> >>>>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
> >>>>> index 775cdc0b4f24..d77f6591c453 100644
> >>>>> --- a/include/linux/dma-fence.h
> >>>>> +++ b/include/linux/dma-fence.h
> >>>>> @@ -99,6 +99,7 @@ enum dma_fence_flag_bits {
> >>>>>        DMA_FENCE_FLAG_SIGNALED_BIT,
> >>>>>        DMA_FENCE_FLAG_TIMESTAMP_BIT,
> >>>>>        DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
> >>>>> +    DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
> >>>>
> >>>> Would this bit be better left out from core implementation, given how
> >>>> the approach is the component which implements dma-fence has to track
> >>>> the actual deadline and all?
> >>>>
> >>>> Also taking a step back - are we all okay with starting to expand the
> >>>> relatively simple core synchronisation primitive with side channel
> >>>> data like this? What would be the criteria for what side channel data
> >>>> would be acceptable? Taking note the thing lives outside drivers/gpu/.
> >>>
> >>> I had similar concerns and it took me a moment as well to understand the
> >>> background why this is necessary. I essentially don't see much other
> >>> approach we could do.
> >>>
> >>> Yes, this is GPU/CRTC specific, but we somehow need a common interface
> >>> for communicating it between drivers and that's the dma_fence object as
> >>> far as I can see.
> >>
> >> Yeah I also don't see any other easy options. Just wanted to raise this
> >> as something which probably needs some wider acks.
> >>
> >> Also what about the "low level" part of my question about the reason, or
> >> benefits, of defining the deadline bit in the common layer?
> >
> > We could leave DMA_FENCE_FLAG_HAS_DEADLINE_BIT out, but OTOH managing
> > a bitmask that is partially defined in core enum and partially in
> > > backend-driver has its own drawbacks, and it isn't like we are
> > running out of bits.. :shrug:
>
> There is DMA_FENCE_FLAG_USER_BITS onwards which implementations could
> use to store their stuff?
>
> And if we skip forward to "drm/scheduler: Add fence deadline support"
> that's the only place bit is used, right? Would it simply work to look
> at drm_sched_fence->deadline == 0 as bit not set? Or you see a need to
> interoperate with other fence implementations via that bit somehow?

Currently drm/scheduler is the only one using it.  I ended up dropping
use of it in msm since the deadline is stored in the fence-context
instead.  But I think it is better to try to avoid assuming that zero
means not-set.

It could be moved to drm/sched.. I guess there are few enough
implementations at this point to say whether it is something useful to
other drivers or not.

BR,
-R

> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl
  2023-02-20 16:09     ` Rob Clark
  2023-02-21  8:41       ` Pekka Paalanen
@ 2023-02-23  9:19       ` Christian König
  1 sibling, 0 replies; 93+ messages in thread
From: Christian König @ 2023-02-23  9:19 UTC (permalink / raw)
  To: Rob Clark, Christian König
  Cc: dri-devel, freedreno, Daniel Vetter, Michel Dänzer,
	Tvrtko Ursulin, Rodrigo Vivi, Alex Deucher, Pekka Paalanen,
	Simon Ser, Rob Clark, Sumit Semwal, Gustavo Padovan,
	open list:SYNC FILE FRAMEWORK,
	moderated list:DMA BUFFER SHARING FRAMEWORK, open list

Am 20.02.23 um 17:09 schrieb Rob Clark:
> On Mon, Feb 20, 2023 at 12:27 AM Christian König
> <christian.koenig@amd.com> wrote:
>> Am 18.02.23 um 22:15 schrieb Rob Clark:
>>> From: Rob Clark <robdclark@chromium.org>
>>>
>>> The initial purpose is for igt tests, but this would also be useful for
>>> compositors that wait until close to vblank deadline to make decisions
>>> about which frame to show.
>>>
>>> The igt tests can be found at:
>>>
>>> https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadline
>>>
>>> v2: Clarify the timebase, add link to igt tests
>>>
>>> Signed-off-by: Rob Clark <robdclark@chromium.org>
>>> ---
>>>    drivers/dma-buf/sync_file.c    | 19 +++++++++++++++++++
>>>    include/uapi/linux/sync_file.h | 22 ++++++++++++++++++++++
>>>    2 files changed, 41 insertions(+)
>>>
>>> diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
>>> index af57799c86ce..fb6ca1032885 100644
>>> --- a/drivers/dma-buf/sync_file.c
>>> +++ b/drivers/dma-buf/sync_file.c
>>> @@ -350,6 +350,22 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
>>>        return ret;
>>>    }
>>>
>>> +static int sync_file_ioctl_set_deadline(struct sync_file *sync_file,
>>> +                                     unsigned long arg)
>>> +{
>>> +     struct sync_set_deadline ts;
>>> +
>>> +     if (copy_from_user(&ts, (void __user *)arg, sizeof(ts)))
>>> +             return -EFAULT;
>>> +
>>> +     if (ts.pad)
>>> +             return -EINVAL;
>>> +
>>> +     dma_fence_set_deadline(sync_file->fence, ktime_set(ts.tv_sec, ts.tv_nsec));
>>> +
>>> +     return 0;
>>> +}
>>> +
>>>    static long sync_file_ioctl(struct file *file, unsigned int cmd,
>>>                            unsigned long arg)
>>>    {
>>> @@ -362,6 +378,9 @@ static long sync_file_ioctl(struct file *file, unsigned int cmd,
>>>        case SYNC_IOC_FILE_INFO:
>>>                return sync_file_ioctl_fence_info(sync_file, arg);
>>>
>>> +     case SYNC_IOC_SET_DEADLINE:
>>> +             return sync_file_ioctl_set_deadline(sync_file, arg);
>>> +
>>>        default:
>>>                return -ENOTTY;
>>>        }
>>> diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h
>>> index ee2dcfb3d660..c8666580816f 100644
>>> --- a/include/uapi/linux/sync_file.h
>>> +++ b/include/uapi/linux/sync_file.h
>>> @@ -67,6 +67,20 @@ struct sync_file_info {
>>>        __u64   sync_fence_info;
>>>    };
>>>
>>> +/**
>>> + * struct sync_set_deadline - set a deadline on a fence
>>> + * @tv_sec:  seconds elapsed since epoch
>>> + * @tv_nsec: nanoseconds elapsed since the time given by the tv_sec
>>> + * @pad:     must be zero
>>> + *
>>> + * The timebase for the deadline is CLOCK_MONOTONIC (same as vblank)
>>> + */
>>> +struct sync_set_deadline {
>>> +     __s64   tv_sec;
>>> +     __s32   tv_nsec;
>>> +     __u32   pad;
>> IIRC struct timespec defined this as time_t/long (which is horrible for
>> a UAPI because of the sizeof(long) dependency), one possible
>> alternative is to use 64bit nanoseconds from CLOCK_MONOTONIC (which is
>> essentially ktime).
>>
>> Not 100% sure if there are any preferences documented, but I think the
>> latter might be better.
> The original thought is that this maps directly to clock_gettime()
> without extra conversion needed, and is similar to other pre-ktime_t
> UAPI.  But OTOH if userspace wants to add an offset, it is maybe
> better to convert completely to ns in userspace and use a u64 (as that
> is what ns_to_ktime() uses).. (and OFC whatever decision here also
> applies to the syncobj wait ioctls)
>
> I'm leaning towards u64 CLOCK_MONOTONIC ns if no one has a good
> argument against that.

+1 for that.

Regards,
Christian.

>
> BR,
> -R
>
>> Either way the patch is Acked-by: Christian König
>> <christian.koenig@amd.com> for this patch.
>>
>> Regards,
>> Christian.
>>
>>> +};
>>> +
>>>    #define SYNC_IOC_MAGIC              '>'
>>>
>>>    /**
>>> @@ -95,4 +109,12 @@ struct sync_file_info {
>>>     */
>>>    #define SYNC_IOC_FILE_INFO  _IOWR(SYNC_IOC_MAGIC, 4, struct sync_file_info)
>>>
>>> +
>>> +/**
>>> + * DOC: SYNC_IOC_SET_DEADLINE - set a deadline on a fence
>>> + *
>>> + * Allows userspace to set a deadline on a fence, see dma_fence_set_deadline()
>>> + */
>>> +#define SYNC_IOC_SET_DEADLINE        _IOW(SYNC_IOC_MAGIC, 5, struct sync_set_deadline)
>>> +
>>>    #endif /* _UAPI_LINUX_SYNC_H */


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-22 15:37             ` Rob Clark
@ 2023-02-23  9:38               ` Pekka Paalanen
  2023-02-23 18:51                 ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-23  9:38 UTC (permalink / raw)
  To: Rob Clark
  Cc: Luben Tuikov, Rob Clark, Gustavo Padovan, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK


On Wed, 22 Feb 2023 07:37:26 -0800
Rob Clark <robdclark@gmail.com> wrote:

> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >
> > On Tue, 21 Feb 2023 09:53:56 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >  
> > > On Tue, Feb 21, 2023 at 8:48 AM Luben Tuikov <luben.tuikov@amd.com> wrote:  
> > > >
> > > > On 2023-02-20 11:14, Rob Clark wrote:  
> > > > > On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> > > > >>
> > > > >> On Sat, 18 Feb 2023 13:15:49 -0800
> > > > >> Rob Clark <robdclark@gmail.com> wrote:
> > > > >>  
> > > > >>> From: Rob Clark <robdclark@chromium.org>
> > > > >>>
> > > > >>> Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> > > > >>> wait (as opposed to a "housekeeping" wait to know when to cleanup after
> > > > >>> some work has completed).  Usermode components of GPU driver stacks
> > > > >>> often poll() on fence fd's to know when it is safe to do things like
> > > > >>> free or reuse a buffer, but they can also poll() on a fence fd when
> > > > >>> waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> > > > >>> lets the kernel differentiate these two cases.
> > > > >>>
> > > > >>> Signed-off-by: Rob Clark <robdclark@chromium.org>  
> > > > >>
> > > > >> Hi,
> > > > >>
> > > > >> where would the UAPI documentation of this go?
> > > > >> It seems to be missing.  
> > > > >
> > > > > Good question, I am not sure.  The poll() man page has a description,
> > > > > but my usage doesn't fit that _exactly_ (but OTOH the description is a
> > > > > bit vague).
> > > > >  
> > > > >> If a Wayland compositor is polling application fences to know which
> > > > >> client buffer to use in its rendering, should the compositor poll with
> > > > >> PRI or not? If a compositor polls with PRI, then all fences from all
> > > > >> applications would always be PRI. Would that be harmful somehow or
> > > > >> would it be beneficial?  
> > > > >
> > > > > I think a compositor would rather use the deadline ioctl and then poll
> > > > > without PRI.  Otherwise you are giving an urgency signal to the fence
> > > > > signaller which might not necessarily be needed.
> > > > >
> > > > > The places where I expect PRI to be useful is more in mesa (things
> > > > > like glFinish(), readpix, and other similar sorts of blocking APIs)  
> > > > Hi,
> > > >
> > > > Hmm, but then user-space could do the opposite, namely, submit work as usual--never
> > > > using the SET_DEADLINE ioctl, and then at the end, poll using (E)POLLPRI. That seems
> > > > like a possible usage pattern, unintended--maybe, but possible. Do we want to discourage
> > > > this? Wouldn't SET_DEADLINE be enough? I mean, one can call SET_DEADLINE with the current
> > > > time, and then wouldn't that be equivalent to (E)POLLPRI?  
> > >
> > > Yeah, (E)POLLPRI isn't strictly needed if we have SET_DEADLINE.  It is
> > > slightly more convenient if you want an immediate deadline (single
> > > syscall instead of two), but not strictly needed.  OTOH it piggy-backs
> > > on existing UABI.  
> >
> > In that case, I would be conservative, and not add the POLLPRI
> > semantics. An UAPI addition that is not strictly needed and somewhat
> > unclear if it violates any design principles is best not done, until it
> > is proven to be beneficial.
> >
> > Besides, a Wayland compositor does not necessarily need to add the fd
> > to its main event loop for poll. It could just SET_DEADLINE, and then
> > when it renders simply check if the fence passed or not already. Not
> > polling means the compositor does not need to wake up at the moment the
> > fence signals to just record a flag.  
> 
> poll(POLLPRI) isn't intended for wayland.. but is a thing I want in
> mesa for fence waits.  I _could_ use SET_DEADLINE but it is two
> syscalls and correspondingly more code ;-)

But is it actually beneficial? "More code" seems quite irrelevant.

Would there be a hundred or more of those per frame? Or would it be
always limited to one or two? Or totally depend on what the application
is doing? Is it a significant impact?

> > On another matter, if the application uses SET_DEADLINE with one
> > timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > another timestamp, what should happen?  
> 
> The expectation is that many deadline hints can be set on a fence.
> The fence signaller should track the soonest deadline.

You need to document that as UAPI, since it is observable to userspace.
It would be bad if drivers or subsystems would differ in behaviour.


Thanks,
pq

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-23  9:38               ` Pekka Paalanen
@ 2023-02-23 18:51                 ` Rob Clark
  2023-02-24  9:26                   ` Pekka Paalanen
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-23 18:51 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Luben Tuikov, Rob Clark, Gustavo Padovan, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK

On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Wed, 22 Feb 2023 07:37:26 -0800
> Rob Clark <robdclark@gmail.com> wrote:
>
> > On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >
> > > On Tue, 21 Feb 2023 09:53:56 -0800
> > > Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > > On Tue, Feb 21, 2023 at 8:48 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > > > >
> > > > > On 2023-02-20 11:14, Rob Clark wrote:
> > > > > > On Mon, Feb 20, 2023 at 12:53 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > >>
> > > > > >> On Sat, 18 Feb 2023 13:15:49 -0800
> > > > > >> Rob Clark <robdclark@gmail.com> wrote:
> > > > > >>
> > > > > >>> From: Rob Clark <robdclark@chromium.org>
> > > > > >>>
> > > > > >>> Allow userspace to use the EPOLLPRI/POLLPRI flag to indicate an urgent
> > > > > >>> wait (as opposed to a "housekeeping" wait to know when to cleanup after
> > > > > >>> some work has completed).  Usermode components of GPU driver stacks
> > > > > >>> often poll() on fence fd's to know when it is safe to do things like
> > > > > >>> free or reuse a buffer, but they can also poll() on a fence fd when
> > > > > >>> waiting to read back results from the GPU.  The EPOLLPRI/POLLPRI flag
> > > > > >>> lets the kernel differentiate these two cases.
> > > > > >>>
> > > > > >>> Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > > > >>
> > > > > >> Hi,
> > > > > >>
> > > > > >> where would the UAPI documentation of this go?
> > > > > >> It seems to be missing.
> > > > > >
> > > > > > Good question, I am not sure.  The poll() man page has a description,
> > > > > > but my usage doesn't fit that _exactly_ (but OTOH the description is a
> > > > > > bit vague).
> > > > > >
> > > > > >> If a Wayland compositor is polling application fences to know which
> > > > > >> client buffer to use in its rendering, should the compositor poll with
> > > > > >> PRI or not? If a compositor polls with PRI, then all fences from all
> > > > > >> applications would always be PRI. Would that be harmful somehow or
> > > > > >> would it be beneficial?
> > > > > >
> > > > > > I think a compositor would rather use the deadline ioctl and then poll
> > > > > > without PRI.  Otherwise you are giving an urgency signal to the fence
> > > > > > signaller which might not necessarily be needed.
> > > > > >
> > > > > > The places where I expect PRI to be useful is more in mesa (things
> > > > > > like glFinish(), readpix, and other similar sorts of blocking APIs)
> > > > > Hi,
> > > > >
> > > > > Hmm, but then user-space could do the opposite, namely, submit work as usual--never
> > > > > using the SET_DEADLINE ioctl, and then at the end, poll using (E)POLLPRI. That seems
> > > > > like a possible usage pattern, unintended--maybe, but possible. Do we want to discourage
> > > > > this? Wouldn't SET_DEADLINE be enough? I mean, one can call SET_DEADLINE with the current
> > > > > time, and then wouldn't that be equivalent to (E)POLLPRI?
> > > >
> > > > Yeah, (E)POLLPRI isn't strictly needed if we have SET_DEADLINE.  It is
> > > > slightly more convenient if you want an immediate deadline (single
> > > > syscall instead of two), but not strictly needed.  OTOH it piggy-backs
> > > > on existing UABI.
> > >
> > > In that case, I would be conservative, and not add the POLLPRI
> > > semantics. A UAPI addition that is not strictly needed, and where it
> > > is unclear whether it violates any design principles, is best left
> > > undone until it is proven to be beneficial.
> > >
> > > Besides, a Wayland compositor does not necessarily need to add the fd
> > > to its main event loop for poll. It could just SET_DEADLINE, and then
> > > when it renders simply check if the fence passed or not already. Not
> > > polling means the compositor does not need to wake up at the moment the
> > > fence signals to just record a flag.
> >
> > poll(POLLPRI) isn't intended for wayland.. but is a thing I want in
> > mesa for fence waits.  I _could_ use SET_DEADLINE but it is two
> > syscalls and correspondingly more code ;-)
>
> But is it actually beneficial? "More code" seems quite irrelevant.
>
> Would there be a hundred or more of those per frame? Or would it be
> always limited to one or two? Or totally depend on what the application
> is doing? Is it a significant impact?

In general, any time the CPU is waiting on the GPU, you have already
lost.  So I don't think the extra syscall is too much of a problem.
Just less convenient.
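The single-syscall "urgent wait" pattern under discussion can be illustrated roughly as below. This is only a sketch: POLLPRI semantics for sync_file fds are what this patch proposes, not existing UABI, and a real fence fd from a DRM driver is assumed for `fd` (any pollable fd exhibits the same poll() mechanics).

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

/*
 * Sketch of the "urgent wait" pattern: a single poll() call with
 * POLLPRI set signals urgency to the fence signaller, instead of a
 * SET_DEADLINE ioctl followed by a plain poll() (two syscalls).
 * Returns 1 when the fd signalled, 0 on timeout.
 */
static int wait_fence_urgent(int fd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd     = fd,
		.events = POLLIN | POLLPRI, /* POLLPRI = urgent-wait hint */
	};
	int ret = poll(&pfd, 1, timeout_ms);

	return ret > 0 ? 1 : 0;
}
```

The SET_DEADLINE alternative would replace the POLLPRI flag with an ioctl on the fence fd before a plain poll(), which is what the "two syscalls" remark above refers to.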

> > > On another matter, if the application uses SET_DEADLINE with one
> > > timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > another timestamp, what should happen?
> >
> > The expectation is that many deadline hints can be set on a fence.
> > The fence signaller should track the soonest deadline.
>
> You need to document that as UAPI, since it is observable to userspace.
> It would be bad if drivers or subsystems would differ in behaviour.
>

It is in the end a hint.  It is about giving the driver more
information so that it can make better choices.  But the driver is
even free to ignore it.  So maybe "expectation" is too strong of a
word.  Rather, any other behavior doesn't really make sense.  But it
could end up being dictated by how the hw and/or fw works.
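The "soonest deadline wins" behaviour described above can be sketched as follows. This is a minimal userspace-style model with hypothetical names, not the actual kernel implementation; in the series, any such tracking would live behind each driver's dma_fence_ops.set_deadline callback, and the driver remains free to ignore the hint.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical per-fence deadline tracking: many hints may be set on
 * one fence, and the signaller keeps only the soonest.  Times are
 * CLOCK_MONOTONIC nanoseconds.
 */
struct fence_deadline {
	bool    valid;       /* has any deadline hint been set? */
	int64_t deadline_ns; /* soonest deadline seen so far */
};

static void fence_set_deadline(struct fence_deadline *f, int64_t deadline_ns)
{
	if (!f->valid || deadline_ns < f->deadline_ns) {
		f->valid = true;
		f->deadline_ns = deadline_ns;
	}
	/* A driver is still free to ignore f->deadline_ns entirely. */
}
```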

BR,
-R

>
> Thanks,
> pq


* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-23 18:51                 ` Rob Clark
@ 2023-02-24  9:26                   ` Pekka Paalanen
  2023-02-24  9:41                     ` Tvrtko Ursulin
  0 siblings, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-24  9:26 UTC (permalink / raw)
  To: Rob Clark
  Cc: Luben Tuikov, Rob Clark, Gustavo Padovan, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Rodrigo Vivi,
	Alex Deucher, freedreno, Sumit Semwal,
	open list:SYNC FILE FRAMEWORK

[-- Attachment #1: Type: text/plain, Size: 2064 bytes --]

On Thu, 23 Feb 2023 10:51:48 -0800
Rob Clark <robdclark@gmail.com> wrote:

> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >
> > On Wed, 22 Feb 2023 07:37:26 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >  
> > > On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  

...

> > > > On another matter, if the application uses SET_DEADLINE with one
> > > > timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > > another timestamp, what should happen?  
> > >
> > > The expectation is that many deadline hints can be set on a fence.
> > > The fence signaller should track the soonest deadline.  
> >
> > You need to document that as UAPI, since it is observable to userspace.
> > It would be bad if drivers or subsystems would differ in behaviour.
> >  
> 
> It is in the end a hint.  It is about giving the driver more
> information so that it can make better choices.  But the driver is
> even free to ignore it.  So maybe "expectation" is too strong of a
> word.  Rather, any other behavior doesn't really make sense.  But it
> could end up being dictated by how the hw and/or fw works.

It will stop being a hint once it has been implemented and used in the
wild long enough. The kernel userspace regression rules make sure of
that.

See the topic of implementing triple-buffering in Mutter in order to
put more work to the GPU in order to have the GPU ramp up clocks in
order to not miss rendering deadlines. I don't think that patch set has
landed in Mutter upstream, but I hear downstream distributions are
already carrying it.

https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1383
https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1441

Granted, GPU clocks are just one side of that story it seems, and
triple-buffering may have other benefits.

If SET_DEADLINE would fix that problem without triple-buffering, it is
definitely userspace observable, expected and eventually required
behaviour.


Thanks,
pq



* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24  9:26                   ` Pekka Paalanen
@ 2023-02-24  9:41                     ` Tvrtko Ursulin
  2023-02-24 10:24                       ` Pekka Paalanen
  0 siblings, 1 reply; 93+ messages in thread
From: Tvrtko Ursulin @ 2023-02-24  9:41 UTC (permalink / raw)
  To: Pekka Paalanen, Rob Clark
  Cc: Rob Clark, Tvrtko Ursulin, Gustavo Padovan, Michel Dänzer,
	Rodrigo Vivi, open list, dri-devel, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Luben Tuikov,
	Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK


On 24/02/2023 09:26, Pekka Paalanen wrote:
> On Thu, 23 Feb 2023 10:51:48 -0800
> Rob Clark <robdclark@gmail.com> wrote:
> 
>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>>>
>>> On Wed, 22 Feb 2023 07:37:26 -0800
>>> Rob Clark <robdclark@gmail.com> wrote:
>>>   
>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> 
> ...
> 
>>>>> On another matter, if the application uses SET_DEADLINE with one
>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
>>>>> another timestamp, what should happen?
>>>>
>>>> The expectation is that many deadline hints can be set on a fence.
>>>> The fence signaller should track the soonest deadline.
>>>
>>> You need to document that as UAPI, since it is observable to userspace.
>>> It would be bad if drivers or subsystems would differ in behaviour.
>>>   
>>
>> It is in the end a hint.  It is about giving the driver more
>> information so that it can make better choices.  But the driver is
>> even free to ignore it.  So maybe "expectation" is too strong of a
>> word.  Rather, any other behavior doesn't really make sense.  But it
>> could end up being dictated by how the hw and/or fw works.
> 
> It will stop being a hint once it has been implemented and used in the
> wild long enough. The kernel userspace regression rules make sure of
> that.

Yeah, tricky and maybe a gray area in this case. I think we alluded 
elsewhere in the thread that renaming the thing might be an option.

So maybe instead of deadline, which is a very strong word, use something 
along the lines of "present time hint", or "signalled time hint"? Maybe 
reads clumsy. Just throwing some ideas for a start.

Regards,

Tvrtko

> See the topic of implementing triple-buffering in Mutter in order to
> put more work to the GPU in order to have the GPU ramp up clocks in
> order to not miss rendering deadlines. I don't think that patch set has
> landed in Mutter upstream, but I hear downstream distributions are
> already carrying it.
> 
> https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1383
> https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1441
> 
> Granted, GPU clocks are just one side of that story it seems, and
> triple-buffering may have other benefits.
> 
> If SET_DEADLINE would fix that problem without triple-buffering, it is
> definitely userspace observable, expected and eventually required
> behaviour.
> 
> 
> Thanks,
> pq


* Re: [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits
  2023-02-18 21:15 ` [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits Rob Clark
  2023-02-19 16:09   ` Rob Clark
  2023-02-20  9:05   ` Pekka Paalanen
@ 2023-02-24  9:51   ` Tvrtko Ursulin
  2 siblings, 0 replies; 93+ messages in thread
From: Tvrtko Ursulin @ 2023-02-24  9:51 UTC (permalink / raw)
  To: Rob Clark, dri-devel
  Cc: Rob Clark, Thomas Zimmermann, Tvrtko Ursulin,
	Christian König, Michel Dänzer, open list,
	Pekka Paalanen, Rodrigo Vivi, Alex Deucher, freedreno


On 18/02/2023 21:15, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> Add a new flag to let userspace provide a deadline as a hint for syncobj
> and timeline waits.  This gives a hint to the driver signaling the
> backing fences about how soon userspace needs it to complete work, so it
> can adjust GPU frequency accordingly.  An immediate deadline can be
> given to provide something equivalent to i915 "wait boost".

I'm waiting for some test results before I can comment on this one 
properly. It may end up that we just want to mark these as an immediate 
deadline to help existing userspace, in which case we may need a 
per-driver option for deciding what to do. So instead of:

   dma_fence_set_deadline(fence, *deadline);

We'd need something like:

   dma_fence_mark_wait(fence);

Which would call into individual drivers to decide what to do with that. 
Some drivers may not want to do anything, and i915 may end up applying 
waitboost. Or maybe not "instead of" but "along with", i.e. similar in 
spirit to my RFC.
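A rough shape of the per-driver hook floated here might look like the following. Purely hypothetical names and types: a compile-time sketch of the "driver decides" idea, not kernel code.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical per-driver hook: instead of the core unconditionally
 * applying a deadline, the driver decides what (if anything) a marked
 * wait means for it -- e.g. i915 could apply waitboost here, while
 * other drivers leave the callback NULL and do nothing.
 */
struct fence_wait_ops {
	void (*mark_wait)(int *boost_count); /* optional */
};

static void fence_mark_wait(const struct fence_wait_ops *ops, int *boost_count)
{
	if (ops && ops->mark_wait)
		ops->mark_wait(boost_count);
	/* No callback: the driver opted out, nothing happens. */
}

/* Stand-in for a driver that applies waitboost on marked waits. */
static void i915ish_mark_wait(int *boost_count)
{
	(*boost_count)++;
}
```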

Regards,

Tvrtko

> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> 
> I'm a bit on the fence about the addition of the DRM_CAP, but it seems
> useful to give userspace a way to probe whether the kernel and driver
> supports the new wait flag, especially since we have vk-common code
> dealing with syncobjs.  But open to suggestions.
> 
>   drivers/gpu/drm/drm_ioctl.c   |  3 ++
>   drivers/gpu/drm/drm_syncobj.c | 59 ++++++++++++++++++++++++++++-------
>   include/drm/drm_drv.h         |  6 ++++
>   include/uapi/drm/drm.h        | 16 ++++++++--
>   4 files changed, 71 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> index 7c9d66ee917d..1c5c942cf0f9 100644
> --- a/drivers/gpu/drm/drm_ioctl.c
> +++ b/drivers/gpu/drm/drm_ioctl.c
> @@ -254,6 +254,9 @@ static int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_
>   	case DRM_CAP_SYNCOBJ_TIMELINE:
>   		req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE);
>   		return 0;
> +	case DRM_CAP_SYNCOBJ_DEADLINE:
> +		req->value = drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE);
> +		return 0;
>   	}
>   
>   	/* Other caps only work with KMS drivers */
> diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
> index 0c2be8360525..61cf97972a60 100644
> --- a/drivers/gpu/drm/drm_syncobj.c
> +++ b/drivers/gpu/drm/drm_syncobj.c
> @@ -973,7 +973,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
>   						  uint32_t count,
>   						  uint32_t flags,
>   						  signed long timeout,
> -						  uint32_t *idx)
> +						  uint32_t *idx,
> +						  ktime_t *deadline)
>   {
>   	struct syncobj_wait_entry *entries;
>   	struct dma_fence *fence;
> @@ -1053,6 +1054,15 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
>   			drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
>   	}
>   
> +	if (deadline) {
> +		for (i = 0; i < count; ++i) {
> +			fence = entries[i].fence;
> +			if (!fence)
> +				continue;
> +			dma_fence_set_deadline(fence, *deadline);
> +		}
> +	}
> +
>   	do {
>   		set_current_state(TASK_INTERRUPTIBLE);
>   
> @@ -1151,7 +1161,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
>   				  struct drm_file *file_private,
>   				  struct drm_syncobj_wait *wait,
>   				  struct drm_syncobj_timeline_wait *timeline_wait,
> -				  struct drm_syncobj **syncobjs, bool timeline)
> +				  struct drm_syncobj **syncobjs, bool timeline,
> +				  ktime_t *deadline)
>   {
>   	signed long timeout = 0;
>   	uint32_t first = ~0;
> @@ -1162,7 +1173,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
>   							 NULL,
>   							 wait->count_handles,
>   							 wait->flags,
> -							 timeout, &first);
> +							 timeout, &first,
> +							 deadline);
>   		if (timeout < 0)
>   			return timeout;
>   		wait->first_signaled = first;
> @@ -1172,7 +1184,8 @@ static int drm_syncobj_array_wait(struct drm_device *dev,
>   							 u64_to_user_ptr(timeline_wait->points),
>   							 timeline_wait->count_handles,
>   							 timeline_wait->flags,
> -							 timeout, &first);
> +							 timeout, &first,
> +							 deadline);
>   		if (timeout < 0)
>   			return timeout;
>   		timeline_wait->first_signaled = first;
> @@ -1243,13 +1256,20 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
>   {
>   	struct drm_syncobj_wait *args = data;
>   	struct drm_syncobj **syncobjs;
> +	unsigned possible_flags;
> +	ktime_t t, *tp = NULL;
>   	int ret = 0;
>   
>   	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
>   		return -EOPNOTSUPP;
>   
> -	if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> -			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
> +	possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> +			 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT;
> +
> +	if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
> +		possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
> +
> +	if (args->flags & ~possible_flags)
>   		return -EINVAL;
>   
>   	if (args->count_handles == 0)
> @@ -1262,8 +1282,13 @@ drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
>   	if (ret < 0)
>   		return ret;
>   
> +	if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
> +		t = ktime_set(args->deadline_sec, args->deadline_nsec);
> +		tp = &t;
> +	}
> +
>   	ret = drm_syncobj_array_wait(dev, file_private,
> -				     args, NULL, syncobjs, false);
> +				     args, NULL, syncobjs, false, tp);
>   
>   	drm_syncobj_array_free(syncobjs, args->count_handles);
>   
> @@ -1276,14 +1301,21 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
>   {
>   	struct drm_syncobj_timeline_wait *args = data;
>   	struct drm_syncobj **syncobjs;
> +	unsigned possible_flags;
> +	ktime_t t, *tp = NULL;
>   	int ret = 0;
>   
>   	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
>   		return -EOPNOTSUPP;
>   
> -	if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> -			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
> -			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
> +	possible_flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
> +			 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
> +			 DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE;
> +
> +	if (drm_core_check_feature(dev, DRIVER_SYNCOBJ_DEADLINE))
> +		possible_flags |= DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
> +
> +	if (args->flags & ~possible_flags)
>   		return -EINVAL;
>   
>   	if (args->count_handles == 0)
> @@ -1296,8 +1328,13 @@ drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
>   	if (ret < 0)
>   		return ret;
>   
> +	if (args->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE) {
> +		t = ktime_set(args->deadline_sec, args->deadline_nsec);
> +		tp = &t;
> +	}
> +
>   	ret = drm_syncobj_array_wait(dev, file_private,
> -				     NULL, args, syncobjs, true);
> +				     NULL, args, syncobjs, true, tp);
>   
>   	drm_syncobj_array_free(syncobjs, args->count_handles);
>   
> diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> index 1d76d0686b03..9aa24f097e22 100644
> --- a/include/drm/drm_drv.h
> +++ b/include/drm/drm_drv.h
> @@ -104,6 +104,12 @@ enum drm_driver_feature {
>   	 * acceleration should be handled by two drivers that are connected using auxiliary bus.
>   	 */
>   	DRIVER_COMPUTE_ACCEL            = BIT(7),
> +	/**
> +	 * @DRIVER_SYNCOBJ_DEADLINE:
> +	 *
> +	 * Driver supports &dma_fence_ops.set_deadline
> +	 */
> +	DRIVER_SYNCOBJ_DEADLINE         = BIT(8),
>   
>   	/* IMPORTANT: Below are all the legacy flags, add new ones above. */
>   
> diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
> index 642808520d92..c6b85bb13810 100644
> --- a/include/uapi/drm/drm.h
> +++ b/include/uapi/drm/drm.h
> @@ -767,6 +767,13 @@ struct drm_gem_open {
>    * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects".
>    */
>   #define DRM_CAP_SYNCOBJ_TIMELINE	0x14
> +/**
> + * DRM_CAP_SYNCOBJ_DEADLINE
> + *
> + * If set to 1, the driver supports DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE flag
> + * on the SYNCOBJ_TIMELINE_WAIT/SYNCOBJ_WAIT ioctls.
> + */
> +#define DRM_CAP_SYNCOBJ_DEADLINE	0x15
>   
>   /* DRM_IOCTL_GET_CAP ioctl argument type */
>   struct drm_get_cap {
> @@ -887,6 +894,7 @@ struct drm_syncobj_transfer {
>   #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL (1 << 0)
>   #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT (1 << 1)
>   #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE (1 << 2) /* wait for time point to become available */
> +#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE (1 << 3) /* set fence deadline to deadline_nsec/sec */
>   struct drm_syncobj_wait {
>   	__u64 handles;
>   	/* absolute timeout */
> @@ -894,7 +902,9 @@ struct drm_syncobj_wait {
>   	__u32 count_handles;
>   	__u32 flags;
>   	__u32 first_signaled; /* only valid when not waiting all */
> -	__u32 pad;
> +	/* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
> +	__u32 deadline_nsec;
> +	__u64 deadline_sec;
>   };
>   
>   struct drm_syncobj_timeline_wait {
> @@ -906,7 +916,9 @@ struct drm_syncobj_timeline_wait {
>   	__u32 count_handles;
>   	__u32 flags;
>   	__u32 first_signaled; /* only valid when not waiting all */
> -	__u32 pad;
> +	/* Deadline to set on backing fence(s) in CLOCK_MONOTONIC: */
> +	__u32 deadline_nsec;
> +	__u64 deadline_sec;
>   };
>   
>   
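For illustration, userspace use of the proposed wait flag could look roughly like this. A sketch only: the struct is a local mirror of the uapi in the diff above, and the actual drmIoctl(fd, DRM_IOCTL_SYNCOBJ_WAIT, ...) call is elided since it needs a real DRM fd.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Local mirror of the uapi proposed in the diff above. */
#define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE (1 << 3)

struct drm_syncobj_wait_sketch {
	uint64_t handles;
	int64_t  timeout_nsec;  /* absolute timeout */
	uint32_t count_handles;
	uint32_t flags;
	uint32_t first_signaled; /* only valid when not waiting all */
	uint32_t deadline_nsec;  /* CLOCK_MONOTONIC deadline, ns part */
	uint64_t deadline_sec;   /* CLOCK_MONOTONIC deadline, s part */
};

/* Fill the wait args so the backing fences get a deadline hint. */
static void prepare_deadline_wait(struct drm_syncobj_wait_sketch *args,
				  uint64_t sec, uint32_t nsec)
{
	memset(args, 0, sizeof(*args));
	args->flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_DEADLINE;
	args->deadline_sec = sec;
	args->deadline_nsec = nsec;
	/* ...then issue the SYNCOBJ_WAIT ioctl with args. */
}
```

Userspace would first probe DRM_CAP_SYNCOBJ_DEADLINE via DRM_IOCTL_GET_CAP before setting the flag, as the cap in this patch is intended to advertise.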


* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24  9:41                     ` Tvrtko Ursulin
@ 2023-02-24 10:24                       ` Pekka Paalanen
  2023-02-24 10:50                         ` Tvrtko Ursulin
                                           ` (2 more replies)
  0 siblings, 3 replies; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-24 10:24 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Rob Clark, Rob Clark, Tvrtko Ursulin, Gustavo Padovan,
	Michel Dänzer, Rodrigo Vivi, open list, dri-devel,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK


On Fri, 24 Feb 2023 09:41:46 +0000
Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:

> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > On Thu, 23 Feb 2023 10:51:48 -0800
> > Rob Clark <robdclark@gmail.com> wrote:
> >   
> >> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> >>>
> >>> On Wed, 22 Feb 2023 07:37:26 -0800
> >>> Rob Clark <robdclark@gmail.com> wrote:
> >>>     
> >>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> > 
> > ...
> >   
> >>>>> On another matter, if the application uses SET_DEADLINE with one
> >>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> >>>>> another timestamp, what should happen?  
> >>>>
> >>>> The expectation is that many deadline hints can be set on a fence.
> >>>> The fence signaller should track the soonest deadline.  
> >>>
> >>> You need to document that as UAPI, since it is observable to userspace.
> >>> It would be bad if drivers or subsystems would differ in behaviour.
> >>>     
> >>
> >> It is in the end a hint.  It is about giving the driver more
> >> information so that it can make better choices.  But the driver is
> >> even free to ignore it.  So maybe "expectation" is too strong of a
> >> word.  Rather, any other behavior doesn't really make sense.  But it
> >> could end up being dictated by how the hw and/or fw works.  
> > 
> > It will stop being a hint once it has been implemented and used in the
> > wild long enough. The kernel userspace regression rules make sure of
> > that.  
> 
> Yeah, tricky and maybe a gray area in this case. I think we alluded 
> elsewhere in the thread that renaming the thing might be an option.
> 
> So maybe instead of deadline, which is a very strong word, use something 
> along the lines of "present time hint", or "signalled time hint"? Maybe 
> reads clumsy. Just throwing some ideas for a start.

You can try, but I fear that if it ever changes behaviour and
someone notices that, it's labelled as a kernel regression. I don't
think documentation has ever been the authoritative definition of UABI
in Linux, it just guides drivers and userspace towards a common
understanding and common usage patterns.

So even if the UABI contract is not documented (ugh), you need to be
prepared to set the UABI contract through kernel implementation.

If you do not document the UABI contract, then different drivers are
likely to implement it differently, leading to differing behaviour.
Also userspace will invent wild ways to abuse the UABI if there is no
documentation guiding it on proper use. If userspace or end users
observe different behaviour, that's bad even if it's not a regression.

I don't like the situation either, but it is what it is. UABI stability
trumps everything regardless of whether it was documented or not.

I bet userspace is going to use this as a "make it faster, make it
hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
library that stamps any and all fences with an expired deadline to
just squeeze out a little more through some weird side-effect.

Well, that's hopefully overboard in scaring, but in the end, I would
like to see UABI documented so I can have a feeling of what it is for
and how it was intended to be used. That's all.


Thanks,
pq



* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24 10:24                       ` Pekka Paalanen
@ 2023-02-24 10:50                         ` Tvrtko Ursulin
  2023-02-24 11:00                           ` Pekka Paalanen
  2023-02-24 16:59                         ` Rob Clark
  2023-02-24 19:44                         ` Rob Clark
  2 siblings, 1 reply; 93+ messages in thread
From: Tvrtko Ursulin @ 2023-02-24 10:50 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Rob Clark, Rob Clark, Tvrtko Ursulin, Gustavo Padovan,
	Michel Dänzer, Rodrigo Vivi, open list, dri-devel,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK


On 24/02/2023 10:24, Pekka Paalanen wrote:
> On Fri, 24 Feb 2023 09:41:46 +0000
> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> 
>> On 24/02/2023 09:26, Pekka Paalanen wrote:
>>> On Thu, 23 Feb 2023 10:51:48 -0800
>>> Rob Clark <robdclark@gmail.com> wrote:
>>>    
>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>>>>>
>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
>>>>> Rob Clark <robdclark@gmail.com> wrote:
>>>>>      
>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>>>
>>> ...
>>>    
>>>>>>> On another matter, if the application uses SET_DEADLINE with one
>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
>>>>>>> another timestamp, what should happen?
>>>>>>
>>>>>> The expectation is that many deadline hints can be set on a fence.
>>>>>> The fence signaller should track the soonest deadline.
>>>>>
>>>>> You need to document that as UAPI, since it is observable to userspace.
>>>>> It would be bad if drivers or subsystems would differ in behaviour.
>>>>>      
>>>>
>>>> It is in the end a hint.  It is about giving the driver more
>>>> information so that it can make better choices.  But the driver is
>>>> even free to ignore it.  So maybe "expectation" is too strong of a
>>>> word.  Rather, any other behavior doesn't really make sense.  But it
>>>> could end up being dictated by how the hw and/or fw works.
>>>
>>> It will stop being a hint once it has been implemented and used in the
>>> wild long enough. The kernel userspace regression rules make sure of
>>> that.
>>
>> Yeah, tricky and maybe a gray area in this case. I think we alluded
>> elsewhere in the thread that renaming the thing might be an option.
>>
>> So maybe instead of deadline, which is a very strong word, use something
>> along the lines of "present time hint", or "signalled time hint"? Maybe
>> reads clumsy. Just throwing some ideas for a start.
> 
> You can try, but I fear that if it ever changes behaviour and
> someone notices that, it's labelled as a kernel regression. I don't
> think documentation has ever been the authoritative definition of UABI
> in Linux, it just guides drivers and userspace towards a common
> understanding and common usage patterns.
> 
> So even if the UABI contract is not documented (ugh), you need to be
> prepared to set the UABI contract through kernel implementation.

To play devil's advocate, it probably wouldn't be an ABI regression but 
just a regression. In the same way that what nice(2) priorities mean 
hasn't always been the same over the years, I don't think there is a 
strict contract.

Having said that, it may be different with latency-sensitive stuff such 
as UIs, since it is very observable and can be very painful to users.

> If you do not document the UABI contract, then different drivers are
> likely to implement it differently, leading to differing behaviour.
> Also userspace will invent wild ways to abuse the UABI if there is no
> documentation guiding it on proper use. If userspace or end users
> observe different behaviour, that's bad even if it's not a regression.
> 
> I don't like the situation either, but it is what it is. UABI stability
> trumps everything regardless of whether it was documented or not.
> 
> I bet userspace is going to use this as a "make it faster, make it
> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> library that stamps any and all fences with an expired deadline to
> just squeeze out a little more through some weird side-effect.
> 
> Well, that's hopefully overboard in scaring, but in the end, I would
> like to see UABI documented so I can have a feeling of what it is for
> and how it was intended to be used. That's all.

We share the same concern. If you read elsewhere in these threads you 
will notice I have been calling this an "arms race". If the ability to 
make yourself go faster does not require additional privilege, I also 
worry that everyone will do it, at which point it becomes pointless. So 
yes, I do share this concern about exposing any of this as an 
unprivileged uapi.

Is it possible to limit access to only compositors in some sane way? 
Sounds tricky when dma-fence should be disconnected from DRM..

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24 10:50                         ` Tvrtko Ursulin
@ 2023-02-24 11:00                           ` Pekka Paalanen
  2023-02-24 11:37                             ` Tvrtko Ursulin
  0 siblings, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-24 11:00 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: Rob Clark, Rob Clark, Tvrtko Ursulin, Gustavo Padovan,
	Michel Dänzer, Rodrigo Vivi, open list, dri-devel,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

[-- Attachment #1: Type: text/plain, Size: 5015 bytes --]

On Fri, 24 Feb 2023 10:50:51 +0000
Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:

> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > On Fri, 24 Feb 2023 09:41:46 +0000
> > Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> >   
> >> On 24/02/2023 09:26, Pekka Paalanen wrote:  
> >>> On Thu, 23 Feb 2023 10:51:48 -0800
> >>> Rob Clark <robdclark@gmail.com> wrote:
> >>>      
> >>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> >>>>>
> >>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> >>>>> Rob Clark <robdclark@gmail.com> wrote:
> >>>>>        
> >>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> >>>
> >>> ...
> >>>      
> >>>>>>> On another matter, if the application uses SET_DEADLINE with one
> >>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> >>>>>>> another timestamp, what should happen?  
> >>>>>>
> >>>>>> The expectation is that many deadline hints can be set on a fence.
> >>>>>> The fence signaller should track the soonest deadline.  
> >>>>>
> >>>>> You need to document that as UAPI, since it is observable to userspace.
> >>>>> It would be bad if drivers or subsystems would differ in behaviour.
> >>>>>        
> >>>>
> >>>> It is in the end a hint.  It is about giving the driver more
> >>>> information so that it can make better choices.  But the driver is
> >>>> even free to ignore it.  So maybe "expectation" is too strong of a
> >>>> word.  Rather, any other behavior doesn't really make sense.  But it
> >>>> could end up being dictated by how the hw and/or fw works.  
> >>>
> >>> It will stop being a hint once it has been implemented and used in the
> >>> wild long enough. The kernel userspace regression rules make sure of
> >>> that.  
> >>
> >> Yeah, tricky and maybe a gray area in this case. I think we alluded
> >> elsewhere in the thread that renaming the thing might be an option.
> >>
> >> So maybe instead of deadline, which is a very strong word, use something
> >> along the lines of "present time hint", or "signalled time hint"? Maybe
> >> reads clumsy. Just throwing some ideas for a start.  
> > 
> > You can try, but I fear that if it ever changes behaviour and
> > someone notices that, it's labelled as a kernel regression. I don't
> > think documentation has ever been the authoritative definition of UABI
> > in Linux, it just guides drivers and userspace towards a common
> > understanding and common usage patterns.
> > 
> > So even if the UABI contract is not documented (ugh), you need to be
> > prepared to set the UABI contract through kernel implementation.  
> 
> To be the devil's advocate it probably wouldn't be an ABI regression but 
> just a regression. Same way as what nice(2) priorities mean hasn't 
> always been the same over the years, I don't think there is a strict 
> contract.
> 
> Having said that, it may be different with latency sensitive stuff such 
> as UIs though since it is very observable and can be very painful to users.
> 
> > If you do not document the UABI contract, then different drivers are
> > likely to implement it differently, leading to differing behaviour.
> > Also userspace will invent wild ways to abuse the UABI if there is no
> > documentation guiding it on proper use. If userspace or end users
> > observe different behaviour, that's bad even if it's not a regression.
> > 
> > I don't like the situation either, but it is what it is. UABI stability
> > trumps everything regardless of whether it was documented or not.
> > 
> > I bet userspace is going to use this as a "make it faster, make it
> > hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > library that stamps any and all fences with an expired deadline to
> > just squeeze out a little more through some weird side-effect.
> > 
> > Well, that's hopefully overboard in scaring, but in the end, I would
> > like to see UABI documented so I can have a feeling of what it is for
> > and how it was intended to be used. That's all.  
> 
> We share the same concern. If you read elsewhere in these threads you 
> will notice I have been calling this an "arms race". If the ability to 
> make yourself go faster does not require additional privilege I also 
> worry everyone will do it at which point it becomes pointless. So yes, I 
> do share this concern about exposing any of this as an unprivileged uapi.
> 
> Is it possible to limit access to only compositors in some sane way? 
> Sounds tricky when dma-fence should be disconnected from DRM..

Maybe it's not that bad in this particular case, because we are talking
only about boosting GPU clocks, which benefits everyone (except
battery life) and does not penalize other programs the way that e.g.
job priorities do.

Drivers are not going to use the deadline for scheduling priorities,
right? I don't recall seeing any mention of that.

...right?


Thanks,
pq

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24 11:00                           ` Pekka Paalanen
@ 2023-02-24 11:37                             ` Tvrtko Ursulin
  2023-02-24 15:26                               ` Luben Tuikov
  0 siblings, 1 reply; 93+ messages in thread
From: Tvrtko Ursulin @ 2023-02-24 11:37 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Rob Clark, Rob Clark, Tvrtko Ursulin, Gustavo Padovan,
	Michel Dänzer, Rodrigo Vivi, open list, dri-devel,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK


On 24/02/2023 11:00, Pekka Paalanen wrote:
> On Fri, 24 Feb 2023 10:50:51 +0000
> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> 
>> On 24/02/2023 10:24, Pekka Paalanen wrote:
>>> On Fri, 24 Feb 2023 09:41:46 +0000
>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
>>>    
>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
>>>>> Rob Clark <robdclark@gmail.com> wrote:
>>>>>       
>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>>>>>>>
>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
>>>>>>>         
>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>>>>>
>>>>> ...
>>>>>       
>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
>>>>>>>>> another timestamp, what should happen?
>>>>>>>>
>>>>>>>> The expectation is that many deadline hints can be set on a fence.
>>>>>>>> The fence signaller should track the soonest deadline.
>>>>>>>
>>>>>>> You need to document that as UAPI, since it is observable to userspace.
>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
>>>>>>>         
>>>>>>
>>>>>> It is in the end a hint.  It is about giving the driver more
>>>>>> information so that it can make better choices.  But the driver is
>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
>>>>>> could end up being dictated by how the hw and/or fw works.
>>>>>
>>>>> It will stop being a hint once it has been implemented and used in the
>>>>> wild long enough. The kernel userspace regression rules make sure of
>>>>> that.
>>>>
>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
>>>> elsewhere in the thread that renaming the thing might be an option.
>>>>
>>>> So maybe instead of deadline, which is a very strong word, use something
>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
>>>> reads clumsy. Just throwing some ideas for a start.
>>>
>>> You can try, but I fear that if it ever changes behaviour and
>>> someone notices that, it's labelled as a kernel regression. I don't
>>> think documentation has ever been the authoritative definition of UABI
>>> in Linux, it just guides drivers and userspace towards a common
>>> understanding and common usage patterns.
>>>
>>> So even if the UABI contract is not documented (ugh), you need to be
>>> prepared to set the UABI contract through kernel implementation.
>>
>> To be the devil's advocate it probably wouldn't be an ABI regression but
>> just a regression. Same way as what nice(2) priorities mean hasn't
>> always been the same over the years, I don't think there is a strict
>> contract.
>>
>> Having said that, it may be different with latency sensitive stuff such
>> as UIs though since it is very observable and can be very painful to users.
>>
>>> If you do not document the UABI contract, then different drivers are
>>> likely to implement it differently, leading to differing behaviour.
>>> Also userspace will invent wild ways to abuse the UABI if there is no
>>> documentation guiding it on proper use. If userspace or end users
>>> observe different behaviour, that's bad even if it's not a regression.
>>>
>>> I don't like the situation either, but it is what it is. UABI stability
>>> trumps everything regardless of whether it was documented or not.
>>>
>>> I bet userspace is going to use this as a "make it faster, make it
>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
>>> library that stamps any and all fences with an expired deadline to
>>> just squeeze out a little more through some weird side-effect.
>>>
>>> Well, that's hopefully overboard in scaring, but in the end, I would
>>> like to see UABI documented so I can have a feeling of what it is for
>>> and how it was intended to be used. That's all.
>>
>> We share the same concern. If you read elsewhere in these threads you
>> will notice I have been calling this an "arms race". If the ability to
>> make yourself go faster does not require additional privilege I also
>> worry everyone will do it at which point it becomes pointless. So yes, I
>> do share this concern about exposing any of this as an unprivileged uapi.
>>
>> Is it possible to limit access to only compositors in some sane way?
>> Sounds tricky when dma-fence should be disconnected from DRM..
> 
> Maybe it's not that bad in this particular case, because we are talking
> only about boosting GPU clocks which benefits everyone (except
> battery life) and it does not penalize other programs like e.g.
> job priorities do.

Apart from efficiency, which you mentioned and which does not always favor 
higher clocks, the thermal budget is sometimes also shared between CPU and 
GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to 
make optimal choices without full coordination between both schedulers.

But that is not even the main point, which is that if everyone sets an 
immediate deadline then having the deadline API is a bit pointless. For 
instance, there is a reason negative nice needs CAP_SYS_ADMIN.

However, Rob has also pointed out the existence of uclamp.min via 
sched_setattr, which is unprivileged and can influence frequency 
selection in the CPU world, so I conceded on that point. If the CPU 
world has accepted it, so can we, I guess.

So IMO we are back to whether we can agree that defining it as a hint is 
good enough, be it via the name of the ioctl/flag itself or via documentation.

> Drivers are not going to use the deadline for scheduling priorities,
> right? I don't recall seeing any mention of that.
> 
> ...right?

I wouldn't have thought it would be beneficial to preclude that, or 
assume what drivers would do with the info to begin with.

For instance, in i915 we almost had a deadline-based scheduler which was 
much fairer than the current priority-sorted FIFO, and in an ideal world 
we would either revive or re-implement that idea. In that case, 
considering the fence deadline would naturally slot in and give true 
integration with compositor deadlines (not just boosting clocks and 
praying it helps).

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24 11:37                             ` Tvrtko Ursulin
@ 2023-02-24 15:26                               ` Luben Tuikov
  2023-02-24 17:59                                 ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Luben Tuikov @ 2023-02-24 15:26 UTC (permalink / raw)
  To: Tvrtko Ursulin, Pekka Paalanen
  Cc: Rob Clark, Rob Clark, Tvrtko Ursulin, Gustavo Padovan,
	Michel Dänzer, Rodrigo Vivi, open list, dri-devel,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> 
> On 24/02/2023 11:00, Pekka Paalanen wrote:
>> On Fri, 24 Feb 2023 10:50:51 +0000
>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>> On 24/02/2023 10:24, Pekka Paalanen wrote:
>>>> On Fri, 24 Feb 2023 09:41:46 +0000
>>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
>>>>    
>>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
>>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
>>>>>> Rob Clark <robdclark@gmail.com> wrote:
>>>>>>       
>>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>>>>>>>>
>>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
>>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
>>>>>>>>         
>>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>>>>>>
>>>>>> ...
>>>>>>       
>>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
>>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
>>>>>>>>>> another timestamp, what should happen?
>>>>>>>>>
>>>>>>>>> The expectation is that many deadline hints can be set on a fence.
>>>>>>>>> The fence signaller should track the soonest deadline.
>>>>>>>>
>>>>>>>> You need to document that as UAPI, since it is observable to userspace.
>>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
>>>>>>>>         
>>>>>>>
>>>>>>> It is in the end a hint.  It is about giving the driver more
>>>>>>> information so that it can make better choices.  But the driver is
>>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
>>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
>>>>>>> could end up being dictated by how the hw and/or fw works.
>>>>>>
>>>>>> It will stop being a hint once it has been implemented and used in the
>>>>>> wild long enough. The kernel userspace regression rules make sure of
>>>>>> that.
>>>>>
>>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
>>>>> elsewhere in the thread that renaming the thing might be an option.
>>>>>
>>>>> So maybe instead of deadline, which is a very strong word, use something
>>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
>>>>> reads clumsy. Just throwing some ideas for a start.
>>>>
>>>> You can try, but I fear that if it ever changes behaviour and
>>>> someone notices that, it's labelled as a kernel regression. I don't
>>>> think documentation has ever been the authoritative definition of UABI
>>>> in Linux, it just guides drivers and userspace towards a common
>>>> understanding and common usage patterns.
>>>>
>>>> So even if the UABI contract is not documented (ugh), you need to be
>>>> prepared to set the UABI contract through kernel implementation.
>>>
>>> To be the devil's advocate it probably wouldn't be an ABI regression but
>>> just a regression. Same way as what nice(2) priorities mean hasn't
>>> always been the same over the years, I don't think there is a strict
>>> contract.
>>>
>>> Having said that, it may be different with latency sensitive stuff such
>>> as UIs though since it is very observable and can be very painful to users.
>>>
>>>> If you do not document the UABI contract, then different drivers are
>>>> likely to implement it differently, leading to differing behaviour.
>>>> Also userspace will invent wild ways to abuse the UABI if there is no
>>>> documentation guiding it on proper use. If userspace or end users
>>>> observe different behaviour, that's bad even if it's not a regression.
>>>>
>>>> I don't like the situation either, but it is what it is. UABI stability
>>>> trumps everything regardless of whether it was documented or not.
>>>>
>>>> I bet userspace is going to use this as a "make it faster, make it
>>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
>>>> library that stamps any and all fences with an expired deadline to
>>>> just squeeze out a little more through some weird side-effect.
>>>>
>>>> Well, that's hopefully overboard in scaring, but in the end, I would
>>>> like to see UABI documented so I can have a feeling of what it is for
>>>> and how it was intended to be used. That's all.
>>>
>>> We share the same concern. If you read elsewhere in these threads you
>>> will notice I have been calling this an "arms race". If the ability to
>>> make yourself go faster does not require additional privilege I also
>>> worry everyone will do it at which point it becomes pointless. So yes, I
>>> do share this concern about exposing any of this as an unprivileged uapi.
>>>
>>> Is it possible to limit access to only compositors in some sane way?
>>> Sounds tricky when dma-fence should be disconnected from DRM..
>>
>> Maybe it's not that bad in this particular case, because we are talking
>> only about boosting GPU clocks which benefits everyone (except
>> battery life) and it does not penalize other programs like e.g.
>> job priorities do.
> 
> Apart from efficiency that you mentioned, which does not always favor 
> higher clocks, sometimes thermal budget is also shared between CPU and 
> GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to 
> make optimal choices without the full coordination between both schedulers.
> 
> But that is even not the main point, which is that if everyone sets the 
> immediate deadline then having the deadline API is a bit pointless. For 
> instance there is a reason negative nice needs CAP_SYS_ADMIN.
> 
> However Rob has also pointed out the existence of uclamp.min via 
> sched_setattr which is unprivileged and can influence frequency 
> selection in the CPU world, so I conceded on that point. If CPU world 
> has accepted it so can we I guess.
> 
> So IMO we are back to whether we can agree defining it is a hint is good 
> enough, be it via the name of the ioctl/flag itself or via documentation.
> 
>> Drivers are not going to use the deadline for scheduling priorities,
>> right? I don't recall seeing any mention of that.
>>
>> ...right?
> 
> I wouldn't have thought it would be beneficial to preclude that, or 
> assume what drivers would do with the info to begin with.
> 
> For instance in i915 we almost had a deadline based scheduler which was 
> much fairer than the current priority sorted fifo and in an ideal world 
> we would either revive or re-implement that idea. In which case 
> considering the fence deadline would naturally slot in and give true 
> integration with compositor deadlines (not just boost clocks and pray it 
> helps).

How is user-space to decide whether to use ioctl(SET_DEADLINE) or
poll(POLLPRI)?
-- 
Regards,
Luben


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24 10:24                       ` Pekka Paalanen
  2023-02-24 10:50                         ` Tvrtko Ursulin
@ 2023-02-24 16:59                         ` Rob Clark
  2023-02-24 19:44                         ` Rob Clark
  2 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-24 16:59 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Tvrtko Ursulin, Rob Clark, Tvrtko Ursulin, Gustavo Padovan,
	Michel Dänzer, Rodrigo Vivi, open list, dri-devel,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

On Fri, Feb 24, 2023 at 2:24 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Fri, 24 Feb 2023 09:41:46 +0000
> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
>
> > On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > On Thu, 23 Feb 2023 10:51:48 -0800
> > > Rob Clark <robdclark@gmail.com> wrote:
> > >
> > >> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >>>
> > >>> On Wed, 22 Feb 2023 07:37:26 -0800
> > >>> Rob Clark <robdclark@gmail.com> wrote:
> > >>>
> > >>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >
> > > ...
> > >
> > >>>>> On another matter, if the application uses SET_DEADLINE with one
> > >>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > >>>>> another timestamp, what should happen?
> > >>>>
> > >>>> The expectation is that many deadline hints can be set on a fence.
> > >>>> The fence signaller should track the soonest deadline.
> > >>>
> > >>> You need to document that as UAPI, since it is observable to userspace.
> > >>> It would be bad if drivers or subsystems would differ in behaviour.
> > >>>
> > >>
> > >> It is in the end a hint.  It is about giving the driver more
> > >> information so that it can make better choices.  But the driver is
> > >> even free to ignore it.  So maybe "expectation" is too strong of a
> > >> word.  Rather, any other behavior doesn't really make sense.  But it
> > >> could end up being dictated by how the hw and/or fw works.
> > >
> > > It will stop being a hint once it has been implemented and used in the
> > > wild long enough. The kernel userspace regression rules make sure of
> > > that.
> >
> > Yeah, tricky and maybe a gray area in this case. I think we alluded
> > elsewhere in the thread that renaming the thing might be an option.
> >
> > So maybe instead of deadline, which is a very strong word, use something
> > along the lines of "present time hint", or "signalled time hint"? Maybe
> > reads clumsy. Just throwing some ideas for a start.
>
> You can try, but I fear that if it ever changes behaviour and
> someone notices that, it's labelled as a kernel regression. I don't
> think documentation has ever been the authoritative definition of UABI
> in Linux, it just guides drivers and userspace towards a common
> understanding and common usage patterns.
>
> So even if the UABI contract is not documented (ugh), you need to be
> prepared to set the UABI contract through kernel implementation.
>
> If you do not document the UABI contract, then different drivers are
> likely to implement it differently, leading to differing behaviour.
> Also userspace will invent wild ways to abuse the UABI if there is no
> documentation guiding it on proper use. If userspace or end users
> observe different behaviour, that's bad even if it's not a regression.
>
> I don't like the situation either, but it is what it is. UABI stability
> trumps everything regardless of whether it was documented or not.
>
> I bet userspace is going to use this as a "make it faster, make it
> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> library that stamps any and all fences with an expired deadline to
> just squeeze out a little more through some weird side-effect.

Ok, maybe we can rename the SET_DEADLINE ioctl to SPACEBAR_HEATER ;-)

BR,
-R

> Well, that's hopefully overboard in scaring, but in the end, I would
> like to see UABI documented so I can have a feeling of what it is for
> and how it was intended to be used. That's all.
>
>
> Thanks,
> pq

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24 15:26                               ` Luben Tuikov
@ 2023-02-24 17:59                                 ` Rob Clark
  2023-02-27 21:35                                   ` Rodrigo Vivi
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-24 17:59 UTC (permalink / raw)
  To: Luben Tuikov
  Cc: Tvrtko Ursulin, Pekka Paalanen, Rob Clark, Tvrtko Ursulin,
	Gustavo Padovan, Michel Dänzer, Rodrigo Vivi, open list,
	dri-devel, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK,
	Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
>
> On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> >
> > On 24/02/2023 11:00, Pekka Paalanen wrote:
> >> On Fri, 24 Feb 2023 10:50:51 +0000
> >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> >>
> >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> >>>>
> >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> >>>>>>
> >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >>>>>>>>
> >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> >>>>>>>>
> >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >>>>>>
> >>>>>> ...
> >>>>>>
> >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> >>>>>>>>>> another timestamp, what should happen?
> >>>>>>>>>
> >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> >>>>>>>>> The fence signaller should track the soonest deadline.
> >>>>>>>>
> >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> >>>>>>>>
> >>>>>>>
> >>>>>>> It is in the end a hint.  It is about giving the driver more
> >>>>>>> information so that it can make better choices.  But the driver is
> >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> >>>>>>> could end up being dictated by how the hw and/or fw works.
> >>>>>>
> >>>>>> It will stop being a hint once it has been implemented and used in the
> >>>>>> wild long enough. The kernel userspace regression rules make sure of
> >>>>>> that.
> >>>>>
> >>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
> >>>>> elsewhere in the thread that renaming the thing might be an option.
> >>>>>
> >>>>> So maybe instead of deadline, which is a very strong word, use something
> >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> >>>>> reads clumsy. Just throwing some ideas for a start.
> >>>>
> >>>> You can try, but I fear that if it ever changes behaviour and
> >>>> someone notices that, it's labelled as a kernel regression. I don't
> >>>> think documentation has ever been the authoritative definition of UABI
> >>>> in Linux, it just guides drivers and userspace towards a common
> >>>> understanding and common usage patterns.
> >>>>
> >>>> So even if the UABI contract is not documented (ugh), you need to be
> >>>> prepared to set the UABI contract through kernel implementation.
> >>>
> >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> >>> just a regression. Same way as what nice(2) priorities mean hasn't
> >>> always been the same over the years, I don't think there is a strict
> >>> contract.
> >>>
> >>> Having said that, it may be different with latency sensitive stuff such
> >>> as UIs though since it is very observable and can be very painful to users.
> >>>
> >>>> If you do not document the UABI contract, then different drivers are
> >>>> likely to implement it differently, leading to differing behaviour.
> >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> >>>> documentation guiding it on proper use. If userspace or end users
> >>>> observe different behaviour, that's bad even if it's not a regression.
> >>>>
> >>>> I don't like the situation either, but it is what it is. UABI stability
> >>>> trumps everything regardless of whether it was documented or not.
> >>>>
> >>>> I bet userspace is going to use this as a "make it faster, make it
> >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> >>>> library that stamps any and all fences with an expired deadline to
> >>>> just squeeze out a little more through some weird side-effect.
> >>>>
> >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> >>>> like to see UABI documented so I can have a feeling of what it is for
> >>>> and how it was intended to be used. That's all.
> >>>
> >>> We share the same concern. If you read elsewhere in these threads you
> >>> will notice I have been calling this an "arms race". If the ability to
> >>> make yourself go faster does not require additional privilege I also
> >>> worry everyone will do it at which point it becomes pointless. So yes, I
> >>> do share this concern about exposing any of this as an unprivileged uapi.
> >>>
> >>> Is it possible to limit access to only compositors in some sane way?
> >>> Sounds tricky when dma-fence should be disconnected from DRM..
> >>
> >> Maybe it's not that bad in this particular case, because we are talking
> >> only about boosting GPU clocks which benefits everyone (except
> >> battery life) and it does not penalize other programs like e.g.
> >> job priorities do.
> >
> > Apart from efficiency that you mentioned, which does not always favor
> > higher clocks, sometimes thermal budget is also shared between CPU and
> > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > make optimal choices without the full coordination between both schedulers.
> >
> > But that is not even the main point, which is that if everyone sets the
> > immediate deadline then having the deadline API is a bit pointless. For
> > instance there is a reason negative nice needs CAP_SYS_ADMIN.
> >
> > However Rob has also pointed out the existence of uclamp.min via
> > sched_setattr which is unprivileged and can influence frequency
> > selection in the CPU world, so I conceded on that point. If CPU world
> > has accepted it so can we I guess.
> >
> > So IMO we are back to whether we can agree defining it is a hint is good
> > enough, be it via the name of the ioctl/flag itself or via documentation.
> >
> >> Drivers are not going to use the deadline for scheduling priorities,
> >> right? I don't recall seeing any mention of that.
> >>
> >> ...right?
> >
> > I wouldn't have thought it would be beneficial to preclude that, or
> > assume what drivers would do with the info to begin with.
> >
> > For instance in i915 we almost had a deadline based scheduler which was
> > much fairer than the current priority sorted fifo and in an ideal world
> > we would either revive or re-implement that idea. In which case
> > considering the fence deadline would naturally slot in and give true
> > integration with compositor deadlines (not just boost clocks and pray it
> > helps).
> How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> poll(POLLPRI)?

Implementations of blocking GL/VK/CL APIs, such as glFinish(), would use
poll(POLLPRI).  They could also set an immediate deadline and then call
poll() without POLLPRI.

Other than compositors which do frame-pacing I expect the main usage
of either of these is mesa.
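To make the two options concrete, here is a rough userspace sketch. The
SYNC_IOC_SET_DEADLINE ioctl number and struct layout follow what this series
proposes and are assumptions, not final uapi:

```c
/* Sketch of the two userspace strategies above, for a sync_file fd.
 * SYNC_IOC_SET_DEADLINE's request number and struct layout are assumed
 * from this series and may differ in the final uapi. */
#include <stdint.h>
#include <poll.h>
#include <sys/ioctl.h>

struct sync_set_deadline {
	uint64_t deadline_ns;	/* CLOCK_MONOTONIC timestamp */
	uint64_t pad;		/* must be zero */
};

/* '>' is the established sync_file ioctl magic; the nr is an assumption */
#define SYNC_IOC_SET_DEADLINE	_IOW('>', 5, struct sync_set_deadline)

static inline uint64_t deadline_ns(uint64_t sec, uint64_t nsec)
{
	return sec * 1000000000ull + nsec;
}

/* Strategy 1: blocking wait that also signals urgency (glFinish()-style).
 * POLLPRI tells the fence signaller "someone is waiting right now". */
static int wait_fence_urgent(int fence_fd, int timeout_ms)
{
	struct pollfd pfd = { .fd = fence_fd, .events = POLLPRI };

	return poll(&pfd, 1, timeout_ms);
}

/* Strategy 2: set an immediate deadline hint, then wait without POLLPRI.
 * The ioctl result is deliberately ignored: it is only a hint. */
static int wait_fence_with_deadline(int fence_fd, uint64_t now_ns,
				    int timeout_ms)
{
	struct sync_set_deadline arg = { .deadline_ns = now_ns, .pad = 0 };
	struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };

	(void)ioctl(fence_fd, SYNC_IOC_SET_DEADLINE, &arg);
	return poll(&pfd, 1, timeout_ms);
}
```

Either way the driver only receives a hint; the fence still signals whenever
the work actually completes.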

BR,
-R

> --
> Regards,
> Luben
>

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24 10:24                       ` Pekka Paalanen
  2023-02-24 10:50                         ` Tvrtko Ursulin
  2023-02-24 16:59                         ` Rob Clark
@ 2023-02-24 19:44                         ` Rob Clark
  2023-02-27  9:34                           ` Pekka Paalanen
  2 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-24 19:44 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Tvrtko Ursulin, Rob Clark, Tvrtko Ursulin, Gustavo Padovan,
	Michel Dänzer, Rodrigo Vivi, open list, dri-devel,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

On Fri, Feb 24, 2023 at 2:24 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Fri, 24 Feb 2023 09:41:46 +0000
> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
>
> > On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > On Thu, 23 Feb 2023 10:51:48 -0800
> > > Rob Clark <robdclark@gmail.com> wrote:
> > >
> > >> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >>>
> > >>> On Wed, 22 Feb 2023 07:37:26 -0800
> > >>> Rob Clark <robdclark@gmail.com> wrote:
> > >>>
> > >>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >
> > > ...
> > >
> > >>>>> On another matter, if the application uses SET_DEADLINE with one
> > >>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > >>>>> another timestamp, what should happen?
> > >>>>
> > >>>> The expectation is that many deadline hints can be set on a fence.
> > >>>> The fence signaller should track the soonest deadline.
> > >>>
> > >>> You need to document that as UAPI, since it is observable to userspace.
> > >>> It would be bad if drivers or subsystems would differ in behaviour.
> > >>>
> > >>
> > >> It is in the end a hint.  It is about giving the driver more
> > >> information so that it can make better choices.  But the driver is
> > >> even free to ignore it.  So maybe "expectation" is too strong of a
> > >> word.  Rather, any other behavior doesn't really make sense.  But it
> > >> could end up being dictated by how the hw and/or fw works.
> > >
> > > It will stop being a hint once it has been implemented and used in the
> > > wild long enough. The kernel userspace regression rules make sure of
> > > that.
> >
> > Yeah, tricky and maybe a gray area in this case. I think we alluded
> > elsewhere in the thread that renaming the thing might be an option.
> >
> > So maybe instead of deadline, which is a very strong word, use something
> > along the lines of "present time hint", or "signalled time hint"? Maybe
> > reads clumsy. Just throwing some ideas for a start.
>
> You can try, but I fear that if it ever changes behaviour and
> someone notices that, it's labelled as a kernel regression. I don't
> think documentation has ever been the authoritative definition of UABI
> in Linux, it just guides drivers and userspace towards a common
> understanding and common usage patterns.
>
> So even if the UABI contract is not documented (ugh), you need to be
> prepared to set the UABI contract through kernel implementation.
>
> If you do not document the UABI contract, then different drivers are
> likely to implement it differently, leading to differing behaviour.
> Also userspace will invent wild ways to abuse the UABI if there is no
> documentation guiding it on proper use. If userspace or end users
> observe different behaviour, that's bad even if it's not a regression.
>
> I don't like the situation either, but it is what it is. UABI stability
> trumps everything regardless of whether it was documented or not.
>
> I bet userspace is going to use this as a "make it faster, make it
> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> library that stamps any and all fences with an expired deadline to
> just squeeze out a little more through some weird side-effect.

Userspace already has various (driver specific) debugfs/sysfs that it
can use if it wants to make it hotter and drain batteries faster, so
I'm not seeing a strong need to cater to the "turn it up to eleven"
crowd here.  And really your point feels like a good reason to _not_
document this as anything more than a hint.

Back in the real world, mobile games are already well aware of the fps
vs battery-life (and therefore gameplay) tradeoff.  But what is
missing is a way to inform the kernel of userspace's intentions, so
that gpu dvfs can make intelligent decisions.  This series is meant to
bridge that gap.
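On the driver side, the dvfs hookup amounts to remembering the soonest hint
seen on a fence and comparing it against the predicted completion time. A
minimal sketch of that logic (illustrative only, not any particular driver's
implementation):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative per-fence state; a real driver would hang this off its
 * dma_fence and take the fence lock around updates. */
struct fence_hints {
	bool     has_deadline;
	uint64_t deadline_ns;	/* soonest hint seen so far */
};

/* The set_deadline semantics discussed in this thread: many hints may be
 * set on one fence, and only the soonest one matters. */
static void fence_set_deadline(struct fence_hints *f, uint64_t deadline_ns)
{
	if (!f->has_deadline || deadline_ns < f->deadline_ns) {
		f->has_deadline = true;
		f->deadline_ns = deadline_ns;
	}
}

/* A dvfs governor could then boost clocks only when the estimated
 * remaining runtime would miss the soonest deadline. */
static bool should_boost(const struct fence_hints *f,
			 uint64_t now_ns, uint64_t est_remaining_ns)
{
	return f->has_deadline &&
	       (now_ns + est_remaining_ns > f->deadline_ns);
}
```

The point is that the hint only changes *when* the work finishes, never
*whether* it finishes, which is why it can stay a hint.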

BR,
-R

> Well, that's hopefully overboard in scaring, but in the end, I would
> like to see UABI documented so I can have a feeling of what it is for
> and how it was intended to be used. That's all.
>
>
> Thanks,
> pq


* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24 19:44                         ` Rob Clark
@ 2023-02-27  9:34                           ` Pekka Paalanen
  2023-02-27 18:43                             ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Pekka Paalanen @ 2023-02-27  9:34 UTC (permalink / raw)
  To: Rob Clark
  Cc: Tvrtko Ursulin, Rob Clark, Tvrtko Ursulin, Gustavo Padovan,
	Michel Dänzer, Rodrigo Vivi, open list, dri-devel,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK


On Fri, 24 Feb 2023 11:44:53 -0800
Rob Clark <robdclark@gmail.com> wrote:

> On Fri, Feb 24, 2023 at 2:24 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >
> > On Fri, 24 Feb 2023 09:41:46 +0000
> > Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> >  
> > > On 24/02/2023 09:26, Pekka Paalanen wrote:  
> > > > On Thu, 23 Feb 2023 10:51:48 -0800
> > > > Rob Clark <robdclark@gmail.com> wrote:
> > > >  
> > > >> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> > > >>>
> > > >>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > >>> Rob Clark <robdclark@gmail.com> wrote:
> > > >>>  
> > > >>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:  
> > > >
> > > > ...
> > > >  
> > > >>>>> On another matter, if the application uses SET_DEADLINE with one
> > > >>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > >>>>> another timestamp, what should happen?  
> > > >>>>
> > > >>>> The expectation is that many deadline hints can be set on a fence.
> > > >>>> The fence signaller should track the soonest deadline.  
> > > >>>
> > > >>> You need to document that as UAPI, since it is observable to userspace.
> > > >>> It would be bad if drivers or subsystems would differ in behaviour.
> > > >>>  
> > > >>
> > > >> It is in the end a hint.  It is about giving the driver more
> > > >> information so that it can make better choices.  But the driver is
> > > >> even free to ignore it.  So maybe "expectation" is too strong of a
> > > >> word.  Rather, any other behavior doesn't really make sense.  But it
> > > >> could end up being dictated by how the hw and/or fw works.  
> > > >
> > > > It will stop being a hint once it has been implemented and used in the
> > > > wild long enough. The kernel userspace regression rules make sure of
> > > > that.  
> > >
> > > Yeah, tricky and maybe a gray area in this case. I think we alluded
> > > elsewhere in the thread that renaming the thing might be an option.
> > >
> > > So maybe instead of deadline, which is a very strong word, use something
> > > along the lines of "present time hint", or "signalled time hint"? Maybe
> > > reads clumsy. Just throwing some ideas for a start.  
> >
> > You can try, but I fear that if it ever changes behaviour and
> > someone notices that, it's labelled as a kernel regression. I don't
> > think documentation has ever been the authoritative definition of UABI
> > in Linux, it just guides drivers and userspace towards a common
> > understanding and common usage patterns.
> >
> > So even if the UABI contract is not documented (ugh), you need to be
> > prepared to set the UABI contract through kernel implementation.
> >
> > If you do not document the UABI contract, then different drivers are
> > likely to implement it differently, leading to differing behaviour.
> > Also userspace will invent wild ways to abuse the UABI if there is no
> > documentation guiding it on proper use. If userspace or end users
> > observe different behaviour, that's bad even if it's not a regression.
> >
> > I don't like the situation either, but it is what it is. UABI stability
> > trumps everything regardless of whether it was documented or not.
> >
> > I bet userspace is going to use this as a "make it faster, make it
> > hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > library that stamps any and all fences with an expired deadline to
> > just squeeze out a little more through some weird side-effect.  
> 
> Userspace already has various (driver specific) debugfs/sysfs that it
> can use if it wants to make it hotter and drain batteries faster, so
> I'm not seeing a strong need to cater to the "turn it up to eleven"
> crowd here.  And really your point feels like a good reason to _not_
> document this as anything more than a hint.

My point is that no matter what you say in documentation or leave
unsaid, people can and will abuse this by the behaviour it provides
anyway, like every other UABI.

So why not just document what it is supposed to do? It cannot get any
worse. Maybe you get lucky instead and people don't abuse it that much
if they read the docs.

E.g. can it affect GPU job scheduling, can it affect GPU clocks, can it
affect power consumption, and so on.

> Back in the real world, mobile games are already well aware of the fps
> vs battery-life (and therefore gameplay) tradeoff.  But what is
> missing is a way to inform the kernel of userspace's intentions, so
> that gpu dvfs can make intelligent decisions.  This series is meant to
> bridge that gap.

Then document that. As long as you document it properly: what you
expect it to be used for and how.

Or if this is reserved strictly for Mesa drivers, then document that.

You can also stop CC'ing me if you don't want attention to UABI docs. I
don't read dri-devel@ unless I'm explicitly CC'd nowadays.


Thanks,
pq



* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-27  9:34                           ` Pekka Paalanen
@ 2023-02-27 18:43                             ` Rob Clark
  0 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-27 18:43 UTC (permalink / raw)
  To: Pekka Paalanen
  Cc: Tvrtko Ursulin, Rob Clark, Tvrtko Ursulin, Gustavo Padovan,
	Michel Dänzer, Rodrigo Vivi, open list, dri-devel,
	Sumit Semwal, moderated list:DMA BUFFER SHARING FRAMEWORK,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

On Mon, Feb 27, 2023 at 1:34 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Fri, 24 Feb 2023 11:44:53 -0800
> Rob Clark <robdclark@gmail.com> wrote:
>
> > On Fri, Feb 24, 2023 at 2:24 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >
> > > On Fri, 24 Feb 2023 09:41:46 +0000
> > > Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > >
> > > > On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > > > On Thu, 23 Feb 2023 10:51:48 -0800
> > > > > Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > >> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > >>>
> > > > >>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > > >>> Rob Clark <robdclark@gmail.com> wrote:
> > > > >>>
> > > > >>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > >
> > > > > ...
> > > > >
> > > > >>>>> On another matter, if the application uses SET_DEADLINE with one
> > > > >>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > > >>>>> another timestamp, what should happen?
> > > > >>>>
> > > > >>>> The expectation is that many deadline hints can be set on a fence.
> > > > >>>> The fence signaller should track the soonest deadline.
> > > > >>>
> > > > >>> You need to document that as UAPI, since it is observable to userspace.
> > > > >>> It would be bad if drivers or subsystems would differ in behaviour.
> > > > >>>
> > > > >>
> > > > >> It is in the end a hint.  It is about giving the driver more
> > > > >> information so that it can make better choices.  But the driver is
> > > > >> even free to ignore it.  So maybe "expectation" is too strong of a
> > > > >> word.  Rather, any other behavior doesn't really make sense.  But it
> > > > >> could end up being dictated by how the hw and/or fw works.
> > > > >
> > > > > It will stop being a hint once it has been implemented and used in the
> > > > > wild long enough. The kernel userspace regression rules make sure of
> > > > > that.
> > > >
> > > > Yeah, tricky and maybe a gray area in this case. I think we alluded
> > > > elsewhere in the thread that renaming the thing might be an option.
> > > >
> > > > So maybe instead of deadline, which is a very strong word, use something
> > > > along the lines of "present time hint", or "signalled time hint"? Maybe
> > > > reads clumsy. Just throwing some ideas for a start.
> > >
> > > You can try, but I fear that if it ever changes behaviour and
> > > someone notices that, it's labelled as a kernel regression. I don't
> > > think documentation has ever been the authoritative definition of UABI
> > > in Linux, it just guides drivers and userspace towards a common
> > > understanding and common usage patterns.
> > >
> > > So even if the UABI contract is not documented (ugh), you need to be
> > > prepared to set the UABI contract through kernel implementation.
> > >
> > > If you do not document the UABI contract, then different drivers are
> > > likely to implement it differently, leading to differing behaviour.
> > > Also userspace will invent wild ways to abuse the UABI if there is no
> > > documentation guiding it on proper use. If userspace or end users
> > > observe different behaviour, that's bad even if it's not a regression.
> > >
> > > I don't like the situation either, but it is what it is. UABI stability
> > > trumps everything regardless of whether it was documented or not.
> > >
> > > I bet userspace is going to use this as a "make it faster, make it
> > > hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > > library that stamps any and all fences with an expired deadline to
> > > just squeeze out a little more through some weird side-effect.
> >
> > Userspace already has various (driver specific) debugfs/sysfs that it
> > can use if it wants to make it hotter and drain batteries faster, so
> > I'm not seeing a strong need to cater to the "turn it up to eleven"
> > crowd here.  And really your point feels like a good reason to _not_
> > document this as anything more than a hint.
>
> My point is that no matter what you say in documentation or leave
> unsaid, people can and will abuse this by the behaviour it provides
> anyway, like every other UABI.
>
> So why not just document what it is supposed to do? It cannot get any
> worse. Maybe you get lucky instead and people don't abuse it that much
> if they read the docs.
>
> E.g. can it affect GPU job scheduling, can it affect GPU clocks, can it
> affect power consumption, and so on.

I expect, potentially, all or any, or none of the above ;-)

I guess I could document it as such, just to preempt potential
complaints about broken spacebar-heater.  The question is, where?  I
could add something about fence deadline hints in dma-buf.rst, which
would I think be useful in general for driver writers.  But there
isn't really any existing documents other than headerdoc comments for
sync_file ioctls, or drm_syncobj related ioctls (the latter are really
just for mesa to use, so maybe that is ok).

>
> > Back in the real world, mobile games are already well aware of the fps
> > vs battery-life (and therefore gameplay) tradeoff.  But what is
> > missing is a way to inform the kernel of userspace's intentions, so
> > that gpu dvfs can make intelligent decisions.  This series is meant to
> > bridge that gap.
>
> Then document that. As long as you document it properly: what you
> expect it to be used for and how.
>
> Or if this is reserved strictly for Mesa drivers, then document that.

I expect the SET_DEADLINE ioctl to be useful to compositors, and maybe
a few other cases.  I'd like to use it in virglrenderer to bridge
guest deadline hints to host kernel, for example.
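For the compositor frame-pacing case, the deadline a compositor would hand
to SET_DEADLINE is roughly "next predicted vblank, minus however long
compositing and committing takes". A sketch of that arithmetic (the margin
and refresh values are illustrative; the resulting timestamp would be what
gets passed to the proposed SYNC_IOC_SET_DEADLINE ioctl):

```c
#include <stdint.h>

/* Predict the next vblank from the last observed one, assuming a fixed
 * refresh interval (e.g. ~16666667 ns for 60 Hz). */
static uint64_t next_vblank_ns(uint64_t last_vblank_ns,
			       uint64_t refresh_interval_ns,
			       uint64_t now_ns)
{
	uint64_t periods = (now_ns - last_vblank_ns) / refresh_interval_ns + 1;

	return last_vblank_ns + periods * refresh_interval_ns;
}

/* Deadline = predicted vblank minus the compositor's own budget for
 * compositing and committing once the client buffer is ready.  Returns 0
 * ("as soon as possible") if the budget already exceeds the target. */
static uint64_t frame_deadline_ns(uint64_t vblank_ns,
				  uint64_t compose_budget_ns)
{
	if (compose_budget_ns >= vblank_ns)
		return 0;
	return vblank_ns - compose_budget_ns;
}
```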

BR,
-R

> You can also stop CC'ing me if you don't want attention to UABI docs. I
> don't read dri-devel@ unless I'm explicitly CC'd nowadays.
>
>
> Thanks,
> pq


* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-24 17:59                                 ` Rob Clark
@ 2023-02-27 21:35                                   ` Rodrigo Vivi
  2023-02-27 22:20                                     ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Rodrigo Vivi @ 2023-02-27 21:35 UTC (permalink / raw)
  To: Rob Clark
  Cc: Luben Tuikov, Tvrtko Ursulin, Pekka Paalanen, Rob Clark,
	Tvrtko Ursulin, Gustavo Padovan, Michel Dänzer, open list,
	dri-devel, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK,
	Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

On Fri, Feb 24, 2023 at 09:59:57AM -0800, Rob Clark wrote:
> On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> >
> > On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> > >
> > > On 24/02/2023 11:00, Pekka Paalanen wrote:
> > >> On Fri, 24 Feb 2023 10:50:51 +0000
> > >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > >>
> > >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> > >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > >>>>
> > >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> > >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > >>>>>>
> > >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >>>>>>>>
> > >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> > >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > >>>>>>>>
> > >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > >>>>>>
> > >>>>>> ...
> > >>>>>>
> > >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> > >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > >>>>>>>>>> another timestamp, what should happen?
> > >>>>>>>>>
> > >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> > >>>>>>>>> The fence signaller should track the soonest deadline.
> > >>>>>>>>
> > >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> > >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> > >>>>>>>>
> > >>>>>>>
> > >>>>>>> It is in the end a hint.  It is about giving the driver more
> > >>>>>>> information so that it can make better choices.  But the driver is
> > >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> > >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> > >>>>>>> could end up being dictated by how the hw and/or fw works.
> > >>>>>>
> > >>>>>> It will stop being a hint once it has been implemented and used in the
> > >>>>>> wild long enough. The kernel userspace regression rules make sure of
> > >>>>>> that.
> > >>>>>
> > >>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
> > >>>>> elsewhere in the thread that renaming the thing might be an option.
> > >>>>>
> > >>>>> So maybe instead of deadline, which is a very strong word, use something
> > >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> > >>>>> reads clumsy. Just throwing some ideas for a start.
> > >>>>
> > >>>> You can try, but I fear that if it ever changes behaviour and
> > >>>> someone notices that, it's labelled as a kernel regression. I don't
> > >>>> think documentation has ever been the authoritative definition of UABI
> > >>>> in Linux, it just guides drivers and userspace towards a common
> > >>>> understanding and common usage patterns.
> > >>>>
> > >>>> So even if the UABI contract is not documented (ugh), you need to be
> > >>>> prepared to set the UABI contract through kernel implementation.
> > >>>
> > >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> > >>> just a regression. Same way as what nice(2) priorities mean hasn't
> > >>> always been the same over the years, I don't think there is a strict
> > >>> contract.
> > >>>
> > >>> Having said that, it may be different with latency sensitive stuff such
> > >>> as UIs though since it is very observable and can be very painful to users.
> > >>>
> > >>>> If you do not document the UABI contract, then different drivers are
> > >>>> likely to implement it differently, leading to differing behaviour.
> > >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> > >>>> documentation guiding it on proper use. If userspace or end users
> > >>>> observe different behaviour, that's bad even if it's not a regression.
> > >>>>
> > >>>> I don't like the situation either, but it is what it is. UABI stability
> > >>>> trumps everything regardless of whether it was documented or not.
> > >>>>
> > >>>> I bet userspace is going to use this as a "make it faster, make it
> > >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > >>>> library that stamps any and all fences with an expired deadline to
> > >>>> just squeeze out a little more through some weird side-effect.
> > >>>>
> > >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> > >>>> like to see UABI documented so I can have a feeling of what it is for
> > >>>> and how it was intended to be used. That's all.
> > >>>
> > >>> We share the same concern. If you read elsewhere in these threads you
> > >>> will notice I have been calling this an "arms race". If the ability to
> > >>> make yourself go faster does not require additional privilege, I also
> > >>> worry everyone will do it at which point it becomes pointless. So yes, I
> > >>> do share this concern about exposing any of this as an unprivileged uapi.
> > >>>
> > >>> Is it possible to limit access to only compositors in some sane way?
> > >>> Sounds tricky when dma-fence should be disconnected from DRM..
> > >>
> > >> Maybe it's not that bad in this particular case, because we are talking
> > >> only about boosting GPU clocks which benefits everyone (except
> > >> battery life) and it does not penalize other programs like e.g.
> > >> job priorities do.
> > >
> > > Apart from efficiency that you mentioned, which does not always favor
> > > higher clocks, sometimes thermal budget is also shared between CPU and
> > > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > > make optimal choices without the full coordination between both schedulers.
> > >
> > > But that is not even the main point, which is that if everyone sets the
> > > immediate deadline then having the deadline API is a bit pointless. For
> > > instance there is a reason negative nice needs CAP_SYS_ADMIN.
> > >
> > > However Rob has also pointed out the existence of uclamp.min via
> > > sched_setattr which is unprivileged and can influence frequency
> > > selection in the CPU world, so I conceded on that point. If CPU world
> > > has accepted it so can we I guess.
> > >
> > > So IMO we are back to whether we can agree defining it is a hint is good
> > > enough, be it via the name of the ioctl/flag itself or via documentation.
> > >
> > >> Drivers are not going to use the deadline for scheduling priorities,
> > >> right? I don't recall seeing any mention of that.
> > >>
> > >> ...right?
> > >
> > > I wouldn't have thought it would be beneficial to preclude that, or
> > > assume what drivers would do with the info to begin with.
> > >
> > > For instance in i915 we almost had a deadline based scheduler which was
> > > much fairer than the current priority sorted fifo and in an ideal world
> > > we would either revive or re-implement that idea. In which case
> > > considering the fence deadline would naturally slot in and give true
> > > integration with compositor deadlines (not just boost clocks and pray it
> > > helps).
> > How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> > poll(POLLPRI)?
> 
> Implementations of blocking GL/VK/CL APIs, such as glFinish(), would use
> poll(POLLPRI).  They could also set an immediate deadline and then call
> poll() without POLLPRI.
> 
> Other than compositors which do frame-pacing I expect the main usage
> of either of these is mesa.

Okay, so it looks like we already agreed that having a way to bump frequency
from userspace is acceptable, either because there are already other ways
that you can waste power or because this is already acceptable in the CPU
world.

But why we are doing this in hidden ways then?

Why can't we have this hint per context that is getting executed?
(either with a boost-context flag or with some low/med/max or '-1' to '1'
value like the latency priority)?

I don't like the waitboost because this heuristic fails in some media cases.
I don't like the global setting because we might be alternating a top-priority
with low-priority cases...

So, why not something per context in execution?
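For comparison, the unprivileged per-task analogue already accepted on the
CPU side (uclamp.min via sched_setattr, as raised earlier in the thread)
looks roughly like this. This is a sketch: sched_setattr has no glibc
wrapper, the struct is redeclared locally, and the call can legitimately
fail on kernels built without CONFIG_UCLAMP_TASK:

```c
/* Sketch: raising uclamp.min for the current task, the CPU-world
 * frequency hint that needs no extra privilege. */
#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_FLAG_KEEP_ALL
#define SCHED_FLAG_KEEP_ALL       0x18	/* keep policy and params */
#endif
#ifndef SCHED_FLAG_UTIL_CLAMP_MIN
#define SCHED_FLAG_UTIL_CLAMP_MIN 0x20
#endif

/* Local copy of the uapi struct sched_attr layout. */
struct sched_attr_compat {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

/* Returns 0 on success, -1 with errno set otherwise (e.g. on kernels
 * without CONFIG_UCLAMP_TASK). */
static int set_uclamp_min(uint32_t util_min)
{
	struct sched_attr_compat attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_flags = SCHED_FLAG_KEEP_ALL | SCHED_FLAG_UTIL_CLAMP_MIN;
	attr.sched_util_min = util_min;	/* 0..1024 */

	return syscall(SYS_sched_setattr, 0, &attr, 0);
}
```

Whether a per-context GPU equivalent should look like this, or like the
fence deadline hint, is exactly the open question above.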

> 
> BR,
> -R
> 
> > --
> > Regards,
> > Luben
> >


* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-27 21:35                                   ` Rodrigo Vivi
@ 2023-02-27 22:20                                     ` Rob Clark
  2023-02-27 22:44                                       ` Sebastian Wick
  2023-03-01 15:45                                       ` Rodrigo Vivi
  0 siblings, 2 replies; 93+ messages in thread
From: Rob Clark @ 2023-02-27 22:20 UTC (permalink / raw)
  To: Rodrigo Vivi
  Cc: Luben Tuikov, Tvrtko Ursulin, Pekka Paalanen, Rob Clark,
	Tvrtko Ursulin, Gustavo Padovan, Michel Dänzer, open list,
	dri-devel, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK,
	Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

On Mon, Feb 27, 2023 at 1:36 PM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
>
> On Fri, Feb 24, 2023 at 09:59:57AM -0800, Rob Clark wrote:
> > On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > >
> > > On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> > > >
> > > > On 24/02/2023 11:00, Pekka Paalanen wrote:
> > > >> On Fri, 24 Feb 2023 10:50:51 +0000
> > > >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > >>
> > > >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > > >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> > > >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > >>>>
> > > >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> > > >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > >>>>>>
> > > >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > >>>>>>>>
> > > >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > >>>>>>>>
> > > >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > >>>>>>
> > > >>>>>> ...
> > > >>>>>>
> > > >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> > > >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > >>>>>>>>>> another timestamp, what should happen?
> > > >>>>>>>>>
> > > >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> > > >>>>>>>>> The fence signaller should track the soonest deadline.
> > > >>>>>>>>
> > > >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> > > >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> > > >>>>>>>>
> > > >>>>>>>
> > > >>>>>>> It is in the end a hint.  It is about giving the driver more
> > > >>>>>>> information so that it can make better choices.  But the driver is
> > > >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> > > >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> > > >>>>>>> could end up being dictated by how the hw and/or fw works.
> > > >>>>>>
> > > >>>>>> It will stop being a hint once it has been implemented and used in the
> > > >>>>>> wild long enough. The kernel userspace regression rules make sure of
> > > >>>>>> that.
> > > >>>>>
> > > >>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
> > > >>>>> elsewhere in the thread that renaming the thing might be an option.
> > > >>>>>
> > > >>>>> So maybe instead of deadline, which is a very strong word, use something
> > > >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> > > >>>>> reads clumsy. Just throwing some ideas for a start.
> > > >>>>
> > > >>>> You can try, but I fear that if it ever changes behaviour and
> > > >>>> someone notices that, it's labelled as a kernel regression. I don't
> > > >>>> think documentation has ever been the authoritative definition of UABI
> > > >>>> in Linux, it just guides drivers and userspace towards a common
> > > >>>> understanding and common usage patterns.
> > > >>>>
> > > >>>> So even if the UABI contract is not documented (ugh), you need to be
> > > >>>> prepared to set the UABI contract through kernel implementation.
> > > >>>
> > > >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> > > >>> just a regression. Same way as what nice(2) priorities mean hasn't
> > > >>> always been the same over the years, I don't think there is a strict
> > > >>> contract.
> > > >>>
> > > >>> Having said that, it may be different with latency sensitive stuff such
> > > >>> as UIs though since it is very observable and can be very painful to users.
> > > >>>
> > > >>>> If you do not document the UABI contract, then different drivers are
> > > >>>> likely to implement it differently, leading to differing behaviour.
> > > >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> > > >>>> documentation guiding it on proper use. If userspace or end users
> > > >>>> observe different behaviour, that's bad even if it's not a regression.
> > > >>>>
> > > >>>> I don't like the situation either, but it is what it is. UABI stability
> > > >>>> trumps everything regardless of whether it was documented or not.
> > > >>>>
> > > >>>> I bet userspace is going to use this as a "make it faster, make it
> > > >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > > >>>> library that stamps any and all fences with an expired deadline to
> > > >>>> just squeeze out a little more through some weird side-effect.
> > > >>>>
> > > >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> > > >>>> like to see UABI documented so I can have a feeling of what it is for
> > > >>>> and how it was intended to be used. That's all.
> > > >>>
> > > >>> We share the same concern. If you read elsewhere in these threads you
> > > >>> will notice I have been calling this an "arms race". If the ability to
> > > >>> make yourself go faster does not require additional privilege I also
> > > >>> worry everyone will do it at which point it becomes pointless. So yes, I
> > > >>> do share this concern about exposing any of this as an unprivileged uapi.
> > > >>>
> > > >>> Is it possible to limit access to only compositors in some sane way?
> > > >>> Sounds tricky when dma-fence should be disconnected from DRM..
> > > >>
> > > >> Maybe it's not that bad in this particular case, because we are talking
> > > >> only about boosting GPU clocks which benefits everyone (except
> > > >> battery life) and it does not penalize other programs like e.g.
> > > >> job priorities do.
> > > >
> > > > Apart from efficiency that you mentioned, which does not always favor
> > > > higher clocks, sometimes thermal budget is also shared between CPU and
> > > > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > > > make optimal choices without the full coordination between both schedulers.
> > > >
> > > > But that is not even the main point, which is that if everyone sets the
> > > > immediate deadline then having the deadline API is a bit pointless. For
> > > > instance there is a reason negative nice needs CAP_SYS_ADMIN.
> > > >
> > > > However Rob has also pointed out the existence of uclamp.min via
> > > > sched_setattr which is unprivileged and can influence frequency
> > > > selection in the CPU world, so I conceded on that point. If CPU world
> > > > has accepted it so can we I guess.
> > > >
> > > > So IMO we are back to whether we can agree defining it is a hint is good
> > > > enough, be it via the name of the ioctl/flag itself or via documentation.
> > > >
> > > >> Drivers are not going to use the deadline for scheduling priorities,
> > > >> right? I don't recall seeing any mention of that.
> > > >>
> > > >> ...right?
> > > >
> > > > I wouldn't have thought it would be beneficial to preclude that, or
> > > > assume what drivers would do with the info to begin with.
> > > >
> > > > For instance in i915 we almost had a deadline based scheduler which was
> > > > much fairer than the current priority sorted fifo and in an ideal world
> > > > we would either revive or re-implement that idea. In which case
> > > > considering the fence deadline would naturally slot in and give true
> > > > integration with compositor deadlines (not just boost clocks and pray it
> > > > helps).
> > > How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> > > poll(POLLPRI)?
> >
> > Implementation of blocking gl/vk/cl APIs, like glFinish() would use
> > poll(POLLPRI).  It could also set an immediate deadline and then call
> > poll() without POLLPRI.
> >
> > Other than compositors which do frame-pacing I expect the main usage
> > of either of these is mesa.
>
> Okay, so it looks like we already agreed that having a way to bump frequency
> from userspace is acceptable, both because there are already other ways
> that you can waste power and because this is already acceptable in the CPU
> world.
>
> But why are we doing this in hidden ways then?
>
> Why can't we have this hint per context that is getting executed?
> (either with a boost-context flag or with some low/med/max or '-1' to '1'
> value like the latency priority)?
>
> I don't like the waitboost because this heuristic fails in some media cases.
> I don't like the global setting because we might be alternating a top-priority
> with low-priority cases...
>
> So, why not something per context in execution?
>

It needs to be finer granularity than per-context, because not all
waits should trigger boosting.  For example, virglrenderer ends up
with a thread polling unsignaled fences to know when to signal an
interrupt to the guest virtgpu.  This alone shouldn't trigger
boosting.  (We also wouldn't want to completely disable boosting for
virglrenderer.)  Or the usermode driver could be waiting on a fence to
know when to do some cleanup.

That is not to say that there isn't room for per-context flags to
disable/enable boosting for fences created by that context, meaning it
could be an AND operation for i915 if it needs to be.
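As a rough sketch of that AND operation (all names here are illustrative, not actual i915/msm driver API — the real decision would live in the driver's fence signalling path):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: combining a per-context boost-enable flag
 * with per-fence deadline hints, as described above.  None of
 * these names exist in any real driver. */
struct ctx_hints {
	bool boost_allowed;	/* per-context opt-in/out */
};

/* Boost clocks only if the context permits it AND an unsignaled
 * fence has a deadline that is now (or already past). */
static bool should_boost(const struct ctx_hints *ctx, bool fence_signaled,
			 uint64_t deadline_ns, uint64_t now_ns)
{
	if (!ctx->boost_allowed)
		return false;
	return !fence_signaled && now_ns >= deadline_ns;
}
```

So a polling thread like virglrenderer's (no deadline set) never trips the boost, while a context-level flag can still veto it entirely.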

BR,
-R

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-27 22:20                                     ` Rob Clark
@ 2023-02-27 22:44                                       ` Sebastian Wick
  2023-02-27 23:48                                         ` Rob Clark
  2023-03-01 15:45                                       ` Rodrigo Vivi
  1 sibling, 1 reply; 93+ messages in thread
From: Sebastian Wick @ 2023-02-27 22:44 UTC (permalink / raw)
  To: Rob Clark
  Cc: Rodrigo Vivi, Rob Clark, Tvrtko Ursulin, Tvrtko Ursulin,
	Gustavo Padovan, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Sumit Semwal, open list:SYNC FILE FRAMEWORK

On Mon, Feb 27, 2023 at 11:20 PM Rob Clark <robdclark@gmail.com> wrote:
>
> On Mon, Feb 27, 2023 at 1:36 PM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> >
> > On Fri, Feb 24, 2023 at 09:59:57AM -0800, Rob Clark wrote:
> > > On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > > >
> > > > On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> > > > >
> > > > > On 24/02/2023 11:00, Pekka Paalanen wrote:
> > > > >> On Fri, 24 Feb 2023 10:50:51 +0000
> > > > >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > >>
> > > > >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > > > >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> > > > >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > >>>>
> > > > >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > > >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> > > > >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > >>>>>>
> > > > >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > >>>>>>>>
> > > > >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > > >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > >>>>>>>>
> > > > >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > >>>>>>
> > > > >>>>>> ...
> > > > >>>>>>
> > > > >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> > > > >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > > >>>>>>>>>> another timestamp, what should happen?
> > > > >>>>>>>>>
> > > > >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> > > > >>>>>>>>> The fence signaller should track the soonest deadline.
> > > > >>>>>>>>
> > > > >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> > > > >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> > > > >>>>>>>>
> > > > >>>>>>>
> > > > >>>>>>> It is in the end a hint.  It is about giving the driver more
> > > > >>>>>>> information so that it can make better choices.  But the driver is
> > > > >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> > > > >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> > > > >>>>>>> could end up being dictated by how the hw and/or fw works.
> > > > >>>>>>
> > > > >>>>>> It will stop being a hint once it has been implemented and used in the
> > > > >>>>>> wild long enough. The kernel userspace regression rules make sure of
> > > > >>>>>> that.
> > > > >>>>>
> > > > >>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
> > > > >>>>> elsewhere in the thread that renaming the thing might be an option.
> > > > >>>>>
> > > > >>>>> So maybe instead of deadline, which is a very strong word, use something
> > > > >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> > > > >>>>> reads clumsy. Just throwing some ideas for a start.
> > > > >>>>
> > > > >>>> You can try, but I fear that if it ever changes behaviour and
> > > > >>>> someone notices that, it's labelled as a kernel regression. I don't
> > > > >>>> think documentation has ever been the authoritative definition of UABI
> > > > >>>> in Linux, it just guides drivers and userspace towards a common
> > > > >>>> understanding and common usage patterns.
> > > > >>>>
> > > > >>>> So even if the UABI contract is not documented (ugh), you need to be
> > > > >>>> prepared to set the UABI contract through kernel implementation.
> > > > >>>
> > > > >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> > > > >>> just a regression. Same way as what nice(2) priorities mean hasn't
> > > > >>> always been the same over the years, I don't think there is a strict
> > > > >>> contract.
> > > > >>>
> > > > >>> Having said that, it may be different with latency sensitive stuff such
> > > > >>> as UIs though since it is very observable and can be very painful to users.
> > > > >>>
> > > > >>>> If you do not document the UABI contract, then different drivers are
> > > > >>>> likely to implement it differently, leading to differing behaviour.
> > > > >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> > > > >>>> documentation guiding it on proper use. If userspace or end users
> > > > >>>> observe different behaviour, that's bad even if it's not a regression.
> > > > >>>>
> > > > >>>> I don't like the situation either, but it is what it is. UABI stability
> > > > >>>> trumps everything regardless of whether it was documented or not.
> > > > >>>>
> > > > >>>> I bet userspace is going to use this as a "make it faster, make it
> > > > >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > > > >>>> library that stamps any and all fences with an expired deadline to
> > > > >>>> just squeeze out a little more through some weird side-effect.
> > > > >>>>
> > > > >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> > > > >>>> like to see UABI documented so I can have a feeling of what it is for
> > > > >>>> and how it was intended to be used. That's all.
> > > > >>>
> > > > >>> We share the same concern. If you read elsewhere in these threads you
> > > > >>> will notice I have been calling this an "arms race". If the ability to
> > > > >>> make yourself go faster does not require additional privilege I also
> > > > >>> worry everyone will do it at which point it becomes pointless. So yes, I
> > > > >>> do share this concern about exposing any of this as an unprivileged uapi.
> > > > >>>
> > > > >>> Is it possible to limit access to only compositors in some sane way?
> > > > >>> Sounds tricky when dma-fence should be disconnected from DRM..
> > > > >>
> > > > >> Maybe it's not that bad in this particular case, because we are talking
> > > > >> only about boosting GPU clocks which benefits everyone (except
> > > > >> battery life) and it does not penalize other programs like e.g.
> > > > >> job priorities do.
> > > > >
> > > > > Apart from efficiency that you mentioned, which does not always favor
> > > > > higher clocks, sometimes thermal budget is also shared between CPU and
> > > > > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > > > > make optimal choices without the full coordination between both schedulers.
> > > > >
> > > > > But that is not even the main point, which is that if everyone sets the
> > > > > immediate deadline then having the deadline API is a bit pointless. For
> > > > > instance there is a reason negative nice needs CAP_SYS_ADMIN.
> > > > >
> > > > > However Rob has also pointed out the existence of uclamp.min via
> > > > > sched_setattr which is unprivileged and can influence frequency
> > > > > selection in the CPU world, so I conceded on that point. If CPU world
> > > > > has accepted it so can we I guess.
> > > > >
> > > > > So IMO we are back to whether we can agree defining it is a hint is good
> > > > > enough, be it via the name of the ioctl/flag itself or via documentation.
> > > > >
> > > > >> Drivers are not going to use the deadline for scheduling priorities,
> > > > >> right? I don't recall seeing any mention of that.
> > > > >>
> > > > >> ...right?
> > > > >
> > > > > I wouldn't have thought it would be beneficial to preclude that, or
> > > > > assume what drivers would do with the info to begin with.
> > > > >
> > > > > For instance in i915 we almost had a deadline based scheduler which was
> > > > > much fairer than the current priority sorted fifo and in an ideal world
> > > > > we would either revive or re-implement that idea. In which case
> > > > > considering the fence deadline would naturally slot in and give true
> > > > > integration with compositor deadlines (not just boost clocks and pray it
> > > > > helps).
> > > > How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> > > > poll(POLLPRI)?
> > >
> > > Implementation of blocking gl/vk/cl APIs, like glFinish() would use
> > > poll(POLLPRI).  It could also set an immediate deadline and then call
> > > poll() without POLLPRI.
> > >
> > > Other than compositors which do frame-pacing I expect the main usage
> > > of either of these is mesa.
> >
> > Okay, so it looks like we already agreed that having a way to bump frequency
> > from userspace is acceptable, both because there are already other ways
> > that you can waste power and because this is already acceptable in the CPU
> > world.
> >
> > But why are we doing this in hidden ways then?
> >
> > Why can't we have this hint per context that is getting executed?
> > (either with a boost-context flag or with some low/med/max or '-1' to '1'
> > value like the latency priority)?
> >
> > I don't like the waitboost because this heuristic fails in some media cases.
> > I don't like the global setting because we might be alternating a top-priority
> > with low-priority cases...
> >
> > So, why not something per context in execution?
> >
>
> It needs to be finer granularity than per-context, because not all
> waits should trigger boosting.  For example, virglrenderer ends up
> with a thread polling unsignaled fences to know when to signal an
> interrupt to the guest virtgpu.  This alone shouldn't trigger
> boosting.  (We also wouldn't want to completely disable boosting for
> virglrenderer.)  Or the usermode driver could be waiting on a fence to
> know when to do some cleanup.
>
> That is not to say that there isn't room for per-context flags to
> disable/enable boosting for fences created by that context, meaning it
> could be an AND operation for i915 if it needs to be.

First of all, I believe that the fence deadline hint is a good idea.
With that being said, I also don't think it is sufficient in a lot of
cases.

The one thing I was alluding to before and that Pekka mentioned as
well is that mutter for example has a problem where we're missing the
deadline consistently because the clocks don't ramp up fast enough and
there is an MR which is just trying to keep the GPU busy to avoid this.

It would be much better if the kernel could make sure the clocks are
all ramped up when we start submitting work. In the compositor we
actually have a lot of information that *should* influence clocks. We
know when we're going to start submitting work and when the deadline
for that work is beforehand. We know which windows are visible, and
which one should have the highest priority. We know when there are
input events which actually matter. We know when the deadline for
client work is.

In the future we also want to make sure clients know beforehand when
they should start their work and when the deadline is, but that's all
very much WIP in both Wayland and Vulkan.

There are two issues:

1. The compositor has no way to communicate any of that information to
the kernel.
2. The only connection to client work the compositor has is a fence to
the last bit of work that must be done before the deadline after a
wl_surface.commit.

So in both cases a fence is just not the right primitive for us. We
need to be able to provide per-context/queue information for work that
will happen in the future and we need a way to refer to a
context/queue generically and over IPC to boost the clocks of the
device that a client is actually using and maybe even give priority.

But like I said, having a per-fence deadline is probably still a good
idea and doesn't conflict with any of the more coarse information.
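For reference, the per-fence semantics discussed upthread (multiple SET_DEADLINE hints on one fence, with the signaller tracking the soonest) could be sketched roughly like this; the names are illustrative, not the actual dma_fence implementation, which keeps this filtering in the fence implementation to avoid growing dma_fence:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Rough sketch of "many deadline hints, track the soonest" from
 * earlier in the thread.  Illustrative only. */
struct fence_deadline {
	bool set;
	uint64_t deadline_ns;	/* soonest deadline seen so far */
};

static void fence_set_deadline(struct fence_deadline *f, uint64_t ns)
{
	/* Later (larger) deadlines are filtered out, so repeated
	 * SET_DEADLINE calls only ever tighten the hint. */
	if (!f->set || ns < f->deadline_ns) {
		f->set = true;
		f->deadline_ns = ns;
	}
}
```

This is why an app and a compositor can both set deadlines on the same fence without needing to coordinate: the stricter one wins.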

>
> BR,
> -R
>


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-27 22:44                                       ` Sebastian Wick
@ 2023-02-27 23:48                                         ` Rob Clark
  2023-02-28 14:30                                           ` Sebastian Wick
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-27 23:48 UTC (permalink / raw)
  To: Sebastian Wick
  Cc: Rodrigo Vivi, Rob Clark, Tvrtko Ursulin, Tvrtko Ursulin,
	Gustavo Padovan, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Sumit Semwal, open list:SYNC FILE FRAMEWORK

On Mon, Feb 27, 2023 at 2:44 PM Sebastian Wick
<sebastian.wick@redhat.com> wrote:
>
> On Mon, Feb 27, 2023 at 11:20 PM Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Mon, Feb 27, 2023 at 1:36 PM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> > >
> > > On Fri, Feb 24, 2023 at 09:59:57AM -0800, Rob Clark wrote:
> > > > On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > > > >
> > > > > On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> > > > > >
> > > > > > On 24/02/2023 11:00, Pekka Paalanen wrote:
> > > > > >> On Fri, 24 Feb 2023 10:50:51 +0000
> > > > > >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > >>
> > > > > >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > > > > >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> > > > > >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > >>>>
> > > > > >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > > > >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> > > > > >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > >>>>>>
> > > > > >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > >>>>>>>>
> > > > > >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > > > >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > >>>>>>>>
> > > > > >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > >>>>>>
> > > > > >>>>>> ...
> > > > > >>>>>>
> > > > > >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> > > > > >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > > > >>>>>>>>>> another timestamp, what should happen?
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> > > > > >>>>>>>>> The fence signaller should track the soonest deadline.
> > > > > >>>>>>>>
> > > > > >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> > > > > >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> > > > > >>>>>>>>
> > > > > >>>>>>>
> > > > > >>>>>>> It is in the end a hint.  It is about giving the driver more
> > > > > >>>>>>> information so that it can make better choices.  But the driver is
> > > > > >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> > > > > >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> > > > > >>>>>>> could end up being dictated by how the hw and/or fw works.
> > > > > >>>>>>
> > > > > >>>>>> It will stop being a hint once it has been implemented and used in the
> > > > > >>>>>> wild long enough. The kernel userspace regression rules make sure of
> > > > > >>>>>> that.
> > > > > >>>>>
> > > > > >>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
> > > > > >>>>> elsewhere in the thread that renaming the thing might be an option.
> > > > > >>>>>
> > > > > >>>>> So maybe instead of deadline, which is a very strong word, use something
> > > > > >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> > > > > >>>>> reads clumsy. Just throwing some ideas for a start.
> > > > > >>>>
> > > > > >>>> You can try, but I fear that if it ever changes behaviour and
> > > > > >>>> someone notices that, it's labelled as a kernel regression. I don't
> > > > > >>>> think documentation has ever been the authoritative definition of UABI
> > > > > >>>> in Linux, it just guides drivers and userspace towards a common
> > > > > >>>> understanding and common usage patterns.
> > > > > >>>>
> > > > > >>>> So even if the UABI contract is not documented (ugh), you need to be
> > > > > >>>> prepared to set the UABI contract through kernel implementation.
> > > > > >>>
> > > > > >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> > > > > >>> just a regression. Same way as what nice(2) priorities mean hasn't
> > > > > >>> always been the same over the years, I don't think there is a strict
> > > > > >>> contract.
> > > > > >>>
> > > > > >>> Having said that, it may be different with latency sensitive stuff such
> > > > > >>> as UIs though since it is very observable and can be very painful to users.
> > > > > >>>
> > > > > >>>> If you do not document the UABI contract, then different drivers are
> > > > > >>>> likely to implement it differently, leading to differing behaviour.
> > > > > >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> > > > > >>>> documentation guiding it on proper use. If userspace or end users
> > > > > >>>> observe different behaviour, that's bad even if it's not a regression.
> > > > > >>>>
> > > > > >>>> I don't like the situation either, but it is what it is. UABI stability
> > > > > >>>> trumps everything regardless of whether it was documented or not.
> > > > > >>>>
> > > > > >>>> I bet userspace is going to use this as a "make it faster, make it
> > > > > >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > > > > >>>> library that stamps any and all fences with an expired deadline to
> > > > > >>>> just squeeze out a little more through some weird side-effect.
> > > > > >>>>
> > > > > >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> > > > > >>>> like to see UABI documented so I can have a feeling of what it is for
> > > > > >>>> and how it was intended to be used. That's all.
> > > > > >>>
> > > > > >>> We share the same concern. If you read elsewhere in these threads you
> > > > > >>> will notice I have been calling this an "arms race". If the ability to
> > > > > >>> make yourself go faster does not require additional privilege I also
> > > > > >>> worry everyone will do it at which point it becomes pointless. So yes, I
> > > > > >>> do share this concern about exposing any of this as an unprivileged uapi.
> > > > > >>>
> > > > > >>> Is it possible to limit access to only compositors in some sane way?
> > > > > >>> Sounds tricky when dma-fence should be disconnected from DRM..
> > > > > >>
> > > > > >> Maybe it's not that bad in this particular case, because we are talking
> > > > > >> only about boosting GPU clocks which benefits everyone (except
> > > > > >> battery life) and it does not penalize other programs like e.g.
> > > > > >> job priorities do.
> > > > > >
> > > > > > Apart from efficiency that you mentioned, which does not always favor
> > > > > > higher clocks, sometimes thermal budget is also shared between CPU and
> > > > > > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > > > > > make optimal choices without the full coordination between both schedulers.
> > > > > >
> > > > > > But that is not even the main point, which is that if everyone sets the
> > > > > > immediate deadline then having the deadline API is a bit pointless. For
> > > > > > instance there is a reason negative nice needs CAP_SYS_ADMIN.
> > > > > >
> > > > > > However Rob has also pointed out the existence of uclamp.min via
> > > > > > sched_setattr which is unprivileged and can influence frequency
> > > > > > selection in the CPU world, so I conceded on that point. If CPU world
> > > > > > has accepted it so can we I guess.
> > > > > >
> > > > > > So IMO we are back to whether we can agree defining it is a hint is good
> > > > > > enough, be it via the name of the ioctl/flag itself or via documentation.
> > > > > >
> > > > > >> Drivers are not going to use the deadline for scheduling priorities,
> > > > > >> right? I don't recall seeing any mention of that.
> > > > > >>
> > > > > >> ...right?
> > > > > >
> > > > > > I wouldn't have thought it would be beneficial to preclude that, or
> > > > > > assume what drivers would do with the info to begin with.
> > > > > >
> > > > > > For instance in i915 we almost had a deadline based scheduler which was
> > > > > > much fairer than the current priority sorted fifo and in an ideal world
> > > > > > we would either revive or re-implement that idea. In which case
> > > > > > considering the fence deadline would naturally slot in and give true
> > > > > > integration with compositor deadlines (not just boost clocks and pray it
> > > > > > helps).
> > > > > How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> > > > > poll(POLLPRI)?
> > > >
> > > > Implementation of blocking gl/vk/cl APIs, like glFinish() would use
> > > > poll(POLLPRI).  It could also set an immediate deadline and then call
> > > > poll() without POLLPRI.
> > > >
> > > > Other than compositors which do frame-pacing I expect the main usage
> > > > of either of these is mesa.
> > >
> > > Okay, so it looks like we already agreed that having a way to bump frequency
> > > from userspace is acceptable, both because there are already other ways
> > > that you can waste power and because this is already acceptable in the CPU
> > > world.
> > >
> > > But why are we doing this in hidden ways then?
> > >
> > > Why can't we have this hint per context that is getting executed?
> > > (either with a boost-context flag or with some low/med/max or '-1' to '1'
> > > value like the latency priority)?
> > >
> > > I don't like the waitboost because this heuristic fails in some media cases.
> > > I don't like the global setting because we might be alternating a top-priority
> > > with low-priority cases...
> > >
> > > So, why not something per context in execution?
> > >
> >
> > It needs to be finer granularity than per-context, because not all
> > waits should trigger boosting.  For example, virglrenderer ends up
> > with a thread polling unsignaled fences to know when to signal an
> > interrupt to the guest virtgpu.  This alone shouldn't trigger
> > boosting.  (We also wouldn't want to completely disable boosting for
> > virglrenderer.)  Or the usermode driver could be waiting on a fence to
> > know when to do some cleanup.
> >
> > That is not to say that there isn't room for per-context flags to
> > disable/enable boosting for fences created by that context, meaning it
> > could be an AND operation for i915 if it needs to be.
>
> First of all, I believe that the fence deadline hint is a good idea.
> With that being said, I also don't think it is sufficient in a lot of
> cases.
>
> The one thing I was alluding to before and that Pekka mentioned as
> well is that mutter for example has a problem where we're missing the
> deadline consistently because the clocks don't ramp up fast enough and
> there is an MR which is just trying to keep the GPU busy to avoid this.

the dynamic double/triple buffer thing?

> It would be much better if the kernel could make sure the clocks are
> all ramped up when we start submitting work. In the compositor we
> actually have a lot of information that *should* influence clocks. We
> know when we're going to start submitting work and when the deadline
> for that work is beforehand. We know which windows are visible, and
> which one should have the highest priority.

This sounds like something orthogonal.. something for cgroups?  Ie.
Android moves visible/foreground apps to a different cgroup to give
them higher priority.  Tvrtko had a patchset to add drm cgroup
support..

> We know when there are
> input events which actually matter.

I see input as a different boost source for the driver.  (Ie. one
boost signal is missing fence deadlines, another is input events,
etc.)

We end up using downstream input-handlers on the kernel side for this.
Partially for the freq boost (but mostly not, UI interactive workloads
like touchscreen scrolling don't generally need high GPU freqs, they
are more memory bandwidth limited if they are limited by anything)..
really the reason here is to get a head-start on the ~2ms that it
takes to power up the GPU if it is suspended.

But this is not quite perfect, since for example some keys should be
handled on key-down but others on key-up.

But again, this is something different from fence deadlines.  I'm
interested in proposals because we do need something for this.  But I
think it is something orthogonal to this series.  For input, we
want the kernel to know long before userspace is ready to submit
rendering.
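One way to picture "input and missed deadlines as independent boost sources" feeding one driver decision (purely an illustrative sketch, not a real driver interface) — note how an input event warrants waking the GPU early (the ~2ms resume head start) without necessarily raising clocks, while a missed deadline does both:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical boost-source bitmask; names are made up for
 * illustration and do not exist in any driver. */
enum boost_source {
	BOOST_DEADLINE_MISS = 1 << 0,	/* fence deadline hint expired */
	BOOST_INPUT_EVENT   = 1 << 1,	/* head start on GPU resume */
};

/* Any source justifies powering the GPU back up early... */
static bool want_gpu_awake(unsigned int src)
{
	return src != 0;
}

/* ...but only a missed deadline justifies a frequency boost, since
 * UI-interactive workloads are usually memory-bandwidth limited. */
static bool want_freq_boost(unsigned int src)
{
	return src & BOOST_DEADLINE_MISS;
}
```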

> We know when the deadline for
> client work is.
>
> In the future we also want to make sure clients know beforehand when
> they should start their work and when the deadline is, but that's all
> very much WIP in both Wayland and Vulkan.
>
> There are two issues:
>
> 1. The compositor has no way to communicate any of that information to
> the kernel.
> 2. The only connection to client work the compositor has is a fence to
> the last bit of work that must be done before the deadline after a
> wl_surface.commit.

If the client isn't using multiple GPUs, a single fence should be
sufficient.  And even if it is, well we still have all the dependency
information on the kernel side.  Ie. drm/sched knows what fences it is
waiting on if it is waiting to schedule the work associated with the
last fence.  It would otherwise require drm/sched to be a bit more
tricky than it is so far in this series.

But I think in the normal dual-GPU case, the app is only dealing with a single GPU?

> So in both cases a fence is just not the right primitive for us. We
> need to be able to provide per-context/queue information for work that
> will happen in the future and we need a way to refer to a
> context/queue generically and over IPC to boost the clocks of the
> device that a client is actually using and maybe even give priority.
>
> But like I said, having a per-fence deadline is probably still a good
> idea and doesn't conflict with any of the more coarse information.

Yeah, I think the thing is you need multiple things, and this is only
one of them ;-)

BR,
-R

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-27 23:48                                         ` Rob Clark
@ 2023-02-28 14:30                                           ` Sebastian Wick
  2023-02-28 22:52                                             ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Sebastian Wick @ 2023-02-28 14:30 UTC (permalink / raw)
  To: Rob Clark
  Cc: Rodrigo Vivi, Rob Clark, Tvrtko Ursulin, Tvrtko Ursulin,
	Gustavo Padovan, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Sumit Semwal, open list:SYNC FILE FRAMEWORK

On Tue, Feb 28, 2023 at 12:48 AM Rob Clark <robdclark@gmail.com> wrote:
>
> On Mon, Feb 27, 2023 at 2:44 PM Sebastian Wick
> <sebastian.wick@redhat.com> wrote:
> >
> > On Mon, Feb 27, 2023 at 11:20 PM Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > On Mon, Feb 27, 2023 at 1:36 PM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> > > >
> > > > On Fri, Feb 24, 2023 at 09:59:57AM -0800, Rob Clark wrote:
> > > > > On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > > > > >
> > > > > > On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> > > > > > >
> > > > > > > On 24/02/2023 11:00, Pekka Paalanen wrote:
> > > > > > >> On Fri, 24 Feb 2023 10:50:51 +0000
> > > > > > >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > > >>
> > > > > > >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > > > > > >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> > > > > > >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > > >>>>
> > > > > > >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > > > > >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> > > > > > >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >>>>>>
> > > > > > >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > >>>>>>>>
> > > > > > >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > > > > >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >>>>>>>>
> > > > > > >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > >>>>>>
> > > > > > >>>>>> ...
> > > > > > >>>>>>
> > > > > > >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> > > > > > >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > > > > >>>>>>>>>> another timestamp, what should happen?
> > > > > > >>>>>>>>>
> > > > > > >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> > > > > > >>>>>>>>> The fence signaller should track the soonest deadline.
> > > > > > >>>>>>>>
> > > > > > >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> > > > > > >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> > > > > > >>>>>>>>
> > > > > > >>>>>>>
> > > > > > >>>>>>> It is in the end a hint.  It is about giving the driver more
> > > > > > >>>>>>> information so that it can make better choices.  But the driver is
> > > > > > >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> > > > > > >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> > > > > > >>>>>>> could end up being dictated by how the hw and/or fw works.
> > > > > > >>>>>>
> > > > > > >>>>>> It will stop being a hint once it has been implemented and used in the
> > > > > > >>>>>> wild long enough. The kernel userspace regression rules make sure of
> > > > > > >>>>>> that.
> > > > > > >>>>>
> > > > > > >>>>> Yeah, tricky and maybe a gray area in this case. I think we eluded
> > > > > > >>>>> elsewhere in the thread that renaming the thing might be an option.
> > > > > > >>>>>
> > > > > > >>>>> So maybe instead of deadline, which is a very strong word, use something
> > > > > > >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> > > > > > >>>>> reads clumsy. Just throwing some ideas for a start.
> > > > > > >>>>
> > > > > > >>>> You can try, but I fear that if it ever changes behaviour and
> > > > > > >>>> someone notices that, it's labelled as a kernel regression. I don't
> > > > > > >>>> think documentation has ever been the authoritative definition of UABI
> > > > > > >>>> in Linux, it just guides drivers and userspace towards a common
> > > > > > >>>> understanding and common usage patterns.
> > > > > > >>>>
> > > > > > >>>> So even if the UABI contract is not documented (ugh), you need to be
> > > > > > >>>> prepared to set the UABI contract through kernel implementation.
> > > > > > >>>
> > > > > > >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> > > > > > >>> just an regression. Same way as what nice(2) priorities mean hasn't
> > > > > > >>> always been the same over the years, I don't think there is a strict
> > > > > > >>> contract.
> > > > > > >>>
> > > > > > >>> Having said that, it may be different with latency sensitive stuff such
> > > > > > >>> as UIs though since it is very observable and can be very painful to users.
> > > > > > >>>
> > > > > > >>>> If you do not document the UABI contract, then different drivers are
> > > > > > >>>> likely to implement it differently, leading to differing behaviour.
> > > > > > >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> > > > > > >>>> documentation guiding it on proper use. If userspace or end users
> > > > > > >>>> observe different behaviour, that's bad even if it's not a regression.
> > > > > > >>>>
> > > > > > >>>> I don't like the situation either, but it is what it is. UABI stability
> > > > > > >>>> trumps everything regardless of whether it was documented or not.
> > > > > > >>>>
> > > > > > >>>> I bet userspace is going to use this as a "make it faster, make it
> > > > > > >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > > > > > >>>> library that stamps any and all fences with an expired deadline to
> > > > > > >>>> just squeeze out a little more through some weird side-effect.
> > > > > > >>>>
> > > > > > >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> > > > > > >>>> like to see UABI documented so I can have a feeling of what it is for
> > > > > > >>>> and how it was intended to be used. That's all.
> > > > > > >>>
> > > > > > >>> We share the same concern. If you read elsewhere in these threads you
> > > > > > >>> will notice I have been calling this an "arms race". If the ability to
> > > > > > >>> make yourself go faster does not required additional privilege I also
> > > > > > >>> worry everyone will do it at which point it becomes pointless. So yes, I
> > > > > > >>> do share this concern about exposing any of this as an unprivileged uapi.
> > > > > > >>>
> > > > > > >>> Is it possible to limit access to only compositors in some sane way?
> > > > > > >>> Sounds tricky when dma-fence should be disconnected from DRM..
> > > > > > >>
> > > > > > >> Maybe it's not that bad in this particular case, because we are talking
> > > > > > >> only about boosting GPU clocks which benefits everyone (except
> > > > > > >> battery life) and it does not penalize other programs like e.g.
> > > > > > >> job priorities do.
> > > > > > >
> > > > > > > Apart from efficiency that you mentioned, which does not always favor
> > > > > > > higher clocks, sometimes thermal budget is also shared between CPU and
> > > > > > > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > > > > > > make optimal choices without the full coordination between both schedulers.
> > > > > > >
> > > > > > > But that is even not the main point, which is that if everyone sets the
> > > > > > > immediate deadline then having the deadline API is a bit pointless. For
> > > > > > > instance there is a reason negative nice needs CAP_SYS_ADMIN.
> > > > > > >
> > > > > > > However Rob has also pointed out the existence of uclamp.min via
> > > > > > > sched_setattr which is unprivileged and can influence frequency
> > > > > > > selection in the CPU world, so I conceded on that point. If CPU world
> > > > > > > has accepted it so can we I guess.
> > > > > > >
> > > > > > > So IMO we are back to whether we can agree defining it is a hint is good
> > > > > > > enough, be in via the name of the ioctl/flag itself or via documentation.
> > > > > > >
> > > > > > >> Drivers are not going to use the deadline for scheduling priorities,
> > > > > > >> right? I don't recall seeing any mention of that.
> > > > > > >>
> > > > > > >> ...right?
> > > > > > >
> > > > > > > I wouldn't have thought it would be beneficial to preclude that, or
> > > > > > > assume what drivers would do with the info to begin with.
> > > > > > >
> > > > > > > For instance in i915 we almost had a deadline based scheduler which was
> > > > > > > much fairer than the current priority sorted fifo and in an ideal world
> > > > > > > we would either revive or re-implement that idea. In which case
> > > > > > > considering the fence deadline would naturally slot in and give true
> > > > > > > integration with compositor deadlines (not just boost clocks and pray it
> > > > > > > helps).
> > > > > > How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> > > > > > poll(POLLPRI)?
> > > > >
> > > > > Implementation of blocking gl/vk/cl APIs, like glFinish() would use
> > > > > poll(POLLPRI).  It could also set an immediate deadline and then call
> > > > > poll() without POLLPRI.
> > > > >
> > > > > Other than compositors which do frame-pacing I expect the main usage
> > > > > of either of these is mesa.
> > > >
> > > > Okay, so it looks like we already agreed that having a way to bump frequency
> > > > from userspace is acceptable. either because there are already other ways
> > > > that you can waste power and because this already acceptable in the CPU
> > > > world.
> > > >
> > > > But why we are doing this in hidden ways then?
> > > >
> > > > Why can't we have this hint per context that is getting executed?
> > > > (either with a boost-context flag or with some low/med/max or '-1' to '1'
> > > > value like the latency priority)?
> > > >
> > > > I don't like the waitboost because this heurisitic fails in some media cases.
> > > > I don't like the global setting because we might be alternating a top-priority
> > > > with low-priority cases...
> > > >
> > > > So, why not something per context in execution?
> > > >
> > >
> > > It needs to be finer granularity than per-context, because not all
> > > waits should trigger boosting.  For example, virglrenderer ends up
> > > with a thread polling unsignaled fences to know when to signal an
> > > interrupt to the guest virtgpu.  This alone shouldn't trigger
> > > boosting.  (We also wouldn't want to completely disable boosting for
> > > virglrenderer.)  Or the usermode driver could be waiting on a fence to
> > > know when to do some cleanup.
> > >
> > > That is not to say that there isn't room for per-context flags to
> > > disable/enable boosting for fences created by that context, meaning it
> > > could be an AND operation for i915 if it needs to be.
> >
> > First of all, I believe that the fence deadline hint is a good idea.
> > With that being said, I also don't think it is sufficient in a lot of
> > cases.
> >
> > The one thing I was alluding to before and that Pekka mentioned as
> > well is that mutter for example has a problem where we're missing the
> > deadline consistently because the clocks don't ramp up fast enough and
> > there is a MR which is just trying to keep the GPU busy to avoid this.
>
> the dynamic double/triple buffer thing?

Yes

> > It would be much better if the kernel could make sure the clocks are
> > all ramped up when we start submitting work. In the compositor we
> > actually have a lot of information that *should* influence clocks. We
> > know when we're going to start submitting work and when the deadline
> > for that work is beforehand. We know which windows are visible, and
> > which one should have the highest priority.
>
> This sounds like something orthogonal.. something for cgroups?  Ie.
> android moves visible/foreground apps to a different cgroup to given
> them higher priority.  Tvrtko had a patchset to add drm cgroup
> support..

For the priority stuff, yes, probably. The visibility information on
the other hand could be used to determine if we want to ramp up the
GPU in the first place.

> > We know when there are
> > input events which actually matter.
>
> This I see input as a different boost source for the driver.  (Ie. one
> boost signal is missing fence deadlines, another is input events,
> etc.)
>
> We end up using downstream input-handlers on the kernel side for this.
> Partially for the freq boost (but mostly not, UI interactive workloads
> like touchscreen scrolling don't generally need high GPU freqs, they
> are more memory bandwidth limited if they are limited by anything)..
> really the reason here is to get a head-start on the ~2ms that it
> takes to power up the GPU if it is suspended.

Right, but one of the main points I want to make here is that we could
get the head-start not only in response to input events but also for
the GPU work the compositor submits and, in the future, for GPU work
that clients submit. Except that we don't have a way to tell the
kernel about it.

> But this is not quite perfect, since for example some keys should be
> handled on key-down but others on key-up.
>
> But again, this is something different from fence deadlines.  I'm
> interested in proposals because we do need something for this.  But I
> think it is something is orthogonal to this series.  For input, we
> want the kernel to know long before userspace is ready to submit
> rendering.

We can do that in the compositor! Input events are really not
something you should care about in the kernel. Input itself is also
not the only indication of incoming animated content. Some network
packets arriving could equally well result in the same situation.

> > We know when the deadline for
> > client work is.
> >
> > In the future we also want to make sure clients know beforehand when
> > they should start their work and when the deadline is but that's all
> > very much WIP in both wayland and vulkan.
> >
> > There are two issues:
> >
> > 1. The compositor has no way to communicate any of that information to
> > the kernel.
> > 2. The only connection to client work the compositor has is a fence to
> > the last bit of work that must be done before the deadline after a
> > wl_surface.commit.
>
> If the client isn't using multiple GPUs, a single fence should be
> sufficient.  And even if it is, well we still have all the dependency
> information on the kernel side.  Ie. drm/sched knows what fences it is
> waiting on if it is waiting to schedule the work associated with the
> last fence.  It would otherwise require drm/sched to be a bit more
> tricky than it is so far in this series.
>
> But I think the normal dual-gpu case, the app is only dealing with a single GPU?

We generally don't know which GPU a client uses, though. We know which
one we're using and tell the client that the buffer should be
compatible with it, but that's the extent of the information we have
until we get a fence, and that fence usually reaches the compositor
pretty late. Way too late for the compositor to tell the kernel to
ramp up the GPU and still have an impact.

It also seems like we're moving away from tracking execution
dependencies with fences as we switch to user-mode fences.

> > So in both cases a fence is just not the right primitive for us. We
> > need to be able to provide per-context/queue information for work that
> > will happen in the future and we need a way to refer to a
> > context/queue generically and over IPC to boost the clocks of the
> > device that a client is actually using and maybe even give priority.
> >
> > But like I said, having a per-fence deadline is probably still a good
> > idea and doesn't conflict with any of the more coarse information.
>
> Yeah, I think the thing is you need multiple things, and this is only
> one of them ;-)
>
> BR,
> -R
>



* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-28 14:30                                           ` Sebastian Wick
@ 2023-02-28 22:52                                             ` Rob Clark
  2023-03-01 15:31                                               ` Sebastian Wick
  0 siblings, 1 reply; 93+ messages in thread
From: Rob Clark @ 2023-02-28 22:52 UTC (permalink / raw)
  To: Sebastian Wick
  Cc: Rodrigo Vivi, Rob Clark, Tvrtko Ursulin, Tvrtko Ursulin,
	Gustavo Padovan, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Sumit Semwal, open list:SYNC FILE FRAMEWORK

On Tue, Feb 28, 2023 at 6:30 AM Sebastian Wick
<sebastian.wick@redhat.com> wrote:
>
> On Tue, Feb 28, 2023 at 12:48 AM Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Mon, Feb 27, 2023 at 2:44 PM Sebastian Wick
> > <sebastian.wick@redhat.com> wrote:
> > >
> > > On Mon, Feb 27, 2023 at 11:20 PM Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > On Mon, Feb 27, 2023 at 1:36 PM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> > > > >
> > > > > On Fri, Feb 24, 2023 at 09:59:57AM -0800, Rob Clark wrote:
> > > > > > On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > > > > > >
> > > > > > > On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> > > > > > > >
> > > > > > > > On 24/02/2023 11:00, Pekka Paalanen wrote:
> > > > > > > >> On Fri, 24 Feb 2023 10:50:51 +0000
> > > > > > > >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > > > >>
> > > > > > > >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > > > > > > >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> > > > > > > >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > > > >>>>
> > > > > > > >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > > > > > >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> > > > > > > >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > >>>>>>
> > > > > > > >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > > > > > >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > > >>>>>>
> > > > > > > >>>>>> ...
> > > > > > > >>>>>>
> > > > > > > >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> > > > > > > >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > > > > > >>>>>>>>>> another timestamp, what should happen?
> > > > > > > >>>>>>>>>
> > > > > > > >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> > > > > > > >>>>>>>>> The fence signaller should track the soonest deadline.
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> > > > > > > >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>
> > > > > > > >>>>>>> It is in the end a hint.  It is about giving the driver more
> > > > > > > >>>>>>> information so that it can make better choices.  But the driver is
> > > > > > > >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> > > > > > > >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> > > > > > > >>>>>>> could end up being dictated by how the hw and/or fw works.
> > > > > > > >>>>>>
> > > > > > > >>>>>> It will stop being a hint once it has been implemented and used in the
> > > > > > > >>>>>> wild long enough. The kernel userspace regression rules make sure of
> > > > > > > >>>>>> that.
> > > > > > > >>>>>
> > > > > > > >>>>> Yeah, tricky and maybe a gray area in this case. I think we eluded
> > > > > > > >>>>> elsewhere in the thread that renaming the thing might be an option.
> > > > > > > >>>>>
> > > > > > > >>>>> So maybe instead of deadline, which is a very strong word, use something
> > > > > > > >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> > > > > > > >>>>> reads clumsy. Just throwing some ideas for a start.
> > > > > > > >>>>
> > > > > > > >>>> You can try, but I fear that if it ever changes behaviour and
> > > > > > > >>>> someone notices that, it's labelled as a kernel regression. I don't
> > > > > > > >>>> think documentation has ever been the authoritative definition of UABI
> > > > > > > >>>> in Linux, it just guides drivers and userspace towards a common
> > > > > > > >>>> understanding and common usage patterns.
> > > > > > > >>>>
> > > > > > > >>>> So even if the UABI contract is not documented (ugh), you need to be
> > > > > > > >>>> prepared to set the UABI contract through kernel implementation.
> > > > > > > >>>
> > > > > > > >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> > > > > > > >>> just an regression. Same way as what nice(2) priorities mean hasn't
> > > > > > > >>> always been the same over the years, I don't think there is a strict
> > > > > > > >>> contract.
> > > > > > > >>>
> > > > > > > >>> Having said that, it may be different with latency sensitive stuff such
> > > > > > > >>> as UIs though since it is very observable and can be very painful to users.
> > > > > > > >>>
> > > > > > > >>>> If you do not document the UABI contract, then different drivers are
> > > > > > > >>>> likely to implement it differently, leading to differing behaviour.
> > > > > > > >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> > > > > > > >>>> documentation guiding it on proper use. If userspace or end users
> > > > > > > >>>> observe different behaviour, that's bad even if it's not a regression.
> > > > > > > >>>>
> > > > > > > >>>> I don't like the situation either, but it is what it is. UABI stability
> > > > > > > >>>> trumps everything regardless of whether it was documented or not.
> > > > > > > >>>>
> > > > > > > >>>> I bet userspace is going to use this as a "make it faster, make it
> > > > > > > >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > > > > > > >>>> library that stamps any and all fences with an expired deadline to
> > > > > > > >>>> just squeeze out a little more through some weird side-effect.
> > > > > > > >>>>
> > > > > > > >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> > > > > > > >>>> like to see UABI documented so I can have a feeling of what it is for
> > > > > > > >>>> and how it was intended to be used. That's all.
> > > > > > > >>>
> > > > > > > >>> We share the same concern. If you read elsewhere in these threads you
> > > > > > > >>> will notice I have been calling this an "arms race". If the ability to
> > > > > > > >>> make yourself go faster does not required additional privilege I also
> > > > > > > >>> worry everyone will do it at which point it becomes pointless. So yes, I
> > > > > > > >>> do share this concern about exposing any of this as an unprivileged uapi.
> > > > > > > >>>
> > > > > > > >>> Is it possible to limit access to only compositors in some sane way?
> > > > > > > >>> Sounds tricky when dma-fence should be disconnected from DRM..
> > > > > > > >>
> > > > > > > >> Maybe it's not that bad in this particular case, because we are talking
> > > > > > > >> only about boosting GPU clocks which benefits everyone (except
> > > > > > > >> battery life) and it does not penalize other programs like e.g.
> > > > > > > >> job priorities do.
> > > > > > > >
> > > > > > > > Apart from efficiency that you mentioned, which does not always favor
> > > > > > > > higher clocks, sometimes thermal budget is also shared between CPU and
> > > > > > > > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > > > > > > > make optimal choices without the full coordination between both schedulers.
> > > > > > > >
> > > > > > > > But that is even not the main point, which is that if everyone sets the
> > > > > > > > immediate deadline then having the deadline API is a bit pointless. For
> > > > > > > > instance there is a reason negative nice needs CAP_SYS_ADMIN.
> > > > > > > >
> > > > > > > > However Rob has also pointed out the existence of uclamp.min via
> > > > > > > > sched_setattr which is unprivileged and can influence frequency
> > > > > > > > selection in the CPU world, so I conceded on that point. If CPU world
> > > > > > > > has accepted it so can we I guess.
> > > > > > > >
> > > > > > > > So IMO we are back to whether we can agree defining it is a hint is good
> > > > > > > > enough, be in via the name of the ioctl/flag itself or via documentation.
> > > > > > > >
> > > > > > > >> Drivers are not going to use the deadline for scheduling priorities,
> > > > > > > >> right? I don't recall seeing any mention of that.
> > > > > > > >>
> > > > > > > >> ...right?
> > > > > > > >
> > > > > > > > I wouldn't have thought it would be beneficial to preclude that, or
> > > > > > > > assume what drivers would do with the info to begin with.
> > > > > > > >
> > > > > > > > For instance in i915 we almost had a deadline based scheduler which was
> > > > > > > > much fairer than the current priority sorted fifo and in an ideal world
> > > > > > > > we would either revive or re-implement that idea. In which case
> > > > > > > > considering the fence deadline would naturally slot in and give true
> > > > > > > > integration with compositor deadlines (not just boost clocks and pray it
> > > > > > > > helps).
> > > > > > > How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> > > > > > > poll(POLLPRI)?
> > > > > >
> > > > > > Implementation of blocking gl/vk/cl APIs, like glFinish() would use
> > > > > > poll(POLLPRI).  It could also set an immediate deadline and then call
> > > > > > poll() without POLLPRI.
> > > > > >
> > > > > > Other than compositors which do frame-pacing I expect the main usage
> > > > > > of either of these is mesa.
> > > > >
> > > > > Okay, so it looks like we already agreed that having a way to bump frequency
> > > > > from userspace is acceptable. either because there are already other ways
> > > > > that you can waste power and because this already acceptable in the CPU
> > > > > world.
> > > > >
> > > > > But why we are doing this in hidden ways then?
> > > > >
> > > > > Why can't we have this hint per context that is getting executed?
> > > > > (either with a boost-context flag or with some low/med/max or '-1' to '1'
> > > > > value like the latency priority)?
> > > > >
> > > > > I don't like the waitboost because this heurisitic fails in some media cases.
> > > > > I don't like the global setting because we might be alternating a top-priority
> > > > > with low-priority cases...
> > > > >
> > > > > So, why not something per context in execution?
> > > > >
> > > >
> > > > It needs to be finer granularity than per-context, because not all
> > > > waits should trigger boosting.  For example, virglrenderer ends up
> > > > with a thread polling unsignaled fences to know when to signal an
> > > > interrupt to the guest virtgpu.  This alone shouldn't trigger
> > > > boosting.  (We also wouldn't want to completely disable boosting for
> > > > virglrenderer.)  Or the usermode driver could be waiting on a fence to
> > > > know when to do some cleanup.
> > > >
> > > > That is not to say that there isn't room for per-context flags to
> > > > disable/enable boosting for fences created by that context, meaning it
> > > > could be an AND operation for i915 if it needs to be.
> > >
> > > First of all, I believe that the fence deadline hint is a good idea.
> > > With that being said, I also don't think it is sufficient in a lot of
> > > cases.
> > >
> > > The one thing I was alluding to before and that Pekka mentioned as
> > > well is that mutter for example has a problem where we're missing the
> > > deadline consistently because the clocks don't ramp up fast enough and
> > > there is a MR which is just trying to keep the GPU busy to avoid this.
> >
> > the dynamic double/triple buffer thing?
>
> Yes
>
> > > It would be much better if the kernel could make sure the clocks are
> > > all ramped up when we start submitting work. In the compositor we
> > > actually have a lot of information that *should* influence clocks. We
> > > know when we're going to start submitting work and when the deadline
> > > for that work is beforehand. We know which windows are visible, and
> > > which one should have the highest priority.
> >
> > This sounds like something orthogonal.. something for cgroups?  Ie.
> > android moves visible/foreground apps to a different cgroup to given
> > them higher priority.  Tvrtko had a patchset to add drm cgroup
> > support..
>
> For the priority stuff, yes, probably. The visibility information on
> the other hand could be used to determine if we want to ramp up the
> GPU in the first place.

Right, but I think that we could have multiple cgroup-based knobs: one
that adjusts priority and one that limits/disables deadline-based
boost?  This way the compositor could set up different policies for
visible vs hidden apps, influencing both how much time they get on the
GPU and whether they get boost.

> > > We know when there are
> > > input events which actually matter.
> >
> > This I see input as a different boost source for the driver.  (Ie. one
> > boost signal is missing fence deadlines, another is input events,
> > etc.)
> >
> > We end up using downstream input-handlers on the kernel side for this.
> > Partially for the freq boost (but mostly not, UI interactive workloads
> > like touchscreen scrolling don't generally need high GPU freqs, they
> > are more memory bandwidth limited if they are limited by anything)..
> > really the reason here is to get a head-start on the ~2ms that it
> > takes to power up the GPU if it is suspended.
>
> Right, but one of my main points I want to make here is that we could
> get the head-start not only in response to input events but also for
> the GPU work the compositor submits and in the future also to GPU work
> that clients commit. Except that we don't have a way to tell the
> kernel about it.
>
> > But this is not quite perfect, since for example some keys should be
> > handled on key-down but others on key-up.
> >
> > But again, this is something different from fence deadlines.  I'm
> > interested in proposals because we do need something for this.  But I
> > think it is something is orthogonal to this series.  For input, we
> > want the kernel to know long before userspace is ready to submit
> > rendering.
>
> We can do that in the compositor! Input events are really not
> something you should care about in the kernel. Input itself is also
> not the only indication of incoming animated content. Some network
> packets arriving could equally well result in the same situation.

We do use input boost not just for GPU freq, but also for CPU freq.
It seems like triggering it from the kernel could happen somewhat
sooner than from userspace.  (But that's just speculation.)

I'm not sure network packets count.  Or at least, compared to a touch
interface the user won't perceive the lag nearly as much; nothing is
as noticeable as whether or not the UI tracks their fingertips.

> > > We know when the deadline for
> > > client work is.
> > >
> > > In the future we also want to make sure clients know beforehand when
> > > they should start their work and when the deadline is but that's all
> > > very much WIP in both wayland and vulkan.
> > >
> > > There are two issues:
> > >
> > > 1. The compositor has no way to communicate any of that information to
> > > the kernel.
> > > 2. The only connection to client work the compositor has is a fence to
> > > the last bit of work that must be done before the deadline after a
> > > wl_surface.commit.
> >
> > If the client isn't using multiple GPUs, a single fence should be
> > sufficient.  And even if it is, well we still have all the dependency
> > information on the kernel side.  Ie. drm/sched knows what fences it is
> > waiting on if it is waiting to schedule the work associated with the
> > last fence.  It would otherwise require drm/sched to be a bit more
> > tricky than it is so far in this series.
> >
> > But I think in the normal dual-gpu case, the app is only dealing with a single GPU?
>
> We generally don't know which GPU a client uses though. We know which
> one we're using and tell the client that the buffer should be
> compatible with it but that's the extent of information we have, until
> we get a fence but that fence usually gets to the compositor pretty
> late. Way too late for the compositor to tell the kernel to ramp up
> the GPU and still have an impact.

Are you sure about that?  I'd have expected the app to hand off
fence+buffer to the compositor pretty quickly after rendering is
submitted.  This is what I've seen elsewhere.

> It also seems like we're moving away from tracking execution
> dependencies with fences when we're switching to user mode fences.

I suppose there are two cases..

1) dependent fences all from the same device.. no prob, the right
driver gets the signal that it needs to clk up, and figures out the
rest on its own

2) dependent fences from different devices.. I suppose if device B is
waiting for a fence from device A before it can make forward progress,
why not express this as a deadline hint on A's fence?  (But by the
time you have this problem, you are probably not really caring about
power consumption, so meh..)
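Case 2 can be sketched roughly as forwarding B's deadline to A's fence, with the fence keeping only the soonest hint it has seen (the "soonest deadline wins" expectation discussed earlier in the thread).  A purely illustrative userspace-style sketch, not the actual dma_fence implementation; all names here are made up:

```c
#include <stdint.h>

/* Illustrative only -- not the real dma_fence API.  A fence remembers
 * just the soonest deadline hint it has received; later (larger)
 * deadlines are filtered out, as the cover letter describes. */
struct toy_fence {
	uint64_t deadline_ns;	/* 0 means "no hint set yet" */
};

static void toy_fence_set_deadline(struct toy_fence *f, uint64_t deadline_ns)
{
	if (f->deadline_ns == 0 || deadline_ns < f->deadline_ns)
		f->deadline_ns = deadline_ns;
}

/* Case 2 above: device B cannot make progress until A's fence signals,
 * so B's own deadline is expressed as a hint on A's fence. */
static void toy_propagate_deadline(struct toy_fence *a_fence,
				   uint64_t b_deadline_ns)
{
	toy_fence_set_deadline(a_fence, b_deadline_ns);
}
```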

BR,
-R

> > > So in both cases a fence is just not the right primitive for us. We
> > > need to be able to provide per-context/queue information for work that
> > > will happen in the future and we need a way to refer to a
> > > context/queue generically and over IPC to boost the clocks of the
> > > device that a client is actually using and maybe even give priority.
> > >
> > > But like I said, having a per-fence deadline is probably still a good
> > > idea and doesn't conflict with any of the more coarse information.
> >
> > Yeah, I think the thing is you need multiple things, and this is only
> > one of them ;-)
> >
> > BR,
> > -R
> >
>

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-28 22:52                                             ` Rob Clark
@ 2023-03-01 15:31                                               ` Sebastian Wick
  2023-03-01 16:02                                                 ` Rob Clark
  0 siblings, 1 reply; 93+ messages in thread
From: Sebastian Wick @ 2023-03-01 15:31 UTC (permalink / raw)
  To: Rob Clark
  Cc: Rodrigo Vivi, Rob Clark, Tvrtko Ursulin, Tvrtko Ursulin,
	Gustavo Padovan, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Sumit Semwal, open list:SYNC FILE FRAMEWORK

On Tue, Feb 28, 2023 at 11:52 PM Rob Clark <robdclark@gmail.com> wrote:
>
> On Tue, Feb 28, 2023 at 6:30 AM Sebastian Wick
> <sebastian.wick@redhat.com> wrote:
> >
> > On Tue, Feb 28, 2023 at 12:48 AM Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > On Mon, Feb 27, 2023 at 2:44 PM Sebastian Wick
> > > <sebastian.wick@redhat.com> wrote:
> > > >
> > > > On Mon, Feb 27, 2023 at 11:20 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > On Mon, Feb 27, 2023 at 1:36 PM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> > > > > >
> > > > > > On Fri, Feb 24, 2023 at 09:59:57AM -0800, Rob Clark wrote:
> > > > > > > On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > > > > > > >
> > > > > > > > On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> > > > > > > > >
> > > > > > > > > On 24/02/2023 11:00, Pekka Paalanen wrote:
> > > > > > > > >> On Fri, 24 Feb 2023 10:50:51 +0000
> > > > > > > > >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > > > > >>
> > > > > > > > >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > > > > > > > >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> > > > > > > > >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > > > > >>>>
> > > > > > > > >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > > > > > > >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> > > > > > > > >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > >>>>>>
> > > > > > > > >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > > > >>>>>>>>
> > > > > > > > >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > > > > > > >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > >>>>>>>>
> > > > > > > > >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > > > >>>>>>
> > > > > > > > >>>>>> ...
> > > > > > > > >>>>>>
> > > > > > > > >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> > > > > > > > >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > > > > > > >>>>>>>>>> another timestamp, what should happen?
> > > > > > > > >>>>>>>>>
> > > > > > > > >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> > > > > > > > >>>>>>>>> The fence signaller should track the soonest deadline.
> > > > > > > > >>>>>>>>
> > > > > > > > >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> > > > > > > > >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> > > > > > > > >>>>>>>>
> > > > > > > > >>>>>>>
> > > > > > > > >>>>>>> It is in the end a hint.  It is about giving the driver more
> > > > > > > > >>>>>>> information so that it can make better choices.  But the driver is
> > > > > > > > >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> > > > > > > > >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> > > > > > > > >>>>>>> could end up being dictated by how the hw and/or fw works.
> > > > > > > > >>>>>>
> > > > > > > > >>>>>> It will stop being a hint once it has been implemented and used in the
> > > > > > > > >>>>>> wild long enough. The kernel userspace regression rules make sure of
> > > > > > > > >>>>>> that.
> > > > > > > > >>>>>
> > > > > > > > >>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
> > > > > > > > >>>>> elsewhere in the thread that renaming the thing might be an option.
> > > > > > > > >>>>>
> > > > > > > > >>>>> So maybe instead of deadline, which is a very strong word, use something
> > > > > > > > >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> > > > > > > > >>>>> reads clumsy. Just throwing some ideas for a start.
> > > > > > > > >>>>
> > > > > > > > >>>> You can try, but I fear that if it ever changes behaviour and
> > > > > > > > >>>> someone notices that, it's labelled as a kernel regression. I don't
> > > > > > > > >>>> think documentation has ever been the authoritative definition of UABI
> > > > > > > > >>>> in Linux, it just guides drivers and userspace towards a common
> > > > > > > > >>>> understanding and common usage patterns.
> > > > > > > > >>>>
> > > > > > > > >>>> So even if the UABI contract is not documented (ugh), you need to be
> > > > > > > > >>>> prepared to set the UABI contract through kernel implementation.
> > > > > > > > >>>
> > > > > > > > >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> > > > > > > > >>> just a regression. Same way as what nice(2) priorities mean hasn't
> > > > > > > > >>> always been the same over the years, I don't think there is a strict
> > > > > > > > >>> contract.
> > > > > > > > >>>
> > > > > > > > >>> Having said that, it may be different with latency sensitive stuff such
> > > > > > > > >>> as UIs though since it is very observable and can be very painful to users.
> > > > > > > > >>>
> > > > > > > > >>>> If you do not document the UABI contract, then different drivers are
> > > > > > > > >>>> likely to implement it differently, leading to differing behaviour.
> > > > > > > > >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> > > > > > > > >>>> documentation guiding it on proper use. If userspace or end users
> > > > > > > > >>>> observe different behaviour, that's bad even if it's not a regression.
> > > > > > > > >>>>
> > > > > > > > >>>> I don't like the situation either, but it is what it is. UABI stability
> > > > > > > > >>>> trumps everything regardless of whether it was documented or not.
> > > > > > > > >>>>
> > > > > > > > >>>> I bet userspace is going to use this as a "make it faster, make it
> > > > > > > > >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > > > > > > > >>>> library that stamps any and all fences with an expired deadline to
> > > > > > > > >>>> just squeeze out a little more through some weird side-effect.
> > > > > > > > >>>>
> > > > > > > > >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> > > > > > > > >>>> like to see UABI documented so I can have a feeling of what it is for
> > > > > > > > >>>> and how it was intended to be used. That's all.
> > > > > > > > >>>
> > > > > > > > >>> We share the same concern. If you read elsewhere in these threads you
> > > > > > > > >>> will notice I have been calling this an "arms race". If the ability to
> > > > > > > > >>> make yourself go faster does not require additional privilege I also
> > > > > > > > >>> worry everyone will do it at which point it becomes pointless. So yes, I
> > > > > > > > >>> do share this concern about exposing any of this as an unprivileged uapi.
> > > > > > > > >>>
> > > > > > > > >>> Is it possible to limit access to only compositors in some sane way?
> > > > > > > > >>> Sounds tricky when dma-fence should be disconnected from DRM..
> > > > > > > > >>
> > > > > > > > >> Maybe it's not that bad in this particular case, because we are talking
> > > > > > > > >> only about boosting GPU clocks which benefits everyone (except
> > > > > > > > >> battery life) and it does not penalize other programs like e.g.
> > > > > > > > >> job priorities do.
> > > > > > > > >
> > > > > > > > > Apart from efficiency that you mentioned, which does not always favor
> > > > > > > > > higher clocks, sometimes thermal budget is also shared between CPU and
> > > > > > > > > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > > > > > > > > make optimal choices without the full coordination between both schedulers.
> > > > > > > > >
> > > > > > > > > But that is even not the main point, which is that if everyone sets the
> > > > > > > > > immediate deadline then having the deadline API is a bit pointless. For
> > > > > > > > > instance there is a reason negative nice needs CAP_SYS_ADMIN.
> > > > > > > > >
> > > > > > > > > However Rob has also pointed out the existence of uclamp.min via
> > > > > > > > > sched_setattr which is unprivileged and can influence frequency
> > > > > > > > > selection in the CPU world, so I conceded on that point. If CPU world
> > > > > > > > > has accepted it so can we I guess.
> > > > > > > > >
> > > > > > > > > So IMO we are back to whether we can agree that defining it as a hint is good
> > > > > > > > > enough, be it via the name of the ioctl/flag itself or via documentation.
> > > > > > > > >
> > > > > > > > >> Drivers are not going to use the deadline for scheduling priorities,
> > > > > > > > >> right? I don't recall seeing any mention of that.
> > > > > > > > >>
> > > > > > > > >> ...right?
> > > > > > > > >
> > > > > > > > > I wouldn't have thought it would be beneficial to preclude that, or
> > > > > > > > > assume what drivers would do with the info to begin with.
> > > > > > > > >
> > > > > > > > > For instance in i915 we almost had a deadline based scheduler which was
> > > > > > > > > much fairer than the current priority sorted fifo and in an ideal world
> > > > > > > > > we would either revive or re-implement that idea. In which case
> > > > > > > > > considering the fence deadline would naturally slot in and give true
> > > > > > > > > integration with compositor deadlines (not just boost clocks and pray it
> > > > > > > > > helps).
> > > > > > > > How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> > > > > > > > poll(POLLPRI)?
> > > > > > >
> > > > > > > Implementation of blocking gl/vk/cl APIs, like glFinish() would use
> > > > > > > poll(POLLPRI).  It could also set an immediate deadline and then call
> > > > > > > poll() without POLLPRI.
> > > > > > >
> > > > > > > Other than compositors which do frame-pacing I expect the main usage
> > > > > > > of either of these is mesa.
> > > > > >
> > > > > > Okay, so it looks like we already agreed that having a way to bump frequency
> > > > > > from userspace is acceptable, either because there are already other ways
> > > > > > that you can waste power or because this is already acceptable in the CPU
> > > > > > world.
> > > > > >
> > > > > > But why are we doing this in hidden ways then?
> > > > > >
> > > > > > Why can't we have this hint per context that is getting executed?
> > > > > > (either with a boost-context flag or with some low/med/max or '-1' to '1'
> > > > > > value like the latency priority)?
> > > > > >
> > > > > > I don't like the waitboost because this heuristic fails in some media cases.
> > > > > > I don't like the global setting because we might be alternating a top-priority
> > > > > > with low-priority cases...
> > > > > >
> > > > > > So, why not something per context in execution?
> > > > > >
> > > > >
> > > > > It needs to be finer granularity than per-context, because not all
> > > > > waits should trigger boosting.  For example, virglrenderer ends up
> > > > > with a thread polling unsignaled fences to know when to signal an
> > > > > interrupt to the guest virtgpu.  This alone shouldn't trigger
> > > > > boosting.  (We also wouldn't want to completely disable boosting for
> > > > > virglrenderer.)  Or the usermode driver could be waiting on a fence to
> > > > > know when to do some cleanup.
> > > > >
> > > > > That is not to say that there isn't room for per-context flags to
> > > > > disable/enable boosting for fences created by that context, meaning it
> > > > > could be an AND operation for i915 if it needs to be.
> > > >
> > > > First of all, I believe that the fence deadline hint is a good idea.
> > > > With that being said, I also don't think it is sufficient in a lot of
> > > > cases.
> > > >
> > > > The one thing I was alluding to before and that Pekka mentioned as
> > > > well is that mutter for example has a problem where we're missing the
> > > > deadline consistently because the clocks don't ramp up fast enough and
> > > > there is an MR which is just trying to keep the GPU busy to avoid this.
> > >
> > > the dynamic double/triple buffer thing?
> >
> > Yes
> >
> > > > It would be much better if the kernel could make sure the clocks are
> > > > all ramped up when we start submitting work. In the compositor we
> > > > actually have a lot of information that *should* influence clocks. We
> > > > know when we're going to start submitting work and when the deadline
> > > > for that work is beforehand. We know which windows are visible, and
> > > > which one should have the highest priority.
> > >
> > > This sounds like something orthogonal.. something for cgroups?  Ie.
> > > android moves visible/foreground apps to a different cgroup to given
> > > them higher priority.  Tvrtko had a patchset to add drm cgroup
> > > support..
> >
> > For the priority stuff, yes, probably. The visibility information on
> > the other hand could be used to determine if we want to ramp up the
> > GPU in the first place.
>
> Right, but I think that we could have multiple cgroup based knobs, one
> that adjusts priority and one that limits/disables deadline based
> boost?  This way the compositor could setup different policies for
> visible vs hidden apps influencing both how much time they get on the
> GPU and boost.

I'm not sure a negative control really makes sense. There are
probably timing-sensitive use cases where the result doesn't show up
on the local screen, and penalizing them when they are unfocused or
hidden might not be the best idea.

> > > > We know when there are
> > > > input events which actually matter.
> > >
> > > This I see input as a different boost source for the driver.  (Ie. one
> > > boost signal is missing fence deadlines, another is input events,
> > > etc.)
> > >
> > > We end up using downstream input-handlers on the kernel side for this.
> > > Partially for the freq boost (but mostly not, UI interactive workloads
> > > like touchscreen scrolling don't generally need high GPU freqs, they
> > > are more memory bandwidth limited if they are limited by anything)..
> > > really the reason here is to get a head-start on the ~2ms that it
> > > takes to power up the GPU if it is suspended.
> >
> > Right, but one of my main points I want to make here is that we could
> > get the head-start not only in response to input events but also for
> > the GPU work the compositor submits and in the future also to GPU work
> > that clients commit. Except that we don't have a way to tell the
> > kernel about it.
> >
> > > But this is not quite perfect, since for example some keys should be
> > > handled on key-down but others on key-up.
> > >
> > > But again, this is something different from fence deadlines.  I'm
> > > interested in proposals because we do need something for this.  But I
> > > think it is something orthogonal to this series.  For input, we
> > > want the kernel to know long before userspace is ready to submit
> > > rendering.
> >
> > We can do that in the compositor! Input events are really not
> > something you should care about in the kernel. Input itself is also
> > not the only indication of incoming animated content. Some network
> > packets arriving could equally well result in the same situation.
>
> We do use input boost not just for GPU freq, but also for CPU freq.
> It seems like triggering it from the kernel could happen somewhat
> sooner than doing it from userspace.  (But just speculation.)

Technically it has to be sooner, but I doubt it makes any difference.
Getting a lot of false positives, on the other hand, does make a
difference.

> I'm not sure network packets count.  At least compared to a touch
> interface, the user won't perceive the lag nearly as much as whether
> or not the UI tracks their fingertips.

Sure, stutter in interactive cases is especially awful but stutter is
awful in general. If we can solve it in all cases we should do so.

> > > > We know when the deadline for
> > > > client work is.
> > > >
> > > > In the future we also want to make sure clients know beforehand when
> > > > they should start their work and when the deadline is but that's all
> > > > very much WIP in both wayland and vulkan.
> > > >
> > > > There are two issues:
> > > >
> > > > 1. The compositor has no way to communicate any of that information to
> > > > the kernel.
> > > > 2. The only connection to client work the compositor has is a fence to
> > > > the last bit of work that must be done before the deadline after a
> > > > wl_surface.commit.
> > >
> > > If the client isn't using multiple GPUs, a single fence should be
> > > sufficient.  And even if it is, well we still have all the dependency
> > > information on the kernel side.  Ie. drm/sched knows what fences it is
> > > waiting on if it is waiting to schedule the work associated with the
> > > last fence.  It would otherwise require drm/sched to be a bit more
> > > tricky than it is so far in this series.
> > >
> > > But I think in the normal dual-gpu case, the app is only dealing with a single GPU?
> >
> > We generally don't know which GPU a client uses though. We know which
> > one we're using and tell the client that the buffer should be
> > compatible with it but that's the extent of information we have, until
> > we get a fence but that fence usually gets to the compositor pretty
> > late. Way too late for the compositor to tell the kernel to ramp up
> > the GPU and still have an impact.
>
> Are you sure about that?  I'd have expected the app to hand off
> fence+buffer to the compositor pretty quickly after rendering is
> submitted.  This is what I've seen elsewhere.

After rendering is submitted it is already too late if the GPU takes
2ms to wake up. And if there is no rendering submitted there is no
fence and if there is no fence it has no deadline. If we want to solve
that we also need some kind of 'work will be submitted to the queue
starting from this time' hint.
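As a rough sketch of the timing behind such a hint: given the next vblank and the expected duration of the GPU work, the submit time is known in advance, and power-up has to begin about one wake latency (~2ms in the example above) before that.  Names and constants below are made up for illustration only:

```c
#include <stdint.h>

/* Illustrative sketch of the timing behind a "work will be submitted
 * at time T" hint.  To have the GPU awake and clocked up when the
 * compositor actually submits, power-up must begin roughly one wake
 * latency before the submit time.  Nothing here is real kernel API. */
#define TOY_GPU_WAKE_NS	(2u * 1000 * 1000)	/* ~2 ms resume latency */

static uint64_t toy_rampup_start_ns(uint64_t vblank_deadline_ns,
				    uint64_t expected_gpu_work_ns)
{
	/* latest point at which the work can be submitted and still
	 * finish before the vblank deadline */
	uint64_t submit_ns = vblank_deadline_ns - expected_gpu_work_ns;

	/* start waking the GPU one wake latency earlier */
	return submit_ns - TOY_GPU_WAKE_NS;
}
```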

> > It also seems like we're moving away from tracking execution
> > dependencies with fences when we're switching to user mode fences.
>
> I suppose there are two cases..
>
> 1) dependent fences all from the same device.. no prob, the right
> driver gets the signal that it needs to clk up, and figures out the
> rest on its own

AFAIU, with user-mode fences it's impossible for the kernel to figure
out what work depends on them, and they might never signal. The whole
deadline-on-fences thing breaks down with user-mode fences.

> 2) dependent fences from different devices.. I suppose if device B is
> waiting for a fence from device A before it can make forward progress,
> why not express this as a deadline hint on A's fence?  (But by the
> time you have this problem, you are probably not really caring about
> power consumption, so meh..)
>
> BR,
> -R
>
> > > > So in both cases a fence is just not the right primitive for us. We
> > > > need to be able to provide per-context/queue information for work that
> > > > will happen in the future and we need a way to refer to a
> > > > context/queue generically and over IPC to boost the clocks of the
> > > > device that a client is actually using and maybe even give priority.
> > > >
> > > > But like I said, having a per-fence deadline is probably still a good
> > > > idea and doesn't conflict with any of the more coarse information.
> > >
> > > Yeah, I think the thing is you need multiple things, and this is only
> > > one of them ;-)
> > >
> > > BR,
> > > -R
> > >
> >
>


^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-02-27 22:20                                     ` Rob Clark
  2023-02-27 22:44                                       ` Sebastian Wick
@ 2023-03-01 15:45                                       ` Rodrigo Vivi
  1 sibling, 0 replies; 93+ messages in thread
From: Rodrigo Vivi @ 2023-03-01 15:45 UTC (permalink / raw)
  To: Rob Clark
  Cc: Luben Tuikov, Tvrtko Ursulin, Pekka Paalanen, Rob Clark,
	Tvrtko Ursulin, Gustavo Padovan, Michel Dänzer, open list,
	dri-devel, Sumit Semwal,
	moderated list:DMA BUFFER SHARING FRAMEWORK,
	Christian König, Alex Deucher, freedreno,
	Christian König, open list:SYNC FILE FRAMEWORK

On Mon, Feb 27, 2023 at 02:20:04PM -0800, Rob Clark wrote:
> On Mon, Feb 27, 2023 at 1:36 PM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> >
> > On Fri, Feb 24, 2023 at 09:59:57AM -0800, Rob Clark wrote:
> > > On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > > >
> > > > On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> > > > >
> > > > > On 24/02/2023 11:00, Pekka Paalanen wrote:
> > > > >> On Fri, 24 Feb 2023 10:50:51 +0000
> > > > >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > >>
> > > > >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > > > >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> > > > >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > >>>>
> > > > >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > > >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> > > > >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > >>>>>>
> > > > >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > >>>>>>>>
> > > > >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > > >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > >>>>>>>>
> > > > >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > >>>>>>
> > > > >>>>>> ...
> > > > >>>>>>
> > > > >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> > > > >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > > >>>>>>>>>> another timestamp, what should happen?
> > > > >>>>>>>>>
> > > > >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> > > > >>>>>>>>> The fence signaller should track the soonest deadline.
> > > > >>>>>>>>
> > > > >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> > > > >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> > > > >>>>>>>>
> > > > >>>>>>>
> > > > >>>>>>> It is in the end a hint.  It is about giving the driver more
> > > > >>>>>>> information so that it can make better choices.  But the driver is
> > > > >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> > > > >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> > > > >>>>>>> could end up being dictated by how the hw and/or fw works.
> > > > >>>>>>
> > > > >>>>>> It will stop being a hint once it has been implemented and used in the
> > > > >>>>>> wild long enough. The kernel userspace regression rules make sure of
> > > > >>>>>> that.
> > > > >>>>>
> > > > >>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
> > > > >>>>> elsewhere in the thread that renaming the thing might be an option.
> > > > >>>>>
> > > > >>>>> So maybe instead of deadline, which is a very strong word, use something
> > > > >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> > > > >>>>> reads clumsy. Just throwing some ideas for a start.
> > > > >>>>
> > > > >>>> You can try, but I fear that if it ever changes behaviour and
> > > > >>>> someone notices that, it's labelled as a kernel regression. I don't
> > > > >>>> think documentation has ever been the authoritative definition of UABI
> > > > >>>> in Linux, it just guides drivers and userspace towards a common
> > > > >>>> understanding and common usage patterns.
> > > > >>>>
> > > > >>>> So even if the UABI contract is not documented (ugh), you need to be
> > > > >>>> prepared to set the UABI contract through kernel implementation.
> > > > >>>
> > > > >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> > > > >>> just a regression. Same way as what nice(2) priorities mean hasn't
> > > > >>> always been the same over the years, I don't think there is a strict
> > > > >>> contract.
> > > > >>>
> > > > >>> Having said that, it may be different with latency sensitive stuff such
> > > > >>> as UIs though since it is very observable and can be very painful to users.
> > > > >>>
> > > > >>>> If you do not document the UABI contract, then different drivers are
> > > > >>>> likely to implement it differently, leading to differing behaviour.
> > > > >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> > > > >>>> documentation guiding it on proper use. If userspace or end users
> > > > >>>> observe different behaviour, that's bad even if it's not a regression.
> > > > >>>>
> > > > >>>> I don't like the situation either, but it is what it is. UABI stability
> > > > >>>> trumps everything regardless of whether it was documented or not.
> > > > >>>>
> > > > >>>> I bet userspace is going to use this as a "make it faster, make it
> > > > >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > > > >>>> library that stamps any and all fences with an expired deadline to
> > > > >>>> just squeeze out a little more through some weird side-effect.
> > > > >>>>
> > > > >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> > > > >>>> like to see UABI documented so I can have a feeling of what it is for
> > > > >>>> and how it was intended to be used. That's all.
> > > > >>>
> > > > >>> We share the same concern. If you read elsewhere in these threads you
> > > > >>> will notice I have been calling this an "arms race". If the ability to
> > > > >>> make yourself go faster does not require additional privilege I also
> > > > >>> worry everyone will do it at which point it becomes pointless. So yes, I
> > > > >>> do share this concern about exposing any of this as an unprivileged uapi.
> > > > >>>
> > > > >>> Is it possible to limit access to only compositors in some sane way?
> > > > >>> Sounds tricky when dma-fence should be disconnected from DRM..
> > > > >>
> > > > >> Maybe it's not that bad in this particular case, because we are talking
> > > > >> only about boosting GPU clocks which benefits everyone (except
> > > > >> battery life) and it does not penalize other programs like e.g.
> > > > >> job priorities do.
> > > > >
> > > > > Apart from efficiency that you mentioned, which does not always favor
> > > > > higher clocks, sometimes thermal budget is also shared between CPU and
> > > > > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > > > > make optimal choices without the full coordination between both schedulers.
> > > > >
> > > > > But that is even not the main point, which is that if everyone sets the
> > > > > immediate deadline then having the deadline API is a bit pointless. For
> > > > > instance there is a reason negative nice needs CAP_SYS_ADMIN.
> > > > >
> > > > > However Rob has also pointed out the existence of uclamp.min via
> > > > > sched_setattr which is unprivileged and can influence frequency
> > > > > selection in the CPU world, so I conceded on that point. If CPU world
> > > > > has accepted it so can we I guess.
> > > > >
> > > > > So IMO we are back to whether we can agree that defining it as a hint is good
> > > > > enough, be it via the name of the ioctl/flag itself or via documentation.
> > > > >
> > > > >> Drivers are not going to use the deadline for scheduling priorities,
> > > > >> right? I don't recall seeing any mention of that.
> > > > >>
> > > > >> ...right?
> > > > >
> > > > > I wouldn't have thought it would be beneficial to preclude that, or
> > > > > assume what drivers would do with the info to begin with.
> > > > >
> > > > > For instance in i915 we almost had a deadline based scheduler which was
> > > > > much fairer than the current priority sorted fifo and in an ideal world
> > > > > we would either revive or re-implement that idea. In which case
> > > > > considering the fence deadline would naturally slot in and give true
> > > > > integration with compositor deadlines (not just boost clocks and pray it
> > > > > helps).
> > > > How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> > > > poll(POLLPRI)?
> > >
> > > Implementation of blocking gl/vk/cl APIs, like glFinish() would use
> > > poll(POLLPRI).  It could also set an immediate deadline and then call
> > > poll() without POLLPRI.
> > >
> > > Other than compositors which do frame-pacing I expect the main usage
> > > of either of these is mesa.
> >
> > Okay, so it looks like we already agreed that having a way to bump frequency
> > from userspace is acceptable, either because there are already other ways
> > that you can waste power or because this is already acceptable in the CPU
> > world.
> >
> > But why are we doing this in hidden ways then?
> >
> > Why can't we have this hint per context that is getting executed?
> > (either with a boost-context flag or with some low/med/max or '-1' to '1'
> > value like the latency priority)?
> >
> > I don't like the waitboost because this heuristic fails in some media cases.
> > I don't like the global setting because we might be alternating a top-priority
> > with low-priority cases...
> >
> > So, why not something per context in execution?
> >
> 
> It needs to be finer granularity than per-context, because not all
> waits should trigger boosting.  For example, virglrenderer ends up
> with a thread polling unsignaled fences to know when to signal an
> interrupt to the guest virtgpu.  This alone shouldn't trigger
> boosting.  (We also wouldn't want to completely disable boosting for
> virglrenderer.)  Or the usermode driver could be waiting on a fence to
> know when to do some cleanup.
> 
> That is not to say that there isn't room for per-context flags to
> disable/enable boosting for fences created by that context, meaning it
> could be an AND operation for i915 if it needs to be.

Right. It can be both ways I agree.

> 
> BR,
> -R

^ permalink raw reply	[flat|nested] 93+ messages in thread

* Re: [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI
  2023-03-01 15:31                                               ` Sebastian Wick
@ 2023-03-01 16:02                                                 ` Rob Clark
  0 siblings, 0 replies; 93+ messages in thread
From: Rob Clark @ 2023-03-01 16:02 UTC (permalink / raw)
  To: Sebastian Wick
  Cc: Rodrigo Vivi, Rob Clark, Tvrtko Ursulin, Tvrtko Ursulin,
	Gustavo Padovan, Michel Dänzer, open list, dri-devel,
	Christian König,
	moderated list:DMA BUFFER SHARING FRAMEWORK, Pekka Paalanen,
	Luben Tuikov, Christian König, Alex Deucher, freedreno,
	Sumit Semwal, open list:SYNC FILE FRAMEWORK

On Wed, Mar 1, 2023 at 7:31 AM Sebastian Wick <sebastian.wick@redhat.com> wrote:
>
> On Tue, Feb 28, 2023 at 11:52 PM Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Tue, Feb 28, 2023 at 6:30 AM Sebastian Wick
> > <sebastian.wick@redhat.com> wrote:
> > >
> > > On Tue, Feb 28, 2023 at 12:48 AM Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > On Mon, Feb 27, 2023 at 2:44 PM Sebastian Wick
> > > > <sebastian.wick@redhat.com> wrote:
> > > > >
> > > > > On Mon, Feb 27, 2023 at 11:20 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > >
> > > > > > On Mon, Feb 27, 2023 at 1:36 PM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> > > > > > >
> > > > > > > On Fri, Feb 24, 2023 at 09:59:57AM -0800, Rob Clark wrote:
> > > > > > > > On Fri, Feb 24, 2023 at 7:27 AM Luben Tuikov <luben.tuikov@amd.com> wrote:
> > > > > > > > >
> > > > > > > > > On 2023-02-24 06:37, Tvrtko Ursulin wrote:
> > > > > > > > > >
> > > > > > > > > > On 24/02/2023 11:00, Pekka Paalanen wrote:
> > > > > > > > > >> On Fri, 24 Feb 2023 10:50:51 +0000
> > > > > > > > > >> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > > > > > >>
> > > > > > > > > >>> On 24/02/2023 10:24, Pekka Paalanen wrote:
> > > > > > > > > >>>> On Fri, 24 Feb 2023 09:41:46 +0000
> > > > > > > > > >>>> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > > > > > > > > >>>>
> > > > > > > > > >>>>> On 24/02/2023 09:26, Pekka Paalanen wrote:
> > > > > > > > > >>>>>> On Thu, 23 Feb 2023 10:51:48 -0800
> > > > > > > > > >>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > > >>>>>>
> > > > > > > > > >>>>>>> On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > > > > >>>>>>>>
> > > > > > > > > >>>>>>>> On Wed, 22 Feb 2023 07:37:26 -0800
> > > > > > > > > >>>>>>>> Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > > >>>>>>>>
> > > > > > > > > >>>>>>>>> On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> > > > > > > > > >>>>>>
> > > > > > > > > >>>>>> ...
> > > > > > > > > >>>>>>
> > > > > > > > > >>>>>>>>>> On another matter, if the application uses SET_DEADLINE with one
> > > > > > > > > >>>>>>>>>> timestamp, and the compositor uses SET_DEADLINE on the same thing with
> > > > > > > > > >>>>>>>>>> another timestamp, what should happen?
> > > > > > > > > >>>>>>>>>
> > > > > > > > > >>>>>>>>> The expectation is that many deadline hints can be set on a fence.
> > > > > > > > > >>>>>>>>> The fence signaller should track the soonest deadline.
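A minimal sketch of that "track the soonest deadline" behaviour on the signaller side (the struct and function names here are made up for illustration, not the series' API):

```c
#include <stdbool.h>
#include <stdint.h>

/* hypothetical per-fence state on the signaller side */
struct example_fence {
	bool     has_deadline;
	uint64_t deadline_ns;
};

/* record a deadline hint, keeping only the soonest one; later hints
 * are filtered out in the implementation so the core fence struct
 * doesn't need to grow */
static void example_fence_set_deadline(struct example_fence *f,
				       uint64_t deadline_ns)
{
	if (f->has_deadline && deadline_ns >= f->deadline_ns)
		return;	/* a sooner hint is already recorded */
	f->has_deadline = true;
	f->deadline_ns = deadline_ns;
	/* a real driver would also re-evaluate clocks/scheduling here */
}
```

This matches the v2 changelog note about moving the filtering of later deadlines into the fence implementation to avoid increasing the size of dma_fence.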
> > > > > > > > > >>>>>>>>
> > > > > > > > > >>>>>>>> You need to document that as UAPI, since it is observable to userspace.
> > > > > > > > > >>>>>>>> It would be bad if drivers or subsystems would differ in behaviour.
> > > > > > > > > >>>>>>>>
> > > > > > > > > >>>>>>>
> > > > > > > > > >>>>>>> It is in the end a hint.  It is about giving the driver more
> > > > > > > > > >>>>>>> information so that it can make better choices.  But the driver is
> > > > > > > > > >>>>>>> even free to ignore it.  So maybe "expectation" is too strong of a
> > > > > > > > > >>>>>>> word.  Rather, any other behavior doesn't really make sense.  But it
> > > > > > > > > >>>>>>> could end up being dictated by how the hw and/or fw works.
> > > > > > > > > >>>>>>
> > > > > > > > > >>>>>> It will stop being a hint once it has been implemented and used in the
> > > > > > > > > >>>>>> wild long enough. The kernel userspace regression rules make sure of
> > > > > > > > > >>>>>> that.
> > > > > > > > > >>>>>
> > > > > > > > > >>>>> Yeah, tricky and maybe a gray area in this case. I think we alluded
> > > > > > > > > >>>>> elsewhere in the thread that renaming the thing might be an option.
> > > > > > > > > >>>>>
> > > > > > > > > >>>>> So maybe instead of deadline, which is a very strong word, use something
> > > > > > > > > >>>>> along the lines of "present time hint", or "signalled time hint"? Maybe
> > > > > > > > > >>>>> reads clumsy. Just throwing some ideas for a start.
> > > > > > > > > >>>>
> > > > > > > > > >>>> You can try, but I fear that if it ever changes behaviour and
> > > > > > > > > >>>> someone notices that, it's labelled as a kernel regression. I don't
> > > > > > > > > >>>> think documentation has ever been the authoritative definition of UABI
> > > > > > > > > >>>> in Linux, it just guides drivers and userspace towards a common
> > > > > > > > > >>>> understanding and common usage patterns.
> > > > > > > > > >>>>
> > > > > > > > > >>>> So even if the UABI contract is not documented (ugh), you need to be
> > > > > > > > > >>>> prepared to set the UABI contract through kernel implementation.
> > > > > > > > > >>>
> > > > > > > > > >>> To be the devil's advocate it probably wouldn't be an ABI regression but
> > > > > > > > > >>> just a regression. In the same way that what nice(2) priorities mean hasn't
> > > > > > > > > >>> always been the same over the years, I don't think there is a strict
> > > > > > > > > >>> contract.
> > > > > > > > > >>>
> > > > > > > > > >>> Having said that, it may be different with latency sensitive stuff such
> > > > > > > > > >>> as UIs though since it is very observable and can be very painful to users.
> > > > > > > > > >>>
> > > > > > > > > >>>> If you do not document the UABI contract, then different drivers are
> > > > > > > > > >>>> likely to implement it differently, leading to differing behaviour.
> > > > > > > > > >>>> Also userspace will invent wild ways to abuse the UABI if there is no
> > > > > > > > > >>>> documentation guiding it on proper use. If userspace or end users
> > > > > > > > > >>>> observe different behaviour, that's bad even if it's not a regression.
> > > > > > > > > >>>>
> > > > > > > > > >>>> I don't like the situation either, but it is what it is. UABI stability
> > > > > > > > > >>>> trumps everything regardless of whether it was documented or not.
> > > > > > > > > >>>>
> > > > > > > > > >>>> I bet userspace is going to use this as a "make it faster, make it
> > > > > > > > > >>>> hotter" button. I would not be surprised if someone wrote a LD_PRELOAD
> > > > > > > > > >>>> library that stamps any and all fences with an expired deadline to
> > > > > > > > > >>>> just squeeze out a little more through some weird side-effect.
> > > > > > > > > >>>>
> > > > > > > > > >>>> Well, that's hopefully overboard in scaring, but in the end, I would
> > > > > > > > > >>>> like to see UABI documented so I can have a feeling of what it is for
> > > > > > > > > >>>> and how it was intended to be used. That's all.
> > > > > > > > > >>>
> > > > > > > > > >>> We share the same concern. If you read elsewhere in these threads you
> > > > > > > > > >>> will notice I have been calling this an "arms race". If the ability to
> > > > > > > > > >>> make yourself go faster does not require additional privilege, I also
> > > > > > > > > >>> worry everyone will do it at which point it becomes pointless. So yes, I
> > > > > > > > > >>> do share this concern about exposing any of this as an unprivileged uapi.
> > > > > > > > > >>>
> > > > > > > > > >>> Is it possible to limit access to only compositors in some sane way?
> > > > > > > > > >>> Sounds tricky when dma-fence should be disconnected from DRM..
> > > > > > > > > >>
> > > > > > > > > >> Maybe it's not that bad in this particular case, because we are talking
> > > > > > > > > >> only about boosting GPU clocks which benefits everyone (except
> > > > > > > > > >> battery life) and it does not penalize other programs like e.g.
> > > > > > > > > >> job priorities do.
> > > > > > > > > >
> > > > > > > > > > Apart from efficiency that you mentioned, which does not always favor
> > > > > > > > > > higher clocks, sometimes thermal budget is also shared between CPU and
> > > > > > > > > > GPU. So more GPU clocks can mean fewer CPU clocks. It's really hard to
> > > > > > > > > > make optimal choices without the full coordination between both schedulers.
> > > > > > > > > >
> > > > > > > > > > But that is not even the main point, which is that if everyone sets the
> > > > > > > > > > immediate deadline then having the deadline API is a bit pointless. For
> > > > > > > > > > instance there is a reason negative nice needs CAP_SYS_NICE.
> > > > > > > > > >
> > > > > > > > > > However Rob has also pointed out the existence of uclamp.min via
> > > > > > > > > > sched_setattr which is unprivileged and can influence frequency
> > > > > > > > > > selection in the CPU world, so I conceded on that point. If CPU world
> > > > > > > > > > has accepted it so can we I guess.
> > > > > > > > > >
> > > > > > > > > > So IMO we are back to whether we can agree defining it is a hint is good
> > > > > > > > > > enough, be it via the name of the ioctl/flag itself or via documentation.
> > > > > > > > > >
> > > > > > > > > >> Drivers are not going to use the deadline for scheduling priorities,
> > > > > > > > > >> right? I don't recall seeing any mention of that.
> > > > > > > > > >>
> > > > > > > > > >> ...right?
> > > > > > > > > >
> > > > > > > > > > I wouldn't have thought it would be beneficial to preclude that, or
> > > > > > > > > > assume what drivers would do with the info to begin with.
> > > > > > > > > >
> > > > > > > > > > For instance in i915 we almost had a deadline based scheduler which was
> > > > > > > > > > much fairer than the current priority sorted fifo and in an ideal world
> > > > > > > > > > we would either revive or re-implement that idea. In which case
> > > > > > > > > > considering the fence deadline would naturally slot in and give true
> > > > > > > > > > integration with compositor deadlines (not just boost clocks and pray it
> > > > > > > > > > helps).
> > > > > > > > > How is user-space to decide whether to use ioctl(SET_DEADLINE) or
> > > > > > > > > poll(POLLPRI)?
> > > > > > > >
> > > > > > > > Implementation of blocking gl/vk/cl APIs, like glFinish() would use
> > > > > > > > poll(POLLPRI).  It could also set an immediate deadline and then call
> > > > > > > > poll() without POLLPRI.
> > > > > > > >
> > > > > > > > Other than compositors which do frame-pacing I expect the main usage
> > > > > > > > of either of these is mesa.
> > > > > > >
> > > > > > > Okay, so it looks like we already agreed that having a way to bump frequency
> > > > > > > from userspace is acceptable, either because there are already other ways
> > > > > > > to waste power or because this is already acceptable in the CPU world.
> > > > > > >
> > > > > > > But why are we doing this in hidden ways then?
> > > > > > >
> > > > > > > Why can't we have this hint per context that is getting executed?
> > > > > > > (either with a boost-context flag or with some low/med/max or '-1' to '1'
> > > > > > > value like the latency priority)?
> > > > > > >
> > > > > > > I don't like the waitboost because this heuristic fails in some media cases.
> > > > > > > I don't like the global setting because we might be alternating between
> > > > > > > top-priority and low-priority cases...
> > > > > > >
> > > > > > > So, why not something per context in execution?
> > > > > > >
> > > > > >
> > > > > > It needs to be finer granularity than per-context, because not all
> > > > > > waits should trigger boosting.  For example, virglrenderer ends up
> > > > > > with a thread polling unsignaled fences to know when to signal an
> > > > > > interrupt to the guest virtgpu.  This alone shouldn't trigger
> > > > > > boosting.  (We also wouldn't want to completely disable boosting for
> > > > > > virglrenderer.)  Or the usermode driver could be waiting on a fence to
> > > > > > know when to do some cleanup.
> > > > > >
> > > > > > That is not to say that there isn't room for per-context flags to
> > > > > > disable/enable boosting for fences created by that context, meaning it
> > > > > > could be an AND operation for i915 if it needs to be.
> > > > >
> > > > > First of all, I believe that the fence deadline hint is a good idea.
> > > > > With that being said, I also don't think it is sufficient in a lot of
> > > > > cases.
> > > > >
> > > > > The one thing I was alluding to before and that Pekka mentioned as
> > > > > well is that mutter for example has a problem where we're missing the
> > > > > deadline consistently because the clocks don't ramp up fast enough and
> > > > > there is a MR which is just trying to keep the GPU busy to avoid this.
> > > >
> > > > the dynamic double/triple buffer thing?
> > >
> > > Yes
> > >
> > > > > It would be much better if the kernel could make sure the clocks are
> > > > > all ramped up when we start submitting work. In the compositor we
> > > > > actually have a lot of information that *should* influence clocks. We
> > > > > know when we're going to start submitting work and when the deadline
> > > > > for that work is beforehand. We know which windows are visible, and
> > > > > which one should have the highest priority.
> > > >
> > > > This sounds like something orthogonal.. something for cgroups?  Ie.
> > > > android moves visible/foreground apps to a different cgroup to give
> > > > them higher priority.  Tvrtko had a patchset to add drm cgroup
> > > > support..
> > >
> > > For the priority stuff, yes, probably. The visibility information on
> > > the other hand could be used to determine if we want to ramp up the
> > > GPU in the first place.
> >
> > Right, but I think that we could have multiple cgroup based knobs, one
> > that adjusts priority and one that limits/disables deadline based
> > boost?  This way the compositor could setup different policies for
> > visible vs hidden apps influencing both how much time they get on the
> > GPU and boost.
>
> I'm not sure if a negative control really makes sense. There are
> probably timing-sensitive use cases where the result doesn't show up
> on the local screen, and penalizing them when they are not focused or
> are hidden might also not be the best idea.

Policy decisions are up to the OS/distro.. we can only provide
controls that can be tuned, it is up to someone else to choose how to
use those controls, such as whether it wants to differentiate between
visible and non-visible apps.  That is why cgroups and a negative
control are a good solution for controlling what the driver does with
the deadline boost hint.
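The kind of AND operation mentioned earlier in the thread could be sketched like this, with a cgroup knob acting as the negative control (all names here are hypothetical):

```c
#include <stdbool.h>

/* sketch: driver-side decision combining a per-cgroup policy knob
 * with the per-fence deadline hint (names hypothetical) */
struct job_ctx {
	bool cgroup_boost_allowed;  /* negative control set by OS policy */
	bool fence_has_deadline;    /* a deadline hint arrived for this job */
	bool deadline_in_past;      /* we are already late */
};

static bool should_boost(const struct job_ctx *c)
{
	/* boost only when policy allows it AND the hint asks for it */
	return c->cgroup_boost_allowed &&
	       c->fence_has_deadline && c->deadline_in_past;
}
```

The driver stays the sole place that decides *how* to boost; the cgroup knob only vetoes whether deadline hints may trigger a boost at all.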

> > > > > We know when there are
> > > > > input events which actually matter.
> > > >
> > > > This I see as a different boost source for the driver.  (Ie. one
> > > > boost signal is missing fence deadlines, another is input events,
> > > > etc.)
> > > >
> > > > We end up using downstream input-handlers on the kernel side for this.
> > > > Partially for the freq boost (but mostly not, UI interactive workloads
> > > > like touchscreen scrolling don't generally need high GPU freqs, they
> > > > are more memory bandwidth limited if they are limited by anything)..
> > > > really the reason here is to get a head-start on the ~2ms that it
> > > > takes to power up the GPU if it is suspended.
> > >
> > > Right, but one of my main points I want to make here is that we could
> > > get the head-start not only in response to input events but also for
> > > the GPU work the compositor submits and in the future also to GPU work
> > > that clients commit. Except that we don't have a way to tell the
> > > kernel about it.
> > >
> > > > But this is not quite perfect, since for example some keys should be
> > > > handled on key-down but others on key-up.
> > > >
> > > > But again, this is something different from fence deadlines.  I'm
> > > > interested in proposals because we do need something for this.  But I
> > > > think it is something orthogonal to this series.  For input, we
> > > > want the kernel to know long before userspace is ready to submit
> > > > rendering.
> > >
> > > We can do that in the compositor! Input events are really not
> > > something you should care about in the kernel. Input itself is also
> > > not the only indication of incoming animated content. Some network
> > > packets arriving could equally well result in the same situation.
> >
> > We do use input boost not just for GPU freq, but also for CPU freq.
> > It seems like triggering it from the kernel could happen somewhat
> > sooner than userspace.  (But just speculation.)
>
> Technically it has to be sooner but I doubt it makes any difference.
> Getting a lot of false-positives on the other hand does make a
> difference.

Regardless of whether it is done in kernel or userspace, you do want a
cooldown period so you are not constantly boosting.

Generally false-positives aren't much of a problem.. ie. touch or
mouse events are not ambiguous.  The exception is key events, because
ideally you don't want to hard-code in the kernel which keys are
modifier keys that should be handled on key-up instead of key-down.
But I think if we allowed userspace to configure this somehow, it
would be perfectly reasonable (and optimal) to handle input boost in
the kernel.
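A cooldown like that could look as simple as the following (hypothetical helper, not from any driver):

```c
#include <stdbool.h>
#include <stdint.h>

/* minimal sketch of a cooldown-gated boost trigger: re-boosting is
 * suppressed until cooldown_ns has elapsed since the previous boost */
struct input_boost {
	uint64_t last_boost_ns;
	uint64_t cooldown_ns;
};

static bool input_boost_should_fire(struct input_boost *b, uint64_t now_ns)
{
	if (b->last_boost_ns && now_ns - b->last_boost_ns < b->cooldown_ns)
		return false;	/* still cooling down, don't re-boost */
	b->last_boost_ns = now_ns;
	return true;
}
```

The same gate works whether the boost decision lives in the kernel input path or in a userspace compositor; only the event source differs.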

> > I'm not sure network packets count.  Or at least compared to a touch
> > interface, the user won't perceive the lag nearly as much, compared to
> > whether or not the UI tracks their fingertips.
>
> Sure, stutter in interactive cases is especially awful but stutter is
> awful in general. If we can solve it in all cases we should do so.

Sure, but the cases where I've seen a need for input boost are all
about the transition from idle->active.  Ie. the display panel has
gone to self-refresh, the gpu is suspended, and cpu and memory are
downclocked.  And suddenly the user decides they want to start
scrolling.  We need some help to adapt to the new state where we are
all of a sudden busy doing something.  This is where input-boost comes
in.  If there is no direct connection to user input, then there is no
lag for the user to perceive.

> > > > > We know when the deadline for
> > > > > client work is.
> > > > >
> > > > > In the future we also want to make sure clients know beforehand when
> > > > > they should start their work and when the deadline is but that's all
> > > > > very much WIP in both wayland and vulkan.
> > > > >
> > > > > There are two issues:
> > > > >
> > > > > 1. The compositor has no way to communicate any of that information to
> > > > > the kernel.
> > > > > 2. The only connection to client work the compositor has is a fence to
> > > > > the last bit of work that must be done before the deadline after a
> > > > > wl_surface.commit.
> > > >
> > > > If the client isn't using multiple GPUs, a single fence should be
> > > > sufficient.  And even if it is, well we still have all the dependency
> > > > information on the kernel side.  Ie. drm/sched knows what fences it is
> > > > waiting on if it is waiting to schedule the work associated with the
> > > > last fence.  It would otherwise require drm/sched to be a bit more
> > > > tricky than it is so far in this series.
> > > >
> > > > But I think the normal dual-gpu case, the app is only dealing with a single GPU?
> > >
> > > We generally don't know which GPU a client uses though. We know which
> > > one we're using and tell the client that the buffer should be
> > > compatible with it but that's the extent of information we have, until
> > > we get a fence but that fence usually gets to the compositor pretty
> > > late. Way too late for the compositor to tell the kernel to ramp up
> > > the GPU and still have an impact.
> >
> > Are you sure about that?  I'd have expected the app to hand off
> > fence+buffer to the compositor pretty quickly after rendering is
> > submitted.  This is what I've seen elsewhere.
>
> After rendering is submitted it is already too late if the GPU takes
> 2ms to wake up. And if there is no rendering submitted there is no
> fence and if there is no fence it has no deadline. If we want to solve
> that we also need some kind of 'work will be submitted to the queue
> starting from this time' hint.

Right, which is why I view this as a different problem with a
different solution.

> > > It also seems like we're moving away from tracking execution
> > > dependencies with fences when we're switching to user mode fences.
> >
> > I suppose there are two cases..
> >
> > 1) dependent fences all from the same device.. no prob, the right
> > driver gets the signal that it needs to clk up, and figures out the
> > rest on its own
>
> AFAIU with user mode fences it's impossible for the kernel to figure
> out what work depends on it and it might never signal. The whole
> deadline on fences thing breaks down with user mode fences.

Given that user-mode fences are not a solved problem (and I don't
think they'll ever completely replace kernel fences), I don't think
this is relevant.

BR,
-R

> > 2) dependent fences from different devices.. I suppose if device B is
> > waiting for a fence from device A before it can make forward progress,
> > why not express this as a deadline hint on A's fence?  (But by the
> > time you have this problem, you are probably not really caring about
> > power consumption, so meh..)
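The "express it as a deadline hint on A's fence" idea amounts to forwarding the hint down the dependency chain, keeping the soonest value at every hop. A generic sketch (types and names hypothetical, not the series' API):

```c
#include <stdbool.h>
#include <stdint.h>

/* hypothetical fence with at most one recorded deadline and at most
 * one fence it depends on */
struct hyp_fence {
	bool     has_deadline;
	uint64_t deadline_ns;
	struct hyp_fence *dep;	/* fence this one waits on, if any */
};

/* forward a deadline hint down the dependency chain; stop as soon as
 * an equal-or-sooner hint is already recorded, since its own
 * dependencies were hinted when it was set */
static void hyp_fence_set_deadline(struct hyp_fence *f, uint64_t deadline_ns)
{
	for (; f; f = f->dep) {
		if (f->has_deadline && f->deadline_ns <= deadline_ns)
			break;
		f->has_deadline = true;
		f->deadline_ns = deadline_ns;
	}
}
```

The fence-array and fence-chain patches in this series do essentially this kind of propagation for their contained fences, though over their own structures rather than a single `dep` pointer.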
> >
> > BR,
> > -R
> >
> > > > > So in both cases a fence is just not the right primitive for us. We
> > > > > need to be able to provide per-context/queue information for work that
> > > > > will happen in the future and we need a way to refer to a
> > > > > context/queue generically and over IPC to boost the clocks of the
> > > > > device that a client is actually using and maybe even give priority.
> > > > >
> > > > > But like I said, having a per-fence deadline is probably still a good
> > > > > idea and doesn't conflict with any of the more coarse information.
> > > >
> > > > Yeah, I think the thing is you need multiple things, and this is only
> > > > one of them ;-)
> > > >
> > > > BR,
> > > > -R
> > > >
> > >
> >
>


end of thread, other threads:[~2023-03-01 16:02 UTC | newest]

Thread overview: 93+ messages
-- links below jump to the message on this page --
2023-02-18 21:15 [PATCH v4 00/14] dma-fence: Deadline awareness Rob Clark
2023-02-18 21:15 ` [PATCH v4 01/14] dma-buf/dma-fence: Add deadline awareness Rob Clark
2023-02-22 10:23   ` Tvrtko Ursulin
2023-02-22 15:28     ` Christian König
2023-02-22 17:04       ` Tvrtko Ursulin
2023-02-22 17:16         ` Rob Clark
2023-02-22 17:33           ` Tvrtko Ursulin
2023-02-22 18:57             ` Rob Clark
2023-02-22 11:01   ` Luben Tuikov
2023-02-18 21:15 ` [PATCH v4 02/14] dma-buf/fence-array: Add fence deadline support Rob Clark
2023-02-18 21:15 ` [PATCH v4 03/14] dma-buf/fence-chain: " Rob Clark
2023-02-22 10:27   ` Tvrtko Ursulin
2023-02-22 15:55     ` Rob Clark
2023-02-18 21:15 ` [PATCH v4 04/14] dma-buf/dma-resv: Add a way to set fence deadline Rob Clark
2023-02-20  8:16   ` Christian König
2023-02-18 21:15 ` [PATCH v4 05/14] dma-buf/sync_file: Add SET_DEADLINE ioctl Rob Clark
2023-02-20  8:27   ` Christian König
2023-02-20 16:09     ` Rob Clark
2023-02-21  8:41       ` Pekka Paalanen
2023-02-23  9:19       ` Christian König
2023-02-20  8:48   ` Pekka Paalanen
2023-02-18 21:15 ` [PATCH v4 06/14] dma-buf/sync_file: Support (E)POLLPRI Rob Clark
2023-02-20  8:31   ` Christian König
2023-02-21  8:38     ` Pekka Paalanen
2023-02-20  8:53   ` Pekka Paalanen
2023-02-20 16:14     ` Rob Clark
2023-02-21  8:37       ` Pekka Paalanen
2023-02-21 16:01         ` Sebastian Wick
2023-02-21 17:55           ` Rob Clark
2023-02-21 16:48       ` Luben Tuikov
2023-02-21 17:53         ` Rob Clark
2023-02-22  9:49           ` Pekka Paalanen
2023-02-22 10:26             ` Luben Tuikov
2023-02-22 15:37             ` Rob Clark
2023-02-23  9:38               ` Pekka Paalanen
2023-02-23 18:51                 ` Rob Clark
2023-02-24  9:26                   ` Pekka Paalanen
2023-02-24  9:41                     ` Tvrtko Ursulin
2023-02-24 10:24                       ` Pekka Paalanen
2023-02-24 10:50                         ` Tvrtko Ursulin
2023-02-24 11:00                           ` Pekka Paalanen
2023-02-24 11:37                             ` Tvrtko Ursulin
2023-02-24 15:26                               ` Luben Tuikov
2023-02-24 17:59                                 ` Rob Clark
2023-02-27 21:35                                   ` Rodrigo Vivi
2023-02-27 22:20                                     ` Rob Clark
2023-02-27 22:44                                       ` Sebastian Wick
2023-02-27 23:48                                         ` Rob Clark
2023-02-28 14:30                                           ` Sebastian Wick
2023-02-28 22:52                                             ` Rob Clark
2023-03-01 15:31                                               ` Sebastian Wick
2023-03-01 16:02                                                 ` Rob Clark
2023-03-01 15:45                                       ` Rodrigo Vivi
2023-02-24 16:59                         ` Rob Clark
2023-02-24 19:44                         ` Rob Clark
2023-02-27  9:34                           ` Pekka Paalanen
2023-02-27 18:43                             ` Rob Clark
2023-02-18 21:15 ` [PATCH v4 07/14] dma-buf/sw_sync: Add fence deadline support Rob Clark
2023-02-20  8:29   ` Christian König
2023-02-18 21:15 ` [PATCH v4 08/14] drm/scheduler: " Rob Clark
2023-02-21 19:40   ` Luben Tuikov
2023-02-18 21:15 ` [PATCH v4 09/14] drm/syncobj: Add deadline support for syncobj waits Rob Clark
2023-02-19 16:09   ` Rob Clark
2023-02-20  9:05   ` Pekka Paalanen
2023-02-20 16:20     ` Rob Clark
2023-02-24  9:51   ` Tvrtko Ursulin
2023-02-18 21:15 ` [PATCH v4 10/14] drm/vblank: Add helper to get next vblank time Rob Clark
2023-02-20  9:08   ` Pekka Paalanen
2023-02-20 15:55     ` Rob Clark
2023-02-21  8:45       ` Pekka Paalanen
2023-02-21 13:01         ` Ville Syrjälä
2023-02-21 13:11           ` Pekka Paalanen
2023-02-21 13:42             ` Ville Syrjälä
2023-02-21 17:50           ` Rob Clark
2023-02-22  9:57             ` Pekka Paalanen
2023-02-22 15:44               ` Rob Clark
2023-02-22 15:55                 ` Ville Syrjälä
2023-02-21 19:54           ` Rob Clark
2023-02-21 21:39             ` Ville Syrjälä
2023-02-21 21:48               ` Ville Syrjälä
2023-02-21 22:28                 ` [Freedreno] " Rob Clark
2023-02-21 22:46                   ` Ville Syrjälä
2023-02-21 23:20                     ` Rob Clark
2023-02-21 23:25                       ` Rob Clark
2023-02-22 10:37   ` Luben Tuikov
2023-02-22 15:48     ` Rob Clark
2023-02-18 21:15 ` [PATCH v4 11/14] drm/atomic-helper: Set fence deadline for vblank Rob Clark
2023-02-22 10:46   ` Luben Tuikov
2023-02-22 15:50     ` Rob Clark
2023-02-18 21:15 ` [PATCH v4 12/14] drm/msm: Add deadline based boost support Rob Clark
2023-02-18 21:15 ` [PATCH v4 13/14] drm/msm: Add wait-boost support Rob Clark
2023-02-18 21:15 ` [PATCH v4 14/14] drm/i915: Add deadline based boost support Rob Clark
2023-02-20 15:46   ` Tvrtko Ursulin
