* [PATCH v2 0/3] drm: commit_work scheduling
@ 2020-09-30 21:17 Rob Clark
  2020-09-30 21:17 ` [PATCH v2 1/3] drm/crtc: Introduce per-crtc kworker Rob Clark
                   ` (4 more replies)
  0 siblings, 5 replies; 28+ messages in thread
From: Rob Clark @ 2020-09-30 21:17 UTC (permalink / raw)
  To: dri-devel
  Cc: linux-arm-msm, Tejun Heo, timmurray, Daniel Vetter, Qais Yousef,
	Rob Clark, open list

From: Rob Clark <robdclark@chromium.org>

The android userspace treats the display pipeline as a realtime problem.
And arguably, if your goal is to not miss frame deadlines (ie. vblank),
it is.  (See https://lwn.net/Articles/809545/ for the best explanation
that I found.)

But this presents a problem with using workqueues for non-blocking
atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
preempt the worker.  Which is not really the outcome you want.. once
the required fences are scheduled, you want to push the atomic commit
down to hw ASAP.

But the decision of whether commit_work should be RT or not really
depends on what userspace is doing.  For a pure CFS userspace display
pipeline, commit_work() should remain SCHED_NORMAL.

To handle this, convert non-blocking commit_work() to use per-CRTC
kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
used to avoid serializing commits when userspace is using a per-CRTC
update loop.  And the last patch exposes the task id to userspace as
a CRTC property, so that userspace can adjust the priority and sched
policy to fit its needs.


v2: Drop client cap and in-kernel setting of priority/policy in
    favor of exposing the kworker tid to userspace so that user-
    space can set priority/policy.
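
For illustration, the userspace side could look roughly like the sketch
below.  This is not part of the series: it assumes stock libdrm, the
function name and priority value are made up, and error handling is
trimmed.  Note the property is created with DRM_MODE_PROP_ATOMIC, so
the client has to enable the atomic cap to see it:

#include <sched.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Hypothetical helper: look up the KWORK_TID property on a CRTC and
 * give the commit kworker an RT priority.
 */
static int crtc_worker_set_fifo(int fd, uint32_t crtc_id, int rt_prio)
{
	struct sched_param param = { .sched_priority = rt_prio };
	drmModeObjectProperties *props;
	pid_t tid = 0;
	uint32_t i;

	drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);

	props = drmModeObjectGetProperties(fd, crtc_id, DRM_MODE_OBJECT_CRTC);
	if (!props)
		return -1;

	for (i = 0; i < props->count_props; i++) {
		drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);

		if (prop && !strcmp(prop->name, "KWORK_TID"))
			tid = (pid_t)props->prop_values[i];
		drmModeFreeProperty(prop);
	}
	drmModeFreeObjectProperties(props);

	if (!tid)
		return -1;

	/* sched_setscheduler() applies to just the one thread on Linux */
	return sched_setscheduler(tid, SCHED_FIFO, &param);
}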

Rob Clark (3):
  drm/crtc: Introduce per-crtc kworker
  drm/atomic: Use kthread worker for nonblocking commits
  drm: Expose CRTC's kworker task id

 drivers/gpu/drm/drm_atomic_helper.c | 13 ++++++++----
 drivers/gpu/drm/drm_crtc.c          | 14 +++++++++++++
 drivers/gpu/drm/drm_mode_config.c   | 14 +++++++++++++
 drivers/gpu/drm/drm_mode_object.c   |  4 ++++
 include/drm/drm_atomic.h            | 31 +++++++++++++++++++++++++++++
 include/drm/drm_crtc.h              |  8 ++++++++
 include/drm/drm_mode_config.h       |  9 +++++++++
 include/drm/drm_property.h          |  9 +++++++++
 8 files changed, 98 insertions(+), 4 deletions(-)

-- 
2.26.2



* [PATCH v2 1/3] drm/crtc: Introduce per-crtc kworker
  2020-09-30 21:17 [PATCH v2 0/3] drm: commit_work scheduling Rob Clark
@ 2020-09-30 21:17 ` Rob Clark
  2020-09-30 21:17 ` [PATCH v2 2/3] drm/atomic: Use kthread worker for nonblocking commits Rob Clark
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 28+ messages in thread
From: Rob Clark @ 2020-09-30 21:17 UTC (permalink / raw)
  To: dri-devel
  Cc: linux-arm-msm, Tejun Heo, timmurray, Daniel Vetter, Qais Yousef,
	Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, open list

From: Rob Clark <robdclark@chromium.org>

This will be used for non-block atomic commits.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/gpu/drm/drm_crtc.c | 11 +++++++++++
 include/drm/drm_crtc.h     |  8 ++++++++
 2 files changed, 19 insertions(+)

diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index aecdd7ea26dc..4f7c0bfce0a3 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -326,6 +326,14 @@ int drm_crtc_init_with_planes(struct drm_device *dev, struct drm_crtc *crtc,
 					   config->prop_out_fence_ptr, 0);
 		drm_object_attach_property(&crtc->base,
 					   config->prop_vrr_enabled, 0);
+
+		crtc->worker = kthread_create_worker(0, "%s-worker", crtc->name);
+		if (IS_ERR(crtc->worker)) {
+			drm_mode_object_unregister(dev, &crtc->base);
+			ret = PTR_ERR(crtc->worker);
+			crtc->worker = NULL;
+			return ret;
+		}
 	}
 
 	return 0;
@@ -366,6 +374,9 @@ void drm_crtc_cleanup(struct drm_crtc *crtc)
 
 	kfree(crtc->name);
 
+	if (crtc->worker)
+		kthread_destroy_worker(crtc->worker);
+
 	memset(crtc, 0, sizeof(*crtc));
 }
 EXPORT_SYMBOL(drm_crtc_cleanup);
diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index 59b51a09cae6..dfdb04619b0d 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -30,6 +30,7 @@
 #include <linux/types.h>
 #include <linux/fb.h>
 #include <linux/hdmi.h>
+#include <linux/kthread.h>
 #include <linux/media-bus-format.h>
 #include <uapi/drm/drm_mode.h>
 #include <uapi/drm/drm_fourcc.h>
@@ -1172,6 +1173,13 @@ struct drm_crtc {
 	 * Initialized via drm_self_refresh_helper_init().
 	 */
 	struct drm_self_refresh_data *self_refresh_data;
+
+	/**
+	 * @worker:
+	 *
+	 * Per-CRTC worker for nonblock atomic commits.
+	 */
+	struct kthread_worker *worker;
 };
 
 /**
-- 
2.26.2



* [PATCH v2 2/3] drm/atomic: Use kthread worker for nonblocking commits
  2020-09-30 21:17 [PATCH v2 0/3] drm: commit_work scheduling Rob Clark
  2020-09-30 21:17 ` [PATCH v2 1/3] drm/crtc: Introduce per-crtc kworker Rob Clark
@ 2020-09-30 21:17 ` Rob Clark
  2020-09-30 21:17 ` [PATCH v2 3/3] drm: Expose CRTC's kworker task id Rob Clark
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 28+ messages in thread
From: Rob Clark @ 2020-09-30 21:17 UTC (permalink / raw)
  To: dri-devel
  Cc: linux-arm-msm, Tejun Heo, timmurray, Daniel Vetter, Qais Yousef,
	Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, open list

From: Rob Clark <robdclark@chromium.org>

This will allow us to more easily switch scheduling rules based on what
userspace wants.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/gpu/drm/drm_atomic_helper.c | 13 ++++++++----
 include/drm/drm_atomic.h            | 31 +++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 9e1ad493e689..75eeec5e7b10 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -1659,11 +1659,11 @@ static void commit_tail(struct drm_atomic_state *old_state)
 	drm_atomic_state_put(old_state);
 }
 
-static void commit_work(struct work_struct *work)
+static void commit_work(struct kthread_work *work)
 {
 	struct drm_atomic_state *state = container_of(work,
 						      struct drm_atomic_state,
-						      commit_work);
+						      commit_kwork);
 	commit_tail(state);
 }
 
@@ -1797,6 +1797,7 @@ int drm_atomic_helper_commit(struct drm_device *dev,
 			     struct drm_atomic_state *state,
 			     bool nonblock)
 {
+	struct kthread_worker *worker = NULL;
 	int ret;
 
 	if (state->async_update) {
@@ -1814,7 +1815,7 @@ int drm_atomic_helper_commit(struct drm_device *dev,
 	if (ret)
 		return ret;
 
-	INIT_WORK(&state->commit_work, commit_work);
+	kthread_init_work(&state->commit_kwork, commit_work);
 
 	ret = drm_atomic_helper_prepare_planes(dev, state);
 	if (ret)
@@ -1857,8 +1858,12 @@ int drm_atomic_helper_commit(struct drm_device *dev,
 	 */
 
 	drm_atomic_state_get(state);
+
 	if (nonblock)
-		queue_work(system_unbound_wq, &state->commit_work);
+		worker = drm_atomic_pick_worker(state);
+
+	if (worker)
+		kthread_queue_work(worker, &state->commit_kwork);
 	else
 		commit_tail(state);
 
diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h
index d07c851d255b..8d0ee19953df 100644
--- a/include/drm/drm_atomic.h
+++ b/include/drm/drm_atomic.h
@@ -373,8 +373,18 @@ struct drm_atomic_state {
 	 *
 	 * Work item which can be used by the driver or helpers to execute the
 	 * commit without blocking.
+	 *
+	 * This is deprecated; use @commit_kwork instead.
 	 */
 	struct work_struct commit_work;
+
+	/**
+	 * @commit_kwork:
+	 *
+	 * Work item which can be used by the driver or helpers to execute the
+	 * commit without blocking.
+	 */
+	struct kthread_work commit_kwork;
 };
 
 void __drm_crtc_commit_free(struct kref *kref);
@@ -954,6 +964,27 @@ void drm_state_dump(struct drm_device *dev, struct drm_printer *p);
 		      (new_obj_state) = (__state)->private_objs[__i].new_state, 1); \
 	     (__i)++)
 
+/**
+ * drm_atomic_pick_worker - helper to get kworker to use for nonblocking commit
+ * @state: the &drm_atomic_state for the commit
+ *
+ * Pick an appropriate worker for a given atomic update.  The first CRTC
+ * involved in the atomic update is used to pick the worker, to prevent
+ * serializing multiple pageflips / atomic-updates on independent CRTCs.
+ */
+static inline struct kthread_worker *
+drm_atomic_pick_worker(const struct drm_atomic_state *state)
+{
+	struct drm_crtc_state *crtc_state;
+	struct drm_crtc *crtc;
+	unsigned i;
+
+	for_each_new_crtc_in_state(state, crtc, crtc_state, i)
+		return crtc->worker;
+
+	return NULL;
+}
+
 /**
  * drm_atomic_crtc_needs_modeset - compute combined modeset need
  * @state: &drm_crtc_state for the CRTC
-- 
2.26.2



* [PATCH v2 3/3] drm: Expose CRTC's kworker task id
  2020-09-30 21:17 [PATCH v2 0/3] drm: commit_work scheduling Rob Clark
  2020-09-30 21:17 ` [PATCH v2 1/3] drm/crtc: Introduce per-crtc kworker Rob Clark
  2020-09-30 21:17 ` [PATCH v2 2/3] drm/atomic: Use kthread worker for nonblocking commits Rob Clark
@ 2020-09-30 21:17 ` Rob Clark
  2020-10-01  7:25 ` [PATCH v2 0/3] drm: commit_work scheduling Daniel Vetter
  2020-10-02 11:01 ` Qais Yousef
  4 siblings, 0 replies; 28+ messages in thread
From: Rob Clark @ 2020-09-30 21:17 UTC (permalink / raw)
  To: dri-devel
  Cc: linux-arm-msm, Tejun Heo, timmurray, Daniel Vetter, Qais Yousef,
	Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, open list

From: Rob Clark <robdclark@chromium.org>

This will allow userspace to control the scheduling policy and priority.
In particular if the userspace half of the display pipeline is SCHED_FIFO
then it will want to use the same scheduling policy and an appropriate
priority to ensure that it is not preempting commit_work.

Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/gpu/drm/drm_crtc.c        |  3 +++
 drivers/gpu/drm/drm_mode_config.c | 14 ++++++++++++++
 drivers/gpu/drm/drm_mode_object.c |  4 ++++
 include/drm/drm_mode_config.h     |  9 +++++++++
 include/drm/drm_property.h        |  9 +++++++++
 5 files changed, 39 insertions(+)

diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index 4f7c0bfce0a3..1828853542dc 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -334,6 +334,9 @@ int drm_crtc_init_with_planes(struct drm_device *dev, struct drm_crtc *crtc,
 			crtc->worker = NULL;
 			return ret;
 		}
+
+		drm_object_attach_property(&crtc->base,
+					   config->kwork_tid_property, 0);
 	}
 
 	return 0;
diff --git a/drivers/gpu/drm/drm_mode_config.c b/drivers/gpu/drm/drm_mode_config.c
index f1affc1bb679..b11a1fc8ed0d 100644
--- a/drivers/gpu/drm/drm_mode_config.c
+++ b/drivers/gpu/drm/drm_mode_config.c
@@ -215,6 +215,13 @@ static const struct drm_prop_enum_list drm_plane_type_enum_list[] = {
 	{ DRM_PLANE_TYPE_CURSOR, "Cursor" },
 };
 
+static int get_kwork_tid(struct drm_mode_object *obj, uint64_t *val)
+{
+	struct drm_crtc *crtc = obj_to_crtc(obj);
+	*val = task_pid_vnr(crtc->worker->task);
+	return 0;
+}
+
 static int drm_mode_create_standard_properties(struct drm_device *dev)
 {
 	struct drm_property *prop;
@@ -371,6 +378,13 @@ static int drm_mode_create_standard_properties(struct drm_device *dev)
 		return -ENOMEM;
 	dev->mode_config.modifiers_property = prop;
 
+	prop = drm_property_create_range(dev, DRM_MODE_PROP_ATOMIC,
+			"KWORK_TID", 0, UINT_MAX);
+	if (!prop)
+		return -ENOMEM;
+	prop->get_value = get_kwork_tid;
+	dev->mode_config.kwork_tid_property = prop;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/drm_mode_object.c b/drivers/gpu/drm/drm_mode_object.c
index db05f386a709..1a4df65baf0f 100644
--- a/drivers/gpu/drm/drm_mode_object.c
+++ b/drivers/gpu/drm/drm_mode_object.c
@@ -285,6 +285,7 @@ int drm_object_property_set_value(struct drm_mode_object *obj,
 
 	WARN_ON(drm_drv_uses_atomic_modeset(property->dev) &&
 		!(property->flags & DRM_MODE_PROP_IMMUTABLE));
+	WARN_ON(property->get_value);
 
 	for (i = 0; i < obj->properties->count; i++) {
 		if (obj->properties->properties[i] == property) {
@@ -303,6 +304,9 @@ static int __drm_object_property_get_value(struct drm_mode_object *obj,
 {
 	int i;
 
+	if (property->get_value)
+		return property->get_value(obj, val);
+
 	/* read-only properties bypass atomic mechanism and still store
 	 * their value in obj->properties->values[].. mostly to avoid
 	 * having to deal w/ EDID and similar props in atomic paths:
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index c2d3d71d133c..7244df926a6d 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -926,6 +926,15 @@ struct drm_mode_config {
 	 */
 	struct drm_property *modifiers_property;
 
+	/**
+	 * @kwork_tid_property: CRTC property to expose the task-id of the per-
+	 * CRTC kthread-worker, used for non-block atomic commit.  This is exposed
+	 * to userspace, to allow userspace to control the scheduling policy and
+	 * priority, as this is a decision that depends on how userspace structures
+	 * its rendering pipeline.
+	 */
+	struct drm_property *kwork_tid_property;
+
 	/* cursor size */
 	uint32_t cursor_width, cursor_height;
 
diff --git a/include/drm/drm_property.h b/include/drm/drm_property.h
index 4a0a80d658c7..6843be6aa3ec 100644
--- a/include/drm/drm_property.h
+++ b/include/drm/drm_property.h
@@ -188,6 +188,15 @@ struct drm_property {
 	 * enum and bitmask values.
 	 */
 	struct list_head enum_list;
+
+	/**
+	 * @get_value: accessor to get current value for "virtual" properties
+	 *
+	 * For properties with dynamic values, where it is for whatever reason
+	 * not feasible to keep updated with drm_object_property_set_value(),
+	 * this callback can be used to retrieve the current value on demand.
+	 */
+	int (*get_value)(struct drm_mode_object *obj, uint64_t *val);
 };
 
 /**
-- 
2.26.2



* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-09-30 21:17 [PATCH v2 0/3] drm: commit_work scheduling Rob Clark
                   ` (2 preceding siblings ...)
  2020-09-30 21:17 ` [PATCH v2 3/3] drm: Expose CRTC's kworker task id Rob Clark
@ 2020-10-01  7:25 ` Daniel Vetter
  2020-10-01 15:15   ` Rob Clark
  2020-10-02 11:01 ` Qais Yousef
  4 siblings, 1 reply; 28+ messages in thread
From: Daniel Vetter @ 2020-10-01  7:25 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Qais Yousef,
	Rob Clark, open list

On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark@gmail.com> wrote:
>
> From: Rob Clark <robdclark@chromium.org>
>
> The android userspace treats the display pipeline as a realtime problem.
> And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> that I found.)
>
> But this presents a problem with using workqueues for non-blocking
> atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> preempt the worker.  Which is not really the outcome you want.. once
> the required fences are scheduled, you want to push the atomic commit
> down to hw ASAP.
>
> But the decision of whether commit_work should be RT or not really
> depends on what userspace is doing.  For a pure CFS userspace display
> pipeline, commit_work() should remain SCHED_NORMAL.
>
> To handle this, convert non-blocking commit_work() to use per-CRTC
> kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> used to avoid serializing commits when userspace is using a per-CRTC
> update loop.  And the last patch exposes the task id to userspace as
> a CRTC property, so that userspace can adjust the priority and sched
> policy to fit its needs.
>
>
> v2: Drop client cap and in-kernel setting of priority/policy in
>     favor of exposing the kworker tid to userspace so that user-
>     space can set priority/policy.

Yeah I think this looks more reasonable. Still a bit irky interface,
so I'd like to get some kworker/rt ack on this. Other opens:
- needs userspace, the usual drill
- we need this also for vblank workers, otherwise this won't work for
drivers needing those because of another priority inversion.
- we probably want some indication of whether this actually does
something useful, not all drivers use atomic commit helpers. Not sure
how to do that.
- not sure whether the vfunc is an awesome idea, I'd frankly just
open-code this inline. We have similar special cases already for e.g.
dpms (and in multiple places), this isn't the worst.
- still feeling we could at least change the default to high-priority niceness.
- there's still the problem that commit works can overlap, and a
single worker can't do that anymore. So rolling that out for everyone
as-is feels a bit risky.

Cheers, Daniel

>
> Rob Clark (3):
>   drm/crtc: Introduce per-crtc kworker
>   drm/atomic: Use kthread worker for nonblocking commits
>   drm: Expose CRTC's kworker task id
>
>  drivers/gpu/drm/drm_atomic_helper.c | 13 ++++++++----
>  drivers/gpu/drm/drm_crtc.c          | 14 +++++++++++++
>  drivers/gpu/drm/drm_mode_config.c   | 14 +++++++++++++
>  drivers/gpu/drm/drm_mode_object.c   |  4 ++++
>  include/drm/drm_atomic.h            | 31 +++++++++++++++++++++++++++++
>  include/drm/drm_crtc.h              |  8 ++++++++
>  include/drm/drm_mode_config.h       |  9 +++++++++
>  include/drm/drm_property.h          |  9 +++++++++
>  8 files changed, 98 insertions(+), 4 deletions(-)
>
> --
> 2.26.2
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-01  7:25 ` [PATCH v2 0/3] drm: commit_work scheduling Daniel Vetter
@ 2020-10-01 15:15   ` Rob Clark
  2020-10-01 15:25     ` Daniel Vetter
  0 siblings, 1 reply; 28+ messages in thread
From: Rob Clark @ 2020-10-01 15:15 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Qais Yousef,
	Rob Clark, open list

On Thu, Oct 1, 2020 at 12:25 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark@gmail.com> wrote:
> >
> > From: Rob Clark <robdclark@chromium.org>
> >
> > The android userspace treats the display pipeline as a realtime problem.
> > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> > that I found.)
> >
> > But this presents a problem with using workqueues for non-blocking
> > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > preempt the worker.  Which is not really the outcome you want.. once
> > the required fences are scheduled, you want to push the atomic commit
> > down to hw ASAP.
> >
> > But the decision of whether commit_work should be RT or not really
> > depends on what userspace is doing.  For a pure CFS userspace display
> > pipeline, commit_work() should remain SCHED_NORMAL.
> >
> > To handle this, convert non-blocking commit_work() to use per-CRTC
> > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > used to avoid serializing commits when userspace is using a per-CRTC
> > update loop.  And the last patch exposes the task id to userspace as
> > a CRTC property, so that userspace can adjust the priority and sched
> > policy to fit its needs.
> >
> >
> > v2: Drop client cap and in-kernel setting of priority/policy in
> >     favor of exposing the kworker tid to userspace so that user-
> >     space can set priority/policy.
>
> Yeah I think this looks more reasonable. Still a bit irky interface,
> so I'd like to get some kworker/rt ack on this. Other opens:
> - needs userspace, the usual drill

fwiw, right now the userspace is "modetest + chrt".. *probably* the
userspace will become a standalone helper or daemon, mostly because
the chrome gpu-process sandbox does not allow setting SCHED_FIFO.  I'm
still entertaining the possibility of switching between rt and cfs
depending on what is in the foreground (ie. only do rt for android
apps).

> - we need this also for vblank workers, otherwise this wont work for
> drivers needing those because of another priority inversion.

I have a thought on that, see below..

> - we probably want some indication of whether this actually does
> something useful, not all drivers use atomic commit helpers. Not sure
> how to do that.

I'm leaning towards converting the other drivers over to use the
per-crtc kwork, and then dropping the `commit_work` from atomic state.
I can add a patch for that, but figured I could postpone that churn
until there is some buy-in on this whole idea.

> - not sure whether the vfunc is an awesome idea, I'd frankly just
> open-code this inline. We have similar special cases already for e.g.
> dpms (and in multiple places), this isn't the worst.

I could introduce a "PID" property type.  This would be useful if we
wanted to also expose the vblank workers.

> - still feeling we could at least change the default to high-priority niceness.

AFAIU this would still be preempted by something that is SCHED_FIFO.
Also, with cfs/SCHED_NORMAL, you can still be preempted by lower
priority things that haven't had a chance to run for a while.

> - there's still the problem that commit works can overlap, and a
> single worker can't do that anymore. So rolling that out for everyone
> as-is feels a bit risky.

That is why it is per-CRTC..  I'm not sure there is a need to overlap
work for a single CRTC?

We could OFC make this a driver knob, and keep the old 'commit_work'
option, but that doesn't really feel like the right direction

BR,
-R

> Cheers, Daniel
>
> >
> > Rob Clark (3):
> >   drm/crtc: Introduce per-crtc kworker
> >   drm/atomic: Use kthread worker for nonblocking commits
> >   drm: Expose CRTC's kworker task id
> >
> >  drivers/gpu/drm/drm_atomic_helper.c | 13 ++++++++----
> >  drivers/gpu/drm/drm_crtc.c          | 14 +++++++++++++
> >  drivers/gpu/drm/drm_mode_config.c   | 14 +++++++++++++
> >  drivers/gpu/drm/drm_mode_object.c   |  4 ++++
> >  include/drm/drm_atomic.h            | 31 +++++++++++++++++++++++++++++
> >  include/drm/drm_crtc.h              |  8 ++++++++
> >  include/drm/drm_mode_config.h       |  9 +++++++++
> >  include/drm/drm_property.h          |  9 +++++++++
> >  8 files changed, 98 insertions(+), 4 deletions(-)
> >
> > --
> > 2.26.2
> >
>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch


* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-01 15:15   ` Rob Clark
@ 2020-10-01 15:25     ` Daniel Vetter
  2020-10-02 10:52       ` Ville Syrjälä
  0 siblings, 1 reply; 28+ messages in thread
From: Daniel Vetter @ 2020-10-01 15:25 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Qais Yousef,
	Rob Clark, open list

On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
>
> On Thu, Oct 1, 2020 at 12:25 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> >
> > On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > The android userspace treats the display pipeline as a realtime problem.
> > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> > > that I found.)
> > >
> > > But this presents a problem with using workqueues for non-blocking
> > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > preempt the worker.  Which is not really the outcome you want.. once
> > > the required fences are scheduled, you want to push the atomic commit
> > > down to hw ASAP.
> > >
> > > But the decision of whether commit_work should be RT or not really
> > > depends on what userspace is doing.  For a pure CFS userspace display
> > > pipeline, commit_work() should remain SCHED_NORMAL.
> > >
> > > To handle this, convert non-blocking commit_work() to use per-CRTC
> > > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > > used to avoid serializing commits when userspace is using a per-CRTC
> > > update loop.  And the last patch exposes the task id to userspace as
> > > a CRTC property, so that userspace can adjust the priority and sched
> > > policy to fit its needs.
> > >
> > >
> > > v2: Drop client cap and in-kernel setting of priority/policy in
> > >     favor of exposing the kworker tid to userspace so that user-
> > >     space can set priority/policy.
> >
> > Yeah I think this looks more reasonable. Still a bit irky interface,
> > so I'd like to get some kworker/rt ack on this. Other opens:
> > - needs userspace, the usual drill
>
> fwiw, right now the userspace is "modetest + chrt".. *probably* the
> userspace will become a standalone helper or daemon, mostly because
> the chrome gpu-process sandbox does not allow setting SCHED_FIFO.  I'm
> still entertaining the possibility of switching between rt and cfs
> depending on what is in the foreground (ie. only do rt for android
> apps).
>
> - we need this also for vblank workers, otherwise this won't work for
> > drivers needing those because of another priority inversion.
>
> I have a thought on that, see below..

Hm, not seeing anything about vblank worker below?

> > - we probably want some indication of whether this actually does
> > something useful, not all drivers use atomic commit helpers. Not sure
> > how to do that.
>
> I'm leaning towards converting the other drivers over to use the
> per-crtc kwork, and then dropping the `commit_work` from atomic state.
> I can add a patch for that, but figured I could postpone that churn
> until there is some buy-in on this whole idea.

i915 has its own commit code, it's not even using the current commit
helpers (nor the commit_work). Not sure how much other fun there is.

> > - not sure whether the vfunc is an awesome idea, I'd frankly just
> > open-code this inline. We have similar special cases already for e.g.
> > dpms (and in multiple places), this isn't the worst.
>
> I could introduce a "PID" property type.  This would be useful if we
> wanted to also expose the vblank workers.

Hm right, but I think we need at most 2 for commit thread and vblank
thread (at least with the current design). So an open-coded check with
if (prop == crtc_worker_pid_prop || prop ==
crtc_vblank_worker_pid_prop) doesn't seem too horrible to me.
Otherwise people start creating really funny stuff in their drivers
with this backend, and I don't want to spend all the time making sure
they don't abuse this :-)

> > - still feeling we could at least change the default to high-priority niceness.
>
> AFAIU this would still be preempted by something that is SCHED_FIFO.
> Also, with cfs/SCHED_NORMAL, you can still be preempted by lower
> priority things that haven't had a chance to run for a while.

i915 uses a highprio workqueue, so I guess to avoid regressions we need
that (it's also not using the commit helpers right now, but no reason
not to afaics; stuff simply happened in parallel back then).

> > - there's still the problem that commit works can overlap, and a
> > single worker can't do that anymore. So rolling that out for everyone
> > as-is feels a bit risky.
>
> That is why it is per-CRTC..  I'm not sure there is a need to overlap
> work for a single CRTC?
>
> We could OFC make this a driver knob, and keep the old 'commit_work'
> option, but that doesn't really feel like the right direction

drm_atomic_helper_commit_hw_done is what unblocks the next worker on
the same set of crtcs. It's before we do all the buffer cleanup, which
has a full vblank stall beforehand. Most drivers also have the same
vblank stall in their next commit, plus generally the fb cleanup is
cheap, but neither is a requirement. So yeah, you can get overlapping
commit work on the same crtc, and that was kinda intentional. Maybe it
was overkill; I guess minimally it's something that needs to be in the
commit message.
-Daniel

>
> BR,
> -R
>
> > Cheers, Daniel
> >
> > >
> > > Rob Clark (3):
> > >   drm/crtc: Introduce per-crtc kworker
> > >   drm/atomic: Use kthread worker for nonblocking commits
> > >   drm: Expose CRTC's kworker task id
> > >
> > >  drivers/gpu/drm/drm_atomic_helper.c | 13 ++++++++----
> > >  drivers/gpu/drm/drm_crtc.c          | 14 +++++++++++++
> > >  drivers/gpu/drm/drm_mode_config.c   | 14 +++++++++++++
> > >  drivers/gpu/drm/drm_mode_object.c   |  4 ++++
> > >  include/drm/drm_atomic.h            | 31 +++++++++++++++++++++++++++++
> > >  include/drm/drm_crtc.h              |  8 ++++++++
> > >  include/drm/drm_mode_config.h       |  9 +++++++++
> > >  include/drm/drm_property.h          |  9 +++++++++
> > >  8 files changed, 98 insertions(+), 4 deletions(-)
> > >
> > > --
> > > 2.26.2
> > >
> >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-01 15:25     ` Daniel Vetter
@ 2020-10-02 10:52       ` Ville Syrjälä
  2020-10-02 11:05         ` Ville Syrjälä
  0 siblings, 1 reply; 28+ messages in thread
From: Ville Syrjälä @ 2020-10-02 10:52 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Rob Clark, Rob Clark, linux-arm-msm, open list, Tim Murray,
	dri-devel, Tejun Heo, Qais Yousef

On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Thu, Oct 1, 2020 at 12:25 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > >
> > > On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > From: Rob Clark <robdclark@chromium.org>
> > > >
> > > > The android userspace treats the display pipeline as a realtime problem.
> > > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > > it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> > > > that I found.)
> > > >
> > > > But this presents a problem with using workqueues for non-blocking
> > > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > > preempt the worker.  Which is not really the outcome you want.. once
> > > > the required fences are scheduled, you want to push the atomic commit
> > > > down to hw ASAP.
> > > >
> > > > But the decision of whether commit_work should be RT or not really
> > > > depends on what userspace is doing.  For a pure CFS userspace display
> > > > pipeline, commit_work() should remain SCHED_NORMAL.
> > > >
> > > > To handle this, convert non-blocking commit_work() to use per-CRTC
> > > > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > > > used to avoid serializing commits when userspace is using a per-CRTC
> > > > update loop.  And the last patch exposes the task id to userspace as
> > > > a CRTC property, so that userspace can adjust the priority and sched
> > > > policy to fit its needs.
> > > >
> > > >
> > > > v2: Drop client cap and in-kernel setting of priority/policy in
> > > >     favor of exposing the kworker tid to userspace so that user-
> > > >     space can set priority/policy.
> > >
> > > Yeah I think this looks more reasonable. Still a bit irky interface,
> > > so I'd like to get some kworker/rt ack on this. Other opens:
> > > - needs userspace, the usual drill
> >
> > fwiw, right now the userspace is "modetest + chrt".. *probably* the
> > userspace will become a standalone helper or daemon, mostly because
> > the chrome gpu-process sandbox does not allow setting SCHED_FIFO.  I'm
> > still entertaining the possibility of switching between rt and cfs
> > depending on what is in the foreground (ie. only do rt for android
> > apps).
> >
> > > - we need this also for vblank workers, otherwise this won't work for
> > > drivers needing those because of another priority inversion.
> >
> > I have a thought on that, see below..
> 
> Hm, not seeing anything about vblank worker below?
> 
> > > - we probably want some indication of whether this actually does
> > > something useful, not all drivers use atomic commit helpers. Not sure
> > > how to do that.
> >
> > I'm leaning towards converting the other drivers over to use the
> > per-crtc kwork, and then dropping the `commit_work` from atomic state.
> > I can add a patch for that, but figured I could postpone that churn
> > until there is some buy-in on this whole idea.
> 
> i915 has its own commit code, it's not even using the current commit
> helpers (nor the commit_work). Not sure how much other fun there is.

I don't think we want per-crtc threads for this in i915. Seems
to me easier to guarantee atomicity across multiple crtcs if
we just commit them from the same thread.

-- 
Ville Syrjälä
Intel


* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-09-30 21:17 [PATCH v2 0/3] drm: commit_work scheduling Rob Clark
                   ` (3 preceding siblings ...)
  2020-10-01  7:25 ` [PATCH v2 0/3] drm: commit_work scheduling Daniel Vetter
@ 2020-10-02 11:01 ` Qais Yousef
  2020-10-02 18:07   ` Rob Clark
  4 siblings, 1 reply; 28+ messages in thread
From: Qais Yousef @ 2020-10-02 11:01 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, linux-arm-msm, Tejun Heo, timmurray, Daniel Vetter,
	Rob Clark, open list

On 09/30/20 14:17, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> The android userspace treats the display pipeline as a realtime problem.
> And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> that I found.)
> 
> But this presents a problem with using workqueues for non-blocking
> atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> preempt the worker.  Which is not really the outcome you want.. once
> the required fences are scheduled, you want to push the atomic commit
> down to hw ASAP.

For me, these 2 properties

	1. Run ASAP
	2. Finish the work un-interrupted

scream that the workers need to be SCHED_FIFO by default. CFS can't give you these
guarantees.

IMO using sched_set_fifo() for these workers is the right thing.
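
As a minimal sketch of that on top of patch 1 (the function name is
made up, and this is the alternative being suggested, not what the
series does; sched_set_fifo() applies the kernel's default RT
priority):

#include <drm/drm_crtc.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

/* Sketch only: create the per-CRTC worker and promote it to the
 * default SCHED_FIFO priority in the kernel, instead of exposing the
 * tid and leaving the policy decision to userspace.
 */
static int drm_crtc_create_rt_worker(struct drm_crtc *crtc)
{
	crtc->worker = kthread_create_worker(0, "%s-worker", crtc->name);
	if (IS_ERR(crtc->worker)) {
		int ret = PTR_ERR(crtc->worker);

		crtc->worker = NULL;
		return ret;
	}

	sched_set_fifo(crtc->worker->task);
	return 0;
}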

> 
> But the decision of whether commit_work should be RT or not really
> depends on what userspace is doing.  For a pure CFS userspace display
> pipeline, commit_work() should remain SCHED_NORMAL.

I'm not sure I agree with this. I think it's better to characterize tasks based
on their properties/requirements rather than what the rest of the userspace is
using.

I do appreciate that maybe some of these tasks have varying requirements during
their lifetime, e.g. they have an RT property during a specific critical section
but otherwise are CFS tasks. I think the UI thread in Android behaves like
that.

It's worth IMO trying that approach I pointed out earlier to see if making RT
try to pick an idle CPU rather than preempt CFS helps. Not sure if it'd be
accepted but IMHO it's a better direction to consider and discuss.

Or maybe you can wrap the userspace pipeline critical-section lock such that any
task holding it will automatically be promoted to SCHED_FIFO and then demoted
to CFS once it releases it.
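
i.e. something along the lines of this userspace sketch, where the
lock and helper names are hypothetical and pthread_setschedparam()
does the per-thread promote/demote:

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t pipeline_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical wrappers: boost the calling thread to SCHED_FIFO while
 * it holds the pipeline lock, drop back to SCHED_OTHER on unlock.
 */
static void pipeline_lock_rt(int rt_prio)
{
	struct sched_param param = { .sched_priority = rt_prio };

	pthread_mutex_lock(&pipeline_lock);
	pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
}

static void pipeline_unlock_rt(void)
{
	struct sched_param param = { .sched_priority = 0 };

	pthread_setschedparam(pthread_self(), SCHED_OTHER, &param);
	pthread_mutex_unlock(&pipeline_lock);
}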

Haven't worked with display pipelines before, so hopefully this makes sense :-)

Thanks

--
Qais Yousef

> 
> To handle this, convert non-blocking commit_work() to use per-CRTC
> kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> used to avoid serializing commits when userspace is using a per-CRTC
> update loop.  And the last patch exposes the task id to userspace as
> a CRTC property, so that userspace can adjust the priority and sched
> policy to fit its needs.
> 
> 
> v2: Drop client cap and in-kernel setting of priority/policy in
>     favor of exposing the kworker tid to userspace so that user-
>     space can set priority/policy.
> 
> Rob Clark (3):
>   drm/crtc: Introduce per-crtc kworker
>   drm/atomic: Use kthread worker for nonblocking commits
>   drm: Expose CRTC's kworker task id
> 
>  drivers/gpu/drm/drm_atomic_helper.c | 13 ++++++++----
>  drivers/gpu/drm/drm_crtc.c          | 14 +++++++++++++
>  drivers/gpu/drm/drm_mode_config.c   | 14 +++++++++++++
>  drivers/gpu/drm/drm_mode_object.c   |  4 ++++
>  include/drm/drm_atomic.h            | 31 +++++++++++++++++++++++++++++
>  include/drm/drm_crtc.h              |  8 ++++++++
>  include/drm/drm_mode_config.h       |  9 +++++++++
>  include/drm/drm_property.h          |  9 +++++++++
>  8 files changed, 98 insertions(+), 4 deletions(-)
> 
> -- 
> 2.26.2
> 


* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-02 10:52       ` Ville Syrjälä
@ 2020-10-02 11:05         ` Ville Syrjälä
  2020-10-02 17:55           ` Rob Clark
  0 siblings, 1 reply; 28+ messages in thread
From: Ville Syrjälä @ 2020-10-02 11:05 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Rob Clark, Rob Clark, linux-arm-msm, open list, Tim Murray,
	dri-devel, Tejun Heo, Qais Yousef

On Fri, Oct 02, 2020 at 01:52:56PM +0300, Ville Syrjälä wrote:
> On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> > On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > On Thu, Oct 1, 2020 at 12:25 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > >
> > > > On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > The android userspace treats the display pipeline as a realtime problem.
> > > > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > > > it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> > > > > that I found.)
> > > > >
> > > > > But this presents a problem with using workqueues for non-blocking
> > > > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > > > preempt the worker.  Which is not really the outcome you want.. once
> > > > > the required fences are scheduled, you want to push the atomic commit
> > > > > down to hw ASAP.
> > > > >
> > > > > But the decision of whether commit_work should be RT or not really
> > > > > depends on what userspace is doing.  For a pure CFS userspace display
> > > > > pipeline, commit_work() should remain SCHED_NORMAL.
> > > > >
> > > > > To handle this, convert non-blocking commit_work() to use per-CRTC
> > > > > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > > > > used to avoid serializing commits when userspace is using a per-CRTC
> > > > > update loop.  And the last patch exposes the task id to userspace as
> > > > > a CRTC property, so that userspace can adjust the priority and sched
> > > > > policy to fit its needs.
> > > > >
> > > > >
> > > > > v2: Drop client cap and in-kernel setting of priority/policy in
> > > > >     favor of exposing the kworker tid to userspace so that user-
> > > > >     space can set priority/policy.
> > > >
> > > > Yeah I think this looks more reasonable. Still a bit irky interface,
> > > > so I'd like to get some kworker/rt ack on this. Other opens:
> > > > - needs userspace, the usual drill
> > >
> > > fwiw, right now the userspace is "modetest + chrt".. *probably* the
> > > userspace will become a standalone helper or daemon, mostly because
> > > the chrome gpu-process sandbox does not allow setting SCHED_FIFO.  I'm
> > > still entertaining the possibility of switching between rt and cfs
> > > depending on what is in the foreground (ie. only do rt for android
> > > apps).
> > >
> > > > > - we need this also for vblank workers, otherwise this won't work for
> > > > drivers needing those because of another priority inversion.
> > >
> > > I have a thought on that, see below..
> > 
> > Hm, not seeing anything about vblank worker below?
> > 
> > > > - we probably want some indication of whether this actually does
> > > > something useful, not all drivers use atomic commit helpers. Not sure
> > > > how to do that.
> > >
> > > I'm leaning towards converting the other drivers over to use the
> > > per-crtc kwork, and then dropping the `commit_work` from atomic state.
> > > I can add a patch for that, but figured I could postpone that churn
> > > until there is some buy-in on this whole idea.
> > 
> > i915 has its own commit code, it's not even using the current commit
> > helpers (nor the commit_work). Not sure how much other fun there is.
> 
> I don't think we want per-crtc threads for this in i915. Seems
> to me easier to guarantee atomicity across multiple crtcs if
> we just commit them from the same thread.

Oh, and we may have to commit things in a very specific order
to guarantee the hw doesn't fall over, so yeah definitely per-crtc
thread is a no go.

I don't even understand the serialization argument. If the commits
are truly independent then why isn't the unbound wq enough to avoid
the serialization? It should just spin up a new thread for each commit
no?

-- 
Ville Syrjälä
Intel


* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-02 11:05         ` Ville Syrjälä
@ 2020-10-02 17:55           ` Rob Clark
  2020-10-05 12:15             ` Ville Syrjälä
  0 siblings, 1 reply; 28+ messages in thread
From: Rob Clark @ 2020-10-02 17:55 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Daniel Vetter, Rob Clark, linux-arm-msm, open list, Tim Murray,
	dri-devel, Tejun Heo, Qais Yousef

On Fri, Oct 2, 2020 at 4:05 AM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
>
> On Fri, Oct 02, 2020 at 01:52:56PM +0300, Ville Syrjälä wrote:
> > On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> > > On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > On Thu, Oct 1, 2020 at 12:25 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > >
> > > > > On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > >
> > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > >
> > > > > > The android userspace treats the display pipeline as a realtime problem.
> > > > > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > > > > it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> > > > > > that I found.)
> > > > > >
> > > > > > But this presents a problem with using workqueues for non-blocking
> > > > > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > > > > preempt the worker.  Which is not really the outcome you want.. once
> > > > > > the required fences are scheduled, you want to push the atomic commit
> > > > > > down to hw ASAP.
> > > > > >
> > > > > > But the decision of whether commit_work should be RT or not really
> > > > > > depends on what userspace is doing.  For a pure CFS userspace display
> > > > > > pipeline, commit_work() should remain SCHED_NORMAL.
> > > > > >
> > > > > > To handle this, convert non-blocking commit_work() to use per-CRTC
> > > > > > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > > > > > used to avoid serializing commits when userspace is using a per-CRTC
> > > > > > update loop.  And the last patch exposes the task id to userspace as
> > > > > > a CRTC property, so that userspace can adjust the priority and sched
> > > > > > policy to fit its needs.
> > > > > >
> > > > > >
> > > > > > v2: Drop client cap and in-kernel setting of priority/policy in
> > > > > >     favor of exposing the kworker tid to userspace so that user-
> > > > > >     space can set priority/policy.
> > > > >
> > > > > Yeah I think this looks more reasonable. Still a bit irky interface,
> > > > > so I'd like to get some kworker/rt ack on this. Other opens:
> > > > > - needs userspace, the usual drill
> > > >
> > > > fwiw, right now the userspace is "modetest + chrt".. *probably* the
> > > > userspace will become a standalone helper or daemon, mostly because
> > > > the chrome gpu-process sandbox does not allow setting SCHED_FIFO.  I'm
> > > > still entertaining the possibility of switching between rt and cfs
> > > > depending on what is in the foreground (ie. only do rt for android
> > > > apps).
> > > >
> > > > > - we need this also for vblank workers, otherwise this won't work for
> > > > > drivers needing those because of another priority inversion.
> > > >
> > > > I have a thought on that, see below..
> > >
> > > Hm, not seeing anything about vblank worker below?
> > >
> > > > > - we probably want some indication of whether this actually does
> > > > > something useful, not all drivers use atomic commit helpers. Not sure
> > > > > how to do that.
> > > >
> > > > I'm leaning towards converting the other drivers over to use the
> > > > per-crtc kwork, and then dropping the `commit_work` from atomic state.
> > > > I can add a patch for that, but figured I could postpone that churn
> > > > until there is some buy-in on this whole idea.
> > >
> > > i915 has its own commit code, it's not even using the current commit
> > > helpers (nor the commit_work). Not sure how much other fun there is.
> >
> > I don't think we want per-crtc threads for this in i915. Seems
> > to me easier to guarantee atomicity across multiple crtcs if
> > we just commit them from the same thread.
>
> Oh, and we may have to commit things in a very specific order
> to guarantee the hw doesn't fall over, so yeah definitely per-crtc
> thread is a no go.

If I'm understanding the i915 code, this is only the case for modeset
commits?  I suppose we could achieve the same result by just deciding
to pick the kthread of the first CRTC for modeset commits.  I'm not
really so much concerned about parallelism for modeset.

> I don't even understand the serialization argument. If the commits
> are truly independent then why isn't the unbound wq enough to avoid
> the serialization? It should just spin up a new thread for each commit
> no?

The problem with wq is prioritization and SCHED_FIFO userspace
components stomping on the feet of commit_work.  That is the entire
motivation of this series in the first place, so no, we cannot use
unbound wq.

BR,
-R


* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-02 11:01 ` Qais Yousef
@ 2020-10-02 18:07   ` Rob Clark
  2020-10-05 15:00     ` Qais Yousef
  0 siblings, 1 reply; 28+ messages in thread
From: Rob Clark @ 2020-10-02 18:07 UTC (permalink / raw)
  To: Qais Yousef
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter,
	Rob Clark, open list

On Fri, Oct 2, 2020 at 4:01 AM Qais Yousef <qais.yousef@arm.com> wrote:
>
> On 09/30/20 14:17, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > The android userspace treats the display pipeline as a realtime problem.
> > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> > that I found.)
> >
> > But this presents a problem with using workqueues for non-blocking
> > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > preempt the worker.  Which is not really the outcome you want.. once
> > the required fences are scheduled, you want to push the atomic commit
> > down to hw ASAP.
>
> For me, these 2 properties
>
>         1. Run ASAP
>         2. Finish the work un-interrupted
>
> scream that the workers need to be SCHED_FIFO by default. CFS can't give you these
> guarantees.

fwiw, commit_work does sleep/block for some time until fences are
signalled, but then once that happens we want it to run ASAP,
preempting lower-priority SCHED_FIFO tasks.

>
> IMO using sched_set_fifo() for these workers is the right thing.
>

Possibly, but we still have limited prioritization options (ie. not
enough) to set these from the kernel.  Giving userspace the control,
so it can pick priorities for commit_work and vblank_work that fit in
with the priorities of its other userspace threads, seems like the
sensible thing.

> >
> > But the decision of whether commit_work should be RT or not really
> > depends on what userspace is doing.  For a pure CFS userspace display
> > pipeline, commit_work() should remain SCHED_NORMAL.
>
> I'm not sure I agree with this. I think it's better to characterize tasks based
> on their properties/requirements rather than what the rest of the userspace is
> using.

I mean, the issue is that userspace is already using a few different
rt priority levels for different SF threads.  We want commit_work to
run ASAP once fences are signalled, and vblank_work to run at a
slightly higher priority still.  But the correct choice for priorities
here depends on what userspace is using; it all needs to fit together
properly.

>
> I do appreciate that maybe some of these tasks have varying requirements during
> their lifetime, e.g. they have an RT property during a specific critical section
> but otherwise are CFS tasks. I think the UI thread in Android behaves like
> that.
>
> It's worth IMO trying that approach I pointed out earlier to see if making RT
> try to pick an idle CPU rather than preempt CFS helps. Not sure if it'd be
> accepted but IMHO it's a better direction to consider and discuss.

The problem I was seeing was actually the opposite..  commit_work
becomes runnable (fences signalled) but doesn't get a chance to run
because a SCHED_FIFO SF thread is running.  (Maybe I misunderstood and
your approach would help this case too?)

> Or maybe you can wrap the userspace pipeline critical-section lock such that any
> task holding it will automatically be promoted to SCHED_FIFO and then demoted
> to CFS once it releases it.

The SCHED_DEADLINE + token passing approach that the lwn article
mentioned sounds interesting, if that eventually becomes possible.
But doesn't really help today..
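
(For reference, SCHED_DEADLINE itself can be requested today via the
raw sched_setattr() syscall; it is the deadline inheritance / token
passing across the pipeline that is missing.  A sketch with made-up
60Hz numbers:)

#define _GNU_SOURCE
#include <linux/sched.h>
#include <linux/sched/types.h>	/* struct sched_attr */
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: reserve a 2ms runtime budget per ~16.6ms (60Hz) period for
 * the calling thread.  The numbers are illustrative, not tuned.
 */
static int set_deadline_60hz(void)
{
	struct sched_attr attr = {
		.size           = sizeof(attr),
		.sched_policy   = SCHED_DEADLINE,
		.sched_runtime  =  2 * 1000 * 1000,	/* ns */
		.sched_deadline = 16 * 1000 * 1000,
		.sched_period   = 16 * 1000 * 1000,
	};

	return syscall(SYS_sched_setattr, 0, &attr, 0);
}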

BR,
-R

> Haven't worked with display pipelines before, so hopefully this makes sense :-)
>
> Thanks
>
> --
> Qais Yousef
>
> >
> > To handle this, convert non-blocking commit_work() to use per-CRTC
> > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > used to avoid serializing commits when userspace is using a per-CRTC
> > update loop.  And the last patch exposes the task id to userspace as
> > a CRTC property, so that userspace can adjust the priority and sched
> > policy to fit its needs.
> >
> >
> > v2: Drop client cap and in-kernel setting of priority/policy in
> >     favor of exposing the kworker tid to userspace so that user-
> >     space can set priority/policy.
> >
> > Rob Clark (3):
> >   drm/crtc: Introduce per-crtc kworker
> >   drm/atomic: Use kthread worker for nonblocking commits
> >   drm: Expose CRTC's kworker task id
> >
> >  drivers/gpu/drm/drm_atomic_helper.c | 13 ++++++++----
> >  drivers/gpu/drm/drm_crtc.c          | 14 +++++++++++++
> >  drivers/gpu/drm/drm_mode_config.c   | 14 +++++++++++++
> >  drivers/gpu/drm/drm_mode_object.c   |  4 ++++
> >  include/drm/drm_atomic.h            | 31 +++++++++++++++++++++++++++++
> >  include/drm/drm_crtc.h              |  8 ++++++++
> >  include/drm/drm_mode_config.h       |  9 +++++++++
> >  include/drm/drm_property.h          |  9 +++++++++
> >  8 files changed, 98 insertions(+), 4 deletions(-)
> >
> > --
> > 2.26.2
> >


* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-02 17:55           ` Rob Clark
@ 2020-10-05 12:15             ` Ville Syrjälä
  2020-10-05 14:15               ` Daniel Vetter
  2020-10-07 16:44               ` Rob Clark
  0 siblings, 2 replies; 28+ messages in thread
From: Ville Syrjälä @ 2020-10-05 12:15 UTC (permalink / raw)
  To: Rob Clark
  Cc: Daniel Vetter, Rob Clark, linux-arm-msm, open list, Tim Murray,
	dri-devel, Tejun Heo, Qais Yousef

On Fri, Oct 02, 2020 at 10:55:52AM -0700, Rob Clark wrote:
> On Fri, Oct 2, 2020 at 4:05 AM Ville Syrjälä
> <ville.syrjala@linux.intel.com> wrote:
> >
> > On Fri, Oct 02, 2020 at 01:52:56PM +0300, Ville Syrjälä wrote:
> > > On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> > > > On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > On Thu, Oct 1, 2020 at 12:25 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > >
> > > > > > On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >
> > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > >
> > > > > > > The android userspace treats the display pipeline as a realtime problem.
> > > > > > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > > > > > it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> > > > > > > that I found.)
> > > > > > >
> > > > > > > But this presents a problem with using workqueues for non-blocking
> > > > > > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > > > > > preempt the worker.  Which is not really the outcome you want.. once
> > > > > > > the required fences are scheduled, you want to push the atomic commit
> > > > > > > down to hw ASAP.
> > > > > > >
> > > > > > > But the decision of whether commit_work should be RT or not really
> > > > > > > depends on what userspace is doing.  For a pure CFS userspace display
> > > > > > > pipeline, commit_work() should remain SCHED_NORMAL.
> > > > > > >
> > > > > > > To handle this, convert non-blocking commit_work() to use per-CRTC
> > > > > > > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > > > > > > used to avoid serializing commits when userspace is using a per-CRTC
> > > > > > > update loop.  And the last patch exposes the task id to userspace as
> > > > > > > a CRTC property, so that userspace can adjust the priority and sched
> > > > > > > policy to fit its needs.
> > > > > > >
> > > > > > >
> > > > > > > v2: Drop client cap and in-kernel setting of priority/policy in
> > > > > > >     favor of exposing the kworker tid to userspace so that user-
> > > > > > >     space can set priority/policy.
> > > > > >
> > > > > > Yeah I think this looks more reasonable. Still a bit irky interface,
> > > > > > so I'd like to get some kworker/rt ack on this. Other opens:
> > > > > > - needs userspace, the usual drill
> > > > >
> > > > > fwiw, right now the userspace is "modetest + chrt".. *probably* the
> > > > > userspace will become a standalone helper or daemon, mostly because
> > > > > the chrome gpu-process sandbox does not allow setting SCHED_FIFO.  I'm
> > > > > still entertaining the possibility of switching between rt and cfs
> > > > > depending on what is in the foreground (ie. only do rt for android
> > > > > apps).
> > > > >
> > > > > > - we need this also for vblank workers, otherwise this won't work for
> > > > > > drivers needing those because of another priority inversion.
> > > > >
> > > > > I have a thought on that, see below..
> > > >
> > > > Hm, not seeing anything about vblank worker below?
> > > >
> > > > > > - we probably want some indication of whether this actually does
> > > > > > something useful, not all drivers use atomic commit helpers. Not sure
> > > > > > how to do that.
> > > > >
> > > > > I'm leaning towards converting the other drivers over to use the
> > > > > per-crtc kwork, and then dropping the `commit_work` from atomic state.
> > > > > I can add a patch for that, but figured I could postpone that churn
> > > > > until there is some buy-in on this whole idea.
> > > >
> > > > i915 has its own commit code, it's not even using the current commit
> > > > helpers (nor the commit_work). Not sure how much other fun there is.
> > >
> > > I don't think we want per-crtc threads for this in i915. Seems
> > > to me easier to guarantee atomicity across multiple crtcs if
> > > we just commit them from the same thread.
> >
> > Oh, and we may have to commit things in a very specific order
> > to guarantee the hw doesn't fall over, so yeah definitely per-crtc
> > thread is a no go.
> 
> If I'm understanding the i915 code, this is only the case for modeset
> commits?  I suppose we could achieve the same result by just deciding
> to pick the kthread of the first CRTC for modeset commits.  I'm not
> really so much concerned about parallelism for modeset.

I'm not entirely happy about the random differences between modesets
and other commits. Ideally we wouldn't need any.

Anyways, even if we ignore modesets we still have the issue with
atomicity guarantees across multiple crtcs. So I think we still
don't want per-crtc threads, rather it should be a thread for each
commit.

Well, if the crtcs aren't running in lockstep then maybe we could
shove them off to separate threads, but that'll just complicate things
needlessly I think since we'd need yet another way to iterate
the crtcs in each thread. With the thread-per-commit approach we
can just use the normal atomic iterators.
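
To illustrate, a minimal sketch of the thread-per-commit idea (illustrative
only, not code from this series; commit_tail() stands in for whatever body
the workqueue-based commit_work runs today):

#include <linux/kthread.h>

static int commit_thread_fn(void *data)
{
	struct drm_atomic_state *state = data;

	/* same body the workqueue-based commit_work() runs today */
	commit_tail(state);
	return 0;
}

static void queue_commit_thread(struct drm_atomic_state *state)
{
	struct task_struct *task;

	/* one short-lived kernel thread per commit: atomicity across all
	 * crtcs in the state comes for free, and the normal atomic
	 * iterators can be used unchanged */
	task = kthread_run(commit_thread_fn, state, "drm-commit");
	if (IS_ERR(task))
		commit_tail(state);	/* fall back to committing inline */
}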

> 
> > I don't even understand the serialization argument. If the commits
> > are truly independent then why isn't the unbound wq enough to avoid
> > the serialization? It should just spin up a new thread for each commit
> > no?
> 
> The problem with wq is prioritization and SCHED_FIFO userspace
> components stomping on the feet of commit_work. That is the entire
> motivation of this series in the first place, so no we cannot use
> unbound wq.

This is a bit of déjà vu from the vblank worker discussion, where I actually
did want a per-crtc RT kthread but people weren't convinced they
actually help. The difference is that for vblank workers we actually
tried to get some numbers; here I've not seen any.

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-05 12:15             ` Ville Syrjälä
@ 2020-10-05 14:15               ` Daniel Vetter
  2020-10-05 22:58                 ` Rob Clark
  2020-10-07 16:44               ` Rob Clark
  1 sibling, 1 reply; 28+ messages in thread
From: Daniel Vetter @ 2020-10-05 14:15 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Rob Clark, Daniel Vetter, Rob Clark, linux-arm-msm, open list,
	Tim Murray, dri-devel, Tejun Heo, Qais Yousef

On Mon, Oct 05, 2020 at 03:15:24PM +0300, Ville Syrjälä wrote:
> On Fri, Oct 02, 2020 at 10:55:52AM -0700, Rob Clark wrote:
> > On Fri, Oct 2, 2020 at 4:05 AM Ville Syrjälä
> > <ville.syrjala@linux.intel.com> wrote:
> > >
> > > On Fri, Oct 02, 2020 at 01:52:56PM +0300, Ville Syrjälä wrote:
> > > > On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> > > > > On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > >
> > > > > > On Thu, Oct 1, 2020 at 12:25 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > > >
> > > > > > > On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > >
> > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > >
> > > > > > > > The android userspace treats the display pipeline as a realtime problem.
> > > > > > > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > > > > > > it is.  (See https://lwn.net/Articles/809545/ for the best explaination
> > > > > > > > that I found.)
> > > > > > > >
> > > > > > > > But this presents a problem with using workqueues for non-blocking
> > > > > > > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > > > > > > preempt the worker.  Which is not really the outcome you want.. once
> > > > > > > > the required fences are scheduled, you want to push the atomic commit
> > > > > > > > down to hw ASAP.
> > > > > > > >
> > > > > > > > But the decision of whether commit_work should be RT or not really
> > > > > > > > depends on what userspace is doing.  For a pure CFS userspace display
> > > > > > > > pipeline, commit_work() should remain SCHED_NORMAL.
> > > > > > > >
> > > > > > > > To handle this, convert non-blocking commit_work() to use per-CRTC
> > > > > > > > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > > > > > > > used to avoid serializing commits when userspace is using a per-CRTC
> > > > > > > > update loop.  And the last patch exposes the task id to userspace as
> > > > > > > > a CRTC property, so that userspace can adjust the priority and sched
> > > > > > > > policy to fit it's needs.
> > > > > > > >
> > > > > > > >
> > > > > > > > v2: Drop client cap and in-kernel setting of priority/policy in
> > > > > > > >     favor of exposing the kworker tid to userspace so that user-
> > > > > > > >     space can set priority/policy.
> > > > > > >
> > > > > > > Yeah I think this looks more reasonable. Still a bit irky interface,
> > > > > > > so I'd like to get some kworker/rt ack on this. Other opens:
> > > > > > > - needs userspace, the usual drill
> > > > > >
> > > > > > fwiw, right now the userspace is "modetest + chrt".. *probably* the
> > > > > > userspace will become a standalone helper or daemon, mostly because
> > > > > > the chrome gpu-process sandbox does not allow setting SCHED_FIFO.  I'm
> > > > > > still entertaining the possibility of switching between rt and cfs
> > > > > > depending on what is in the foreground (ie. only do rt for android
> > > > > > apps).
> > > > > >
> > > > > > > - we need this also for vblank workers, otherwise this won't work for
> > > > > > > drivers needing those because of another priority inversion.
> > > > > >
> > > > > > I have a thought on that, see below..
> > > > >
> > > > > Hm, not seeing anything about vblank worker below?
> > > > >
> > > > > > > - we probably want some indication of whether this actually does
> > > > > > > something useful, not all drivers use atomic commit helpers. Not sure
> > > > > > > how to do that.
> > > > > >
> > > > > > I'm leaning towards converting the other drivers over to use the
> > > > > > per-crtc kwork, and then dropping the `commit_work` from atomic state.
> > > > > > I can add a patch to that, but figured I could postpone that churn
> > > > > > until there is some buy-in on this whole idea.
> > > > >
> > > > > i915 has its own commit code, it's not even using the current commit
> > > > > helpers (nor the commit_work). Not sure how much other fun there is.
> > > >
> > > > I don't think we want per-crtc threads for this in i915. Seems
> > > > to me easier to guarantee atomicity across multiple crtcs if
> > > > we just commit them from the same thread.
> > >
> > > Oh, and we may have to commit things in a very specific order
> > > to guarantee the hw doesn't fall over, so yeah definitely per-crtc
> > > thread is a no go.
> > 
> > If I'm understanding the i915 code, this is only the case for modeset
> > commits?  I suppose we could achieve the same result by just deciding
> > to pick the kthread of the first CRTC for modeset commits.  I'm not
> > really so much concerned about parallelism for modeset.
> 
> I'm not entirely happy about the random differences between modesets
> and other commits. Ideally we wouldn't need any.
> 
> Anyways, even if we ignore modesets we still have the issue with
> atomicity guarantees across multiple crtcs. So I think we still
> don't want per-crtc threads, rather it should be thread for each 
> commit.
> 
> Well, if the crtcs aren't running in lockstep then maybe we could
> shove them off to separate threads, but that'll just complicate things
> needlessly I think since we'd need yet another way to iterate
> the crtcs in each thread. With the thread-per-commit approach we
> can just use the normal atomic iterators.
> 
> > 
> > > I don't even understand the serialization argument. If the commits
> > > are truly independent then why isn't the unbound wq enough to avoid
> > > the serialization? It should just spin up a new thread for each commit
> > > no?
> > 
> > The problem with wq is prioritization and SCHED_FIFO userspace
> > components stomping on the feet of commit_work. That is the entire
> > motivation of this series in the first place, so no we cannot use
> > unbound wq.
> 
> This is a bit of déjà vu from the vblank worker discussion, where I actually
> did want a per-crtc RT kthread but people weren't convinced they
> actually help. The difference is that for vblank workers we actually
> tried to get some numbers, here I've not seen any.

The problem here is priority inversion, not latency: Android runs
surfaceflinger as SCHED_FIFO, so when surfaceflinger does something it
can preempt the kernel's commit work, and we miss a frame. Apparently
the soft-rt of just having a normal worker (with maybe elevated
niceness) is otherwise good enough.
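
(For context, the mechanism in this series boils down to roughly the
following sketch; names like commit_worker and commit_work_fn() are
illustrative stand-ins, not the series' actual identifiers:)

#include <linux/kthread.h>

/* one kthread worker per CRTC, created at init time */
int drm_crtc_create_commit_worker(struct drm_crtc *crtc)
{
	crtc->commit_worker = kthread_create_worker(0, "crtc%d-commit",
						    drm_crtc_index(crtc));
	return PTR_ERR_OR_ZERO(crtc->commit_worker);
}

/* at nonblocking-commit time, instead of queueing on system_unbound_wq */
static void queue_nonblocking_commit(struct drm_crtc *crtc,
				     struct drm_atomic_state *state)
{
	kthread_init_work(&state->commit_work, commit_work_fn);
	kthread_queue_work(crtc->commit_worker, &state->commit_work);
}

/* crtc->commit_worker->task is the thread whose tid would be exposed to
 * userspace so that priority/policy can be adjusted from there */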

Aside: I just double-checked, and vblank work has a per-crtc kthread.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-02 18:07   ` Rob Clark
@ 2020-10-05 15:00     ` Qais Yousef
  2020-10-05 23:24       ` Rob Clark
  0 siblings, 1 reply; 28+ messages in thread
From: Qais Yousef @ 2020-10-05 15:00 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter,
	Rob Clark, open list, Steven Rostedt, Peter Zijlstra (Intel)

+CC Steve and Peter - they might be interested.

On 10/02/20 11:07, Rob Clark wrote:
> On Fri, Oct 2, 2020 at 4:01 AM Qais Yousef <qais.yousef@arm.com> wrote:
> >
> > On 09/30/20 14:17, Rob Clark wrote:
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > The android userspace treats the display pipeline as a realtime problem.
> > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > it is.  (See https://lwn.net/Articles/809545/ for the best explaination
> > > that I found.)
> > >
> > > But this presents a problem with using workqueues for non-blocking
> > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > preempt the worker.  Which is not really the outcome you want.. once
> > > the required fences are scheduled, you want to push the atomic commit
> > > down to hw ASAP.
> >
> > For me these 2 properties
> >
> >         1. Run ASAP
> >         2. Finish the work un-interrupted
> >
> > Scream the workers need to be SCHED_FIFO by default. CFS can't give you these
> > guarantees.
> 
> fwiw, commit_work does sleep/block for some time until fences are
> signalled, but then once that happens we want it to run ASAP,
> preempting lower priority SCHED_FIFO.
> 
> >
> > IMO using sched_set_fifo() for these workers is the right thing.
> >
> 
> Possibly, but we still have limited prioritization options (ie. not
> enough) to set these from the kernel.  Giving userspace the control,
> so it can pick sensible priorities for commit_work and vblank_work,
> which fits in with the priorities of the other userspace threads seems
> like the sensible thing.

The problem is that the kernel can run on all types of systems. It's impossible
to pick one value that fits all. Userspace must manage these priorities, and
you can still export the TID to help with that.
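
For example, the equivalent of "chrt -f -p" in C, with the priority value
left to the caller (a sketch):

#include <sched.h>
#include <sys/types.h>

/* make the exported kworker tid SCHED_FIFO at a priority userspace picks;
 * sched_setscheduler() acts on a single thread when given a tid */
static int make_fifo(pid_t tid, int prio)
{
	struct sched_param param = { .sched_priority = prio };

	return sched_setscheduler(tid, SCHED_FIFO, &param);
}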

But why do you need several priorities in your pipeline? I would have thought
it should execute each stage sequentially, and that all tasks running at the
same RT priority would be fine.

On SMP priorities matter once you've overcommitted the system. You need to have
more RT tasks running than CPUs for priorities to matter. It seems you have
a high count of RT tasks in your system?

I did some profiles on Android and found that being overcommitted is hard. But
that was a while ago.

> 
> > >
> > > But the decision of whether commit_work should be RT or not really
> > > depends on what userspace is doing.  For a pure CFS userspace display
> > > pipeline, commit_work() should remain SCHED_NORMAL.
> >
> > I'm not sure I agree with this. I think it's better to characterize tasks based
> > on their properties/requirements rather than what the rest of the userspace is
> > using.
> 
> I mean, the issue is that userspace is already using a few different
> rt priority levels for different SF threads.  We want commit_work to

Why are they at different priorities? Different priority levels mean that some
of them have more urgent deadlines to meet and it's okay to steal execution
time from lower priority tasks. Is this the case?

RT planning and partitioning is not an easy task for sure. You might want to
consider using affinities too to get stronger guarantees for some tasks and
prevent cross-talk.

> run ASAP once fences are signalled, and vblank_work to run at a
> slightly higher priority still.  But the correct choice for priorities
> here depends on what userspace is using, it all needs to fit together
> properly.

By userspace here I think you mean non-display-pipeline-related RT tasks that
you need to coexist with and could still disrupt your pipeline?

Using RT on a General Purpose System is hard for sure. One of the major
challenges is that there's no admin that has full view of the system to do
proper RT planning.

We need a proper RT balancer daemon that helps partition the system for
multiple RT apps on these systems.

> 
> >
> > I do appreciate that maybe some of these tasks have varying requirements during
> > their life time. e.g: they have RT property during specific critical section
> > but otherwise are CFS tasks. I think the UI thread in Android behaves like
> > that.
> >
> > It's worth IMO trying that approach I pointed out earlier to see if making RT
> > try to pick an idle CPU rather than preempt CFS helps. Not sure if it'd be
> > accepted but IMHO it's a better direction to consider and discuss.
> 
> The problem I was seeing was actually the opposite..  commit_work
> becomes runnable (fences signalled) but doesn't get a chance to run
> because a SCHED_FIFO SF thread is running.  (Maybe I misunderstood and
> your approach would help this case too?)

Ah okay. Sorry I got it the wrong way around for some reason. I thought this
task is preempting other CFS-based pipelined tasks.

So your system seems to be overcommitted. Is SF short for SurfaceFlinger? Under
what scenarios do you have many SurfaceFlinger tasks? On Android I remember
seeing they have priority of 1 or 2.

sched_set_fifo() will use priority 50. If you set all your pipeline tasks
to this priority, what happens?
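
(In kernel terms that suggestion is a one-liner on the worker task; a
sketch, assuming a per-CRTC kthread worker as in this series:)

	/* pins the commit worker at the default RT priority (50); no
	 * tuning knob, but also nothing for userspace to manage */
	sched_set_fifo(crtc->commit_worker->task);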

> 
> > Or maybe you can wrap userspace pipeline critical section lock such that any
> > task holding it will automatically be promoted to SCHED_FIFO and then demoted
> > to CFS once it releases it.
> 
> The SCHED_DEADLINE + token passing approach that the lwn article
> mentioned sounds interesting, if that eventually becomes possible.
> But doesn't really help today..

We were present in the room with Alessio when he gave that talk :-)

You might have seen Valentin's talk in LPC where he's trying to get
proxy-execution into shape. Which is a prerequisite to enabling the use of
SCHED_DEADLINE for these scenarios. IIRC it should allow all dependent tasks to
run from the context of the deadline task during the display pipeline critical
section.

By the way, do you have issues with SoftIrqs delaying your RT tasks' execution
time?

Thanks

--
Qais Yousef

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-05 14:15               ` Daniel Vetter
@ 2020-10-05 22:58                 ` Rob Clark
  0 siblings, 0 replies; 28+ messages in thread
From: Rob Clark @ 2020-10-05 22:58 UTC (permalink / raw)
  To: Ville Syrjälä,
	Rob Clark, Rob Clark, linux-arm-msm, open list, Tim Murray,
	dri-devel, Tejun Heo, Qais Yousef
  Cc: Daniel Vetter

On Mon, Oct 5, 2020 at 7:15 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Mon, Oct 05, 2020 at 03:15:24PM +0300, Ville Syrjälä wrote:
> > On Fri, Oct 02, 2020 at 10:55:52AM -0700, Rob Clark wrote:
> > > On Fri, Oct 2, 2020 at 4:05 AM Ville Syrjälä
> > > <ville.syrjala@linux.intel.com> wrote:
> > > >
> > > > On Fri, Oct 02, 2020 at 01:52:56PM +0300, Ville Syrjälä wrote:
> > > > > On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> > > > > > On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >
> > > > > > > On Thu, Oct 1, 2020 at 12:25 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > > > >
> > > > > > > > On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > >
> > > > > > > > > The android userspace treats the display pipeline as a realtime problem.
> > > > > > > > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > > > > > > > it is.  (See https://lwn.net/Articles/809545/ for the best explaination
> > > > > > > > > that I found.)
> > > > > > > > >
> > > > > > > > > But this presents a problem with using workqueues for non-blocking
> > > > > > > > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > > > > > > > preempt the worker.  Which is not really the outcome you want.. once
> > > > > > > > > the required fences are scheduled, you want to push the atomic commit
> > > > > > > > > down to hw ASAP.
> > > > > > > > >
> > > > > > > > > But the decision of whether commit_work should be RT or not really
> > > > > > > > > depends on what userspace is doing.  For a pure CFS userspace display
> > > > > > > > > pipeline, commit_work() should remain SCHED_NORMAL.
> > > > > > > > >
> > > > > > > > > To handle this, convert non-blocking commit_work() to use per-CRTC
> > > > > > > > > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > > > > > > > > used to avoid serializing commits when userspace is using a per-CRTC
> > > > > > > > > update loop.  And the last patch exposes the task id to userspace as
> > > > > > > > > a CRTC property, so that userspace can adjust the priority and sched
> > > > > > > > > policy to fit it's needs.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > v2: Drop client cap and in-kernel setting of priority/policy in
> > > > > > > > >     favor of exposing the kworker tid to userspace so that user-
> > > > > > > > >     space can set priority/policy.
> > > > > > > >
> > > > > > > > Yeah I think this looks more reasonable. Still a bit irky interface,
> > > > > > > > so I'd like to get some kworker/rt ack on this. Other opens:
> > > > > > > > - needs userspace, the usual drill
> > > > > > >
> > > > > > > fwiw, right now the userspace is "modetest + chrt".. *probably* the
> > > > > > > userspace will become a standalone helper or daemon, mostly because
> > > > > > > the chrome gpu-process sandbox does not allow setting SCHED_FIFO.  I'm
> > > > > > > still entertaining the possibility of switching between rt and cfs
> > > > > > > depending on what is in the foreground (ie. only do rt for android
> > > > > > > apps).
> > > > > > >
> > > > > > > > - we need this also for vblank workers, otherwise this won't work for
> > > > > > > > drivers needing those because of another priority inversion.
> > > > > > >
> > > > > > > I have a thought on that, see below..
> > > > > >
> > > > > > Hm, not seeing anything about vblank worker below?
> > > > > >
> > > > > > > > - we probably want some indication of whether this actually does
> > > > > > > > something useful, not all drivers use atomic commit helpers. Not sure
> > > > > > > > how to do that.
> > > > > > >
> > > > > > > I'm leaning towards converting the other drivers over to use the
> > > > > > > per-crtc kwork, and then dropping the `commit_work` from atomic state.
> > > > > > > I can add a patch to that, but figured I could postpone that churn
> > > > > > > until there is some buy-in on this whole idea.
> > > > > >
> > > > > > i915 has its own commit code, it's not even using the current commit
> > > > > > helpers (nor the commit_work). Not sure how much other fun there is.
> > > > >
> > > > > I don't think we want per-crtc threads for this in i915. Seems
> > > > > to me easier to guarantee atomicity across multiple crtcs if
> > > > > we just commit them from the same thread.
> > > >
> > > > Oh, and we may have to commit things in a very specific order
> > > > to guarantee the hw doesn't fall over, so yeah definitely per-crtc
> > > > thread is a no go.
> > >
> > > If I'm understanding the i915 code, this is only the case for modeset
> > > commits?  I suppose we could achieve the same result by just deciding
> > > to pick the kthread of the first CRTC for modeset commits.  I'm not
> > > really so much concerned about parallelism for modeset.
> >
> > I'm not entirely happy about the random differences between modesets
> > and other commits. Ideally we wouldn't need any.
> >
> > Anyways, even if we ignore modesets we still have the issue with
> > atomicity guarantees across multiple crtcs. So I think we still
> > don't want per-crtc threads, rather it should be thread for each
> > commit.
> >
> > Well, if the crtcs aren't running in lockstep then maybe we could
> > shove them off to separate threads, but that'll just complicate things
> > needlessly I think since we'd need yet another way to iterate
> > the crtcs in each thread. With the thread-per-commit approach we
> > can just use the normal atomic iterators.
> >
> > >
> > > > I don't even understand the serialization argument. If the commits
> > > > are truly independent then why isn't the unbound wq enough to avoid
> > > > the serialization? It should just spin up a new thread for each commit
> > > > no?
> > >
> > > The problem with wq is prioritization and SCHED_FIFO userspace
> > > components stomping on the feet of commit_work. That is the entire
> > > motivation of this series in the first place, so no we cannot use
> > > unbound wq.
> >
> > This is a bit of déjà vu from the vblank worker discussion, where I actually
> > did want a per-crtc RT kthread but people weren't convinced they
> > actually help. The difference is that for vblank workers we actually
> > tried to get some numbers, here I've not seen any.
>
> The problem here is priority inversion, not latency: Android runs
> surface-flinger as SCHED_FIFO, so when surfaceflinger does something it
> can preempt the kernel's commit work, and we miss a frame. Apparently
> otherwise the soft-rt of just having a normal worker (with maybe elevated
> niceness) seems nice enough.

yes, exactly, this is about priority inversion.

Not sure if this is clear (you can't really fit all the relevant parts
of the trace in one screenshot), but here is an example of commit_work
preempted by SF EventThread and missing a deadline:

https://usercontent.irccloud-cdn.com/file/Awgp8Sdj/image.png

BR,
-R

>
> Aside: I just double-checked, and vblank work has a per-crtc kthread.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-05 15:00     ` Qais Yousef
@ 2020-10-05 23:24       ` Rob Clark
  2020-10-06  9:08         ` Daniel Vetter
  2020-10-06 10:59         ` Qais Yousef
  0 siblings, 2 replies; 28+ messages in thread
From: Rob Clark @ 2020-10-05 23:24 UTC (permalink / raw)
  To: Qais Yousef
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter,
	Rob Clark, open list, Steven Rostedt, Peter Zijlstra (Intel)

On Mon, Oct 5, 2020 at 8:00 AM Qais Yousef <qais.yousef@arm.com> wrote:
>
> +CC Steve and Peter - they might be interested.
>
> On 10/02/20 11:07, Rob Clark wrote:
> > On Fri, Oct 2, 2020 at 4:01 AM Qais Yousef <qais.yousef@arm.com> wrote:
> > >
> > > On 09/30/20 14:17, Rob Clark wrote:
> > > > From: Rob Clark <robdclark@chromium.org>
> > > >
> > > > The android userspace treats the display pipeline as a realtime problem.
> > > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > > it is.  (See https://lwn.net/Articles/809545/ for the best explaination
> > > > that I found.)
> > > >
> > > > But this presents a problem with using workqueues for non-blocking
> > > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > > preempt the worker.  Which is not really the outcome you want.. once
> > > > the required fences are scheduled, you want to push the atomic commit
> > > > down to hw ASAP.
> > >
> > > For me these 2 properties
> > >
> > >         1. Run ASAP
> > >         2. Finish the work un-interrupted
> > >
> > > Scream the workers need to be SCHED_FIFO by default. CFS can't give you these
> > > guarantees.
> >
> > fwiw, commit_work does sleep/block for some time until fences are
> > signalled, but then once that happens we want it to run ASAP,
> > preempting lower priority SCHED_FIFO.
> >
> > >
> > > IMO using sched_set_fifo() for these workers is the right thing.
> > >
> >
> > Possibly, but we still have limited prioritization options (ie. not
> > enough) to set these from the kernel.  Giving userspace the control,
> > so it can pick sensible priorities for commit_work and vblank_work,
> > which fits in with the priorities of the other userspace threads seems
> > like the sensible thing.
>
> The problem is that the kernel can run on all types of systems. It's impossible
> to pick one value that fits all. Userspace must manage these priorities, and
> you can still export the TID to help with that.
>
> But why do you need several priorities in your pipeline? I would have thought
> it should execute each stage sequentially and all tasks running at the same RT
> priority is fine.

On the kernel side, vblank work should complete during the vblank
period, making it a harder real time requirement.  So the thinking is
this should be a higher priority.

But you are right, if you aren't overcommitted it probably doesn't matter.

> On SMP priorities matter once you've overcommitted the system. You need to have
> more RT tasks running than CPUs for priorities to matter. It seems you have
> a high count of RT tasks in your system?
>
> I did some profiles on Android and found that being overcommitted is hard. But
> that was a while ago.
>
> >
> > > >
> > > > But the decision of whether commit_work should be RT or not really
> > > > depends on what userspace is doing.  For a pure CFS userspace display
> > > > pipeline, commit_work() should remain SCHED_NORMAL.
> > >
> > > I'm not sure I agree with this. I think it's better to characterize tasks based
> > > on their properties/requirements rather than what the rest of the userspace is
> > > using.
> >
> > I mean, the issue is that userspace is already using a few different
> > rt priority levels for different SF threads.  We want commit_work to
>
> Why are they at different priorities? Different priority levels means that some
> of them have more urgent deadlines to meet and it's okay to steal execution
> time from lower priority tasks. Is this the case?

tbh, I'm not fully aware of the background.  It looks like most of the
SF threads run at priority=2 (100-2==98), and the main one runs at
priority=1

> RT planning and partitioning is not easy task for sure. You might want to
> consider using affinities too to get stronger guarantees for some tasks and
> prevent cross-talking.

There is some cgroup stuff that is pinning SF and some other stuff to
the small cores, fwiw.. I think the reasoning is that they shouldn't
be doing anything heavy enough to need the big cores.

> > run ASAP once fences are signalled, and vblank_work to run at a
> > slightly higher priority still.  But the correct choice for priorities
> > here depends on what userspace is using, it all needs to fit together
> > properly.
>
> By userspace here I think you mean non-display-pipeline-related RT tasks that
> you need to coexist with and could still disrupt your pipeline?

I mean, commit_work should be higher priority than the other (display
related) RT tasks.  But the kernel doesn't know what those priorities
are.

> Using RT on a General Purpose System is hard for sure. One of the major
> challenges is that there's no admin that has full view of the system to do
> proper RT planning.
>
> We need proper RT balancer daemon that helps partitioning the system for
> multiple RT apps on these systems..
>
> >
> > >
> > > I do appreciate that maybe some of these tasks have varying requirements during
> > > their life time. e.g: they have RT property during specific critical section
> > > but otherwise are CFS tasks. I think the UI thread in Android behaves like
> > > that.
> > >
> > > It's worth IMO trying that approach I pointed out earlier to see if making RT
> > > try to pick an idle CPU rather than preempt CFS helps. Not sure if it'd be
> > > accepted but IMHO it's a better direction to consider and discuss.
> >
> > The problem I was seeing was actually the opposite..  commit_work
> > becomes runnable (fences signalled) but doesn't get a chance to run
> > because a SCHED_FIFO SF thread is running.  (Maybe I misunderstood and
> > your approach would help this case too?)
>
> Ah okay. Sorry I got it the wrong way around for some reason. I thought this
> task is preempting other CFS-based pipelined tasks.
>
> So your system seems to be overcommitted. Is SF short for SurfaceFlinger? Under
> what scenarios do you have many SurfaceFlinger tasks? On Android I remember
> seeing they have priority of 1 or 2.

yeah, SF==SurfaceFlinger, and yeah, 1 and 2..

> sched_set_fifo() will use priority 50. If you set all your pipeline tasks
> to this priority, what happens?

I think this would work.. drm/msm doesn't use vblank work, so I
wouldn't really have problems with commit_work preempting vblank_work.
But I think the best option (and to handle the case if android changes
the RT priorities around in the future) is to let userspace set the
priorities.
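
As a sketch of that userspace side (the property name "KWORK_TID" is a
placeholder, not necessarily what the series calls it), the tid can be
read back with libdrm and then handed to chrt or sched_setscheduler():

#include <string.h>
#include <sys/types.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* scan the CRTC's properties for the exposed kworker tid */
static pid_t crtc_worker_tid(int fd, uint32_t crtc_id)
{
	drmModeObjectProperties *props;
	pid_t tid = -1;
	uint32_t i;

	props = drmModeObjectGetProperties(fd, crtc_id, DRM_MODE_OBJECT_CRTC);
	if (!props)
		return -1;

	for (i = 0; i < props->count_props; i++) {
		drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);

		if (prop && !strcmp(prop->name, "KWORK_TID"))
			tid = (pid_t)props->prop_values[i];
		drmModeFreeProperty(prop);
	}
	drmModeFreeObjectProperties(props);
	return tid;
}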

> >
> > > Or maybe you can wrap userspace pipeline critical section lock such that any
> > > task holding it will automatically be promoted to SCHED_FIFO and then demoted
> > > to CFS once it releases it.
> >
> > The SCHED_DEADLINE + token passing approach that the lwn article
> > mentioned sounds interesting, if that eventually becomes possible.
> > But doesn't really help today..
>
> We were present in the room with Alessio when he gave that talk :-)
>
> You might have seen Valentin's talk in LPC where he's trying to get
> proxy-execution into shape. Which is a pre-requisite to enable using of
> SCHED_DEADLINE for these scenarios. IIRC it should allow all dependent tasks to
> run from the context of the deadline task during the display pipeline critical
> section.
>
> By the way, do you have issues with SoftIrqs delaying your RT tasks execution
> time?

I don't *think* so, but I'm not 100% sure if they are showing up in
traces.  So far it seems like SF stomping on commit_work.  (There is
the added complication that there are some chrome gpu-process tasks in
between SF and the display, including CrGpuMain (which really doesn't
want to be SCHED_FIFO when executing gl commands on behalf of
something unrelated to the compositor). The deadline approach, IIUC,
might be the better option eventually for this?)

BR,
-R

>
> Thanks
>
> --
> Qais Yousef

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-05 23:24       ` Rob Clark
@ 2020-10-06  9:08         ` Daniel Vetter
  2020-10-06 10:01           ` Peter Zijlstra
  2020-10-06 10:59         ` Qais Yousef
  1 sibling, 1 reply; 28+ messages in thread
From: Daniel Vetter @ 2020-10-06  9:08 UTC (permalink / raw)
  To: Rob Clark
  Cc: Qais Yousef, dri-devel, linux-arm-msm, Tejun Heo, Tim Murray,
	Daniel Vetter, Rob Clark, open list, Steven Rostedt,
	Peter Zijlstra (Intel)

On Mon, Oct 05, 2020 at 04:24:38PM -0700, Rob Clark wrote:
> On Mon, Oct 5, 2020 at 8:00 AM Qais Yousef <qais.yousef@arm.com> wrote:
> >
> > +CC Steve and Peter - they might be interested.
> >
> > On 10/02/20 11:07, Rob Clark wrote:
> > > On Fri, Oct 2, 2020 at 4:01 AM Qais Yousef <qais.yousef@arm.com> wrote:
> > > >
> > > > On 09/30/20 14:17, Rob Clark wrote:
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > The android userspace treats the display pipeline as a realtime problem.
> > > > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > > > it is.  (See https://lwn.net/Articles/809545/ for the best explaination
> > > > > that I found.)
> > > > >
> > > > > But this presents a problem with using workqueues for non-blocking
> > > > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > > > preempt the worker.  Which is not really the outcome you want.. once
> > > > > the required fences are scheduled, you want to push the atomic commit
> > > > > down to hw ASAP.
> > > >
> > > > For me these 2 properties
> > > >
> > > >         1. Run ASAP
> > > >         2. Finish the work un-interrupted
> > > >
> > > > Scream the workers need to be SCHED_FIFO by default. CFS can't give you these
> > > > guarantees.
> > >
> > > fwiw, commit_work does sleep/block for some time until fences are
> > > signalled, but then once that happens we want it to run ASAP,
> > > preempting lower priority SCHED_FIFO.
> > >
> > > >
> > > > IMO using sched_set_fifo() for these workers is the right thing.
> > > >
> > >
> > > Possibly, but we still have limited prioritization options (ie. not
> > > enough) to set these from the kernel.  Giving userspace the control,
> > > so it can pick sensible priorities for commit_work and vblank_work,
> > > which fits in with the priorities of the other userspace threads seems
> > > like the sensible thing.
> >
> > The problem is that the kernel can run on all types of systems. It's impossible
> > to pick one value that fits all. Userspace must manage these priorities, and
> > you can still export the TID to help with that.
> >
> > But why do you need several priorities in your pipeline? I would have thought
> > it should execute each stage sequentially and all tasks running at the same RT
> > priority is fine.
> 
> On the kernel side, vblank work should complete during the vblank
> period, making it a harder real time requirement.  So the thinking is
> this should be a higher priority.
> 
> But you are right, if you aren't overcommitted it probably doesn't matter.

vblank work needs to preempt commit work.

Right now we don't have any driver requiring this, but if we e.g. roll out
the gamma table update for i915, then this _has_ to happen in the vblank
period.

Whereas the commit work can happen in there, but it can also be delayed a
bit (until the vblank worker has finished); we will not miss any additional
deadline due to that.

So that's why we have 2 levels. I'm not even sure you can model that with
SCHED_DEADLINE, since essentially we need a few usec of cpu time every
vblank (16ms normally), but those few usec _must_ be scheduled within a
very specific time slot or we're toast. And that vblank period is only
1-2ms usually.

> > On SMP priorities matter once you've overcommitted the system. You need to have
> > more RT tasks running than CPUs for priorities to matter. It seems you have
> > a high count of RT tasks in your system?
> >
> > I did some profiles on Android and found that being overcommitted is hard. But
> > that was a while ago.
> >
> > >
> > > > >
> > > > > But the decision of whether commit_work should be RT or not really
> > > > > depends on what userspace is doing.  For a pure CFS userspace display
> > > > > pipeline, commit_work() should remain SCHED_NORMAL.
> > > >
> > > > I'm not sure I agree with this. I think it's better to characterize tasks based
> > > > on their properties/requirements rather than what the rest of the userspace is
> > > > using.
> > >
> > > I mean, the issue is that userspace is already using a few different
> > > rt priority levels for different SF threads.  We want commit_work to
> >
> > Why are they at different priorities? Different priority levels means that some
> > of them have more urgent deadlines to meet and it's okay to steal execution
> > time from lower priority tasks. Is this the case?
> 
> tbh, I'm not fully aware of the background.  It looks like most of the
> SF threads run at priority=2 (100-2==98), and the main one runs at
> priority=1
> 
> > RT planning and partitioning is not easy task for sure. You might want to
> > consider using affinities too to get stronger guarantees for some tasks and
> > prevent cross-talking.
> 
> There is some cgroup stuff that is pinning SF and some other stuff to
> the small cores, fwiw.. I think the reasoning is that they shouldn't
> be doing anything heavy enough to need the big cores.
> 
> > > run ASAP once fences are signalled, and vblank_work to run at a
> > > slightly higher priority still.  But the correct choice for priorities
> > > here depends on what userspace is using, it all needs to fit together
> > > properly.
> >
> > By userspace here I think you mean non-display-pipeline-related RT tasks that
> > you need to coexist with and could still disrupt your pipeline?
> 
> I mean, commit_work should be higher priority than the other (display
> related) RT tasks.  But the kernel doesn't know what those priorities
> are.
> 
> > Using RT on a General Purpose System is hard for sure. One of the major
> > challenges is that there's no admin that has full view of the system to do
> > proper RT planning.
> >
> > We need proper RT balancer daemon that helps partitioning the system for
> > multiple RT apps on these systems..
> >
> > >
> > > >
> > > > I do appreciate that maybe some of these tasks have varying requirements during
> > > > their life time. e.g: they have RT property during specific critical section
> > > > but otherwise are CFS tasks. I think the UI thread in Android behaves like
> > > > that.
> > > >
> > > > It's worth IMO trying that approach I pointed out earlier to see if making RT
> > > > try to pick an idle CPU rather than preempt CFS helps. Not sure if it'd be
> > > > accepted but IMHO it's a better direction to consider and discuss.
> > >
> > > The problem I was seeing was actually the opposite..  commit_work
> > > becomes runnable (fences signalled) but doesn't get a chance to run
> > > because a SCHED_FIFO SF thread is running.  (Maybe I misunderstood and
> > > your approach would help this case too?)
> >
> > Ah okay. Sorry I got it the wrong way around for some reason. I thought this
> > task is preempting other CFS-based pipelined tasks.
> >
> > So your system seems to be overcommitted. Is SF short for SurfaceFlinger? Under
> > what scenarios do you have many SurfaceFlinger tasks? On Android I remember
> > seeing they have priority of 1 or 2.
> 
> yeah, SF==SurfaceFlinger, and yeah, 1 and 2..
> 
> > sched_set_fifo() will use priority 50. If you set all your pipeline tasks
> > to this priority, what happens?
> 
> I think this would work.. drm/msm doesn't use vblank work, so I
> wouldn't really have problems with commit_work preempting vblank_work.
> But I think the best option (and to handle the case if android changes
> the RT priorities around in the future) is to let userspace set the
> priorities.
> 
> > >
> > > > Or maybe you can wrap userspace pipeline critical section lock such that any
> > > > task holding it will automatically be promoted to SCHED_FIFO and then demoted
> > > > to CFS once it releases it.
> > >
> > > The SCHED_DEADLINE + token passing approach that the lwn article
> > > mentioned sounds interesting, if that eventually becomes possible.
> > > But doesn't really help today..
> >
> > We were present in the room with Alessio when he gave that talk :-)
> >
> > You might have seen Valentin's talk in LPC where he's trying to get
> > proxy-execution into shape. Which is a pre-requisite to enable using of
> > SCHED_DEADLINE for these scenarios. IIRC it should allow all dependent tasks to
> > run from the context of the deadline task during the display pipeline critical
> > section.
> >
> > By the way, do you have issues with SoftIrqs delaying your RT tasks execution
> > time?
> 
> I don't *think* so, but I'm not 100% sure if they are showing up in
> traces.  So far it seems like SF stomping on commit_work.  (There is
> the added complication that there are some chrome gpu-process tasks in
> between SF and the display, including CrGpuMain (which really doesn't
> want to be SCHED_FIFO when executing gl commands on behalf of
> something unrelated to the compositor). The deadline approach, IIUC,
> might be the better option eventually for this?)

deadline has the upshot that it composes much better than SCHED_FIFO:
Everyone just drops their deadline requirements onto the scheduler,
scheduler makes sure it's all obeyed (or rejects your request).

The trouble is we'd need to know how long a commit takes, worst case, on a
given platform. And for that you need to measure stuff, and we kinda can't
spend a few minutes at boot-up going through the combinatorial maze of
atomic commits to make sure we have it all.

So I think in practice letting userspace set the right rt priority/mode is
the only way to go here :-/
-Daniel

> 
> BR,
> -R
> 
> >
> > Thanks
> >
> > --
> > Qais Yousef

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-06  9:08         ` Daniel Vetter
@ 2020-10-06 10:01           ` Peter Zijlstra
  0 siblings, 0 replies; 28+ messages in thread
From: Peter Zijlstra @ 2020-10-06 10:01 UTC (permalink / raw)
  To: Rob Clark, Qais Yousef, dri-devel, linux-arm-msm, Tejun Heo,
	Tim Murray, Rob Clark, open list, Steven Rostedt

On Tue, Oct 06, 2020 at 11:08:59AM +0200, Daniel Vetter wrote:
> vblank work needs to preempt commit work.
> 
> Right now we don't have any driver requiring this, but if we e.g. roll out
> the gamma table update for i915, then this _has_ to happen in the vblank
> period.
> 
> Whereas the commit work can happen in there, but it can also be delayed a
> bit (until the vblank worker has finished) we will not miss any additional
> deadline due to that.
> 
> So that's why we have 2 levels. I'm not even sure you can model that with
> SCHED_DEADLINE, since essentially we need a few usec of cpu time every
> vblank (16ms normally), but those few usec _must_ be scheduled within a
> very specific time slot or we're toast. And that vblank period is only
> 1-2ms usually.

Depends a bit on what the hardware gets us. If for example we're
provided an interrupt on vblank start, then that could wake a DEADLINE
job with (given your numbers above):

 .sched_period = 16ms,
 .sched_deadline = 1-2ms,
 .sched_runtime = 1-2ms,

The effective utilization of that task would be: 1-2/16.
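
Roughly, in sched_setattr() terms (a sketch; there is no glibc wrapper for
the syscall, and the runtime budget would need measuring per platform):

#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>

#define SCHED_DEADLINE	6	/* from include/uapi/linux/sched.h */

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	/* SCHED_DEADLINE parameters, in nanoseconds */
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

/* ~2ms of budget that must be consumed within 2ms of the period start
 * (the vblank interrupt), with a new period every ~16ms frame */
static int vblank_worker_set_deadline(void)
{
	struct sched_attr attr = {
		.size           = sizeof(attr),
		.sched_policy   = SCHED_DEADLINE,
		.sched_runtime  =  2000000ULL,
		.sched_deadline =  2000000ULL,
		.sched_period   = 16000000ULL,
	};

	return syscall(SYS_sched_setattr, 0 /* self */, &attr, 0);
}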

> deadline has the upshot that it composes much better than SCHED_FIFO:
> Everyone just drops their deadline requirements onto the scheduler,
> scheduler makes sure it's all obeyed (or rejects your request).
> 
> The trouble is we'd need to know how long a commit takes, worst case, on a
> given platform. And for that you need to measure stuff, and we kinda can't
> spend a few minutes at boot-up going through the combinatorial maze of
> atomic commits to make sure we have it all.
> 
> So I think in practice letting userspace set the right rt priority/mode is
> the only way to go here :-/

Or you can have it adjust its expected runtime as the system runs
(always keeping a little extra room over what you measure to make sure).

Given you have period > deadline, you can simply start with runtime =
deadline and adjust downwards during use (carefully).



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-05 23:24       ` Rob Clark
  2020-10-06  9:08         ` Daniel Vetter
@ 2020-10-06 10:59         ` Qais Yousef
  2020-10-06 20:04           ` Rob Clark
  1 sibling, 1 reply; 28+ messages in thread
From: Qais Yousef @ 2020-10-06 10:59 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter,
	Rob Clark, open list, Steven Rostedt, Peter Zijlstra (Intel)

On 10/05/20 16:24, Rob Clark wrote:

[...]

> > RT planning and partitioning is not easy task for sure. You might want to
> > consider using affinities too to get stronger guarantees for some tasks and
> > prevent cross-talking.
> 
> There is some cgroup stuff that is pinning SF and some other stuff to
> the small cores, fwiw.. I think the reasoning is that they shouldn't
> be doing anything heavy enough to need the big cores.

Ah, so you're on a big.LITTLE type of system. I have done some work which
enables biasing RT tasks towards big cores and controlling the default boost
value if you have util_clamp and schedutil enabled. You can use util_clamp
in general to help with DVFS related response time delays.

I haven't done any work to try our best to pick a small core first but
fall back to big if there's no other alternative.

It'd be interesting to know how often you end up on a big core if you remove
the affinity. The RT scheduler picks the first cpu in the lowest priority mask.
So it should have this bias towards picking smaller cores first if they're
in the lower priority mask (ie: not running higher priority RT tasks).

So unless you absolutely don't want any RT tasks on the big cores, it'd be
worth removing this affinity and checking the percentage of time you spend
on little cores. This should help with your worst case scenario as you make
more cpus available.

> > > run ASAP once fences are signalled, and vblank_work to run at a
> > > slightly higher priority still.  But the correct choice for priorities
> > > here depends on what userspace is using, it all needs to fit together
> > > properly.
> >
> > By userspace here I think you mean non-display-pipeline-related RT tasks that
> > you need to coexist with and could still disrupt your pipeline?
> 
> I mean, commit_work should be higher priority than the other (display
> related) RT tasks.  But the kernel doesn't know what those priorities
> are.

So if you set commit_work to sched_set_fifo(), it'd be at a reasonably high
priority (50) by default. Which means you just need to manage your SF
priorities without having to change commit_work priority itself?

> 
> > Using RT on a General Purpose System is hard for sure. One of the major
> > challenges is that there's no admin that has full view of the system to do
> > proper RT planning.
> >
> > We need proper RT balancer daemon that helps partitioning the system for
> > multiple RT apps on these systems..
> >
> > >
> > > >
> > > > I do appreciate that maybe some of these tasks have varying requirements during
> > > > their life time. e.g: they have RT property during specific critical section
> > > > but otherwise are CFS tasks. I think the UI thread in Android behaves like
> > > > that.
> > > >
> > > > It's worth IMO trying that approach I pointed out earlier to see if making RT
> > > > try to pick an idle CPU rather than preempt CFS helps. Not sure if it'd be
> > > > accepted but IMHO it's a better direction to consider and discuss.
> > >
> > > The problem I was seeing was actually the opposite..  commit_work
> > > becomes runnable (fences signalled) but doesn't get a chance to run
> > > because a SCHED_FIFO SF thread is running.  (Maybe I misunderstood and
> > > your approach would help this case too?)
> >
> > Ah okay. Sorry I got it the wrong way around for some reason. I thought this
> > task is preempting other CFS-based pipelined tasks.
> >
> > So your system seems to be overcommitted. Is SF short for SurfaceFlinger? Under
> > what scenarios do you have many SurfaceFlinger tasks? On Android I remember
> > seeing they have priority of 1 or 2.
> 
> yeah, SF==SurfaceFlinger, and yeah, 1 and 2..
> 
> > sched_set_fifo() will use priority 50. If you set all your pipeline tasks
> > to this priority, what happens?
> 
> I think this would work.. drm/msm doesn't use vblank work, so I
> wouldn't really have problems with commit_work preempting vblank_work.
> But I think the best option (and to handle the case if android changes
> > the RT priorities around in the future) is to let userspace set the
> priorities.

I don't really mind. But if we know that two kernel threads need to have
specific relative priorities to each other, then it seems better to me to
handle this in the kernel properly. Userspace will then only need to worry
about managing its *own* priorities relative to that.

Just seen Peter suggesting in another email to use SCHED_DEADLINE for vblank
work. Which I think achieves the above if commit_work uses sched_set_fifo().

> 
> > >
> > > > Or maybe you can wrap userspace pipeline critical section lock such that any
> > > > task holding it will automatically be promoted to SCHED_FIFO and then demoted
> > > > to CFS once it releases it.
> > >
> > > The SCHED_DEADLINE + token passing approach that the lwn article
> > > mentioned sounds interesting, if that eventually becomes possible.
> > > But doesn't really help today..
> >
> > We were present in the room with Alessio when he gave that talk :-)
> >
> > You might have seen Valentin's talk in LPC where he's trying to get
> > proxy-execution into shape. Which is a pre-requisite to enable using of
> > SCHED_DEADLINE for these scenarios. IIRC it should allow all dependent tasks to
> > run from the context of the deadline task during the display pipeline critical
> > section.
> >
> > By the way, do you have issues with SoftIrqs delaying your RT tasks execution
> > time?
> 
> I don't *think* so, but I'm not 100% sure if they are showing up in

If you ever get a chance to run a high network throughput test, it might help
to see if softirqs are affecting you. I know Android has issues with this under
some circumstances.

> traces.  So far it seems like SF stomping on commit_work.  (There is
> the added complication that there are some chrome gpu-process tasks in
> between SF and the display, including CrGpuMain (which really doesn't
> want to be SCHED_FIFO when executing gl commands on behalf of
> something unrelated to the compositor). The deadline approach, IIUC,
> might be the better option eventually for this?)

If you meant sched_deadline + token approach, then yeah I think it'd be better.
But as you said, we can't do this yet :/

But as Peter pointed out, this doesn't mean you can't use SCHED_DEADLINE at all
if it does make sense.

Thanks

--
Qais Yousef

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-06 10:59         ` Qais Yousef
@ 2020-10-06 20:04           ` Rob Clark
  2020-10-07 10:36             ` Qais Yousef
  0 siblings, 1 reply; 28+ messages in thread
From: Rob Clark @ 2020-10-06 20:04 UTC (permalink / raw)
  To: Qais Yousef
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter,
	Rob Clark, open list, Steven Rostedt, Peter Zijlstra (Intel)

On Tue, Oct 6, 2020 at 3:59 AM Qais Yousef <qais.yousef@arm.com> wrote:
>
> On 10/05/20 16:24, Rob Clark wrote:
>
> [...]
>
> > > RT planning and partitioning is not easy task for sure. You might want to
> > > consider using affinities too to get stronger guarantees for some tasks and
> > > prevent cross-talking.
> >
> > There is some cgroup stuff that is pinning SF and some other stuff to
> > the small cores, fwiw.. I think the reasoning is that they shouldn't
> > be doing anything heavy enough to need the big cores.
>
> Ah, so you're on big.LITTLE type of system. I have done some work which enables
> biasing RT tasks towards big cores and control the default boost value if you
> have util_clamp and schedutil enabled. You can use util_clamp in general to
> help with DVFS related response time delays.
>
> I haven't done any work to try our best to pick a small core first but fallback
> to big if there's no other alternative.
>
> It'd be interesting to know how often you end up on a big core if you remove
> the affinity. The RT scheduler picks the first cpu in the lowest priority mask.
> So it should have this bias towards picking smaller cores first if they're
> in the lower priority mask (ie: not running higher priority RT tasks).

fwiw, the issue I'm looking at is actually at the opposite end of the
spectrum, less demanding apps that let cpus throttle down to low
OPPs.. which stretches out the time taken at each step in the path
towards the screen (which seems to improve the odds that we hit priority
inversion scenarios with SCHED_FIFO things stomping on important CFS
things)

There is a *big* difference in # of cpu cycles per frame between
highest and lowest OPP..

BR,
-R

> So unless you absolutely don't want any RT tasks on a big cores, it'd be worth
> removing this affinity and check the percentage of time you spend on little
> cores. This should help with your worst case scenario as you make more cpus
> available.
>
> > > > run ASAP once fences are signalled, and vblank_work to run at a
> > > > slightly higher priority still.  But the correct choice for priorities
> > > > here depends on what userspace is using, it all needs to fit together
> > > > properly.
> > >
> > > By userspace here I think you mean non-display-pipeline-related RT tasks that
> > > you need to coexist with and could still disrupt your pipeline?
> >
> > I mean, commit_work should be higher priority than the other (display
> > related) RT tasks.  But the kernel doesn't know what those priorities
> > are.
>
> So if you set commit_work to sched_set_fifo(), it'd be at a reasonably high
> priority (50) by default. Which means you just need to manage your SF
> priorities without having to change commit_work priority itself?
>
> >
> > > Using RT on a General Purpose System is hard for sure. One of the major
> > > challenges is that there's no admin that has full view of the system to do
> > > proper RT planning.
> > >
> > > We need proper RT balancer daemon that helps partitioning the system for
> > > multiple RT apps on these systems..
> > >
> > > >
> > > > >
> > > > > I do appreciate that maybe some of these tasks have varying requirements during
> > > > > their life time. e.g: they have RT property during specific critical section
> > > > > but otherwise are CFS tasks. I think the UI thread in Android behaves like
> > > > > that.
> > > > >
> > > > > It's worth IMO trying that approach I pointed out earlier to see if making RT
> > > > > try to pick an idle CPU rather than preempt CFS helps. Not sure if it'd be
> > > > > accepted but IMHO it's a better direction to consider and discuss.
> > > >
> > > > The problem I was seeing was actually the opposite..  commit_work
> > > > becomes runnable (fences signalled) but doesn't get a chance to run
> > > > because a SCHED_FIFO SF thread is running.  (Maybe I misunderstood and
> > > > your approach would help this case too?)
> > >
> > > Ah okay. Sorry I got it the wrong way around for some reason. I thought this
> > > task is preempting other CFS-based pipelined tasks.
> > >
> > > So your system seems to be overcommitted. Is SF short for SurfaceFlinger? Under
> > > what scenarios do you have many SurfaceFlinger tasks? On Android I remember
> > > seeing they have priority of 1 or 2.
> >
> > yeah, SF==SurfaceFlinger, and yeah, 1 and 2..
> >
> > > sched_set_fifo() will use priority 50. If you set all your pipeline tasks
> > > to this priority, what happens?
> >
> > I think this would work.. drm/msm doesn't use vblank work, so I
> > wouldn't really have problems with commit_work preempting vblank_work.
> > But I think the best option (and to handle the case if android changes
> > the RT priorities around in the future) is to let userspace set the
> > priorities.
>
> I don't really mind. But it seems better for me if we know that two kernel
> threads need to have a specific relative priorities to each others then to
> handle this in the kernel properly. Userspace will only need then to worry
> about managing its *own* priorities relative to that.
>
> Just seen Peter suggesting in another email to use SCHED_DEADLINE for vblank
> work. Which I think achieves the above if commit_work uses sched_set_fifo().
>
> >
> > > >
> > > > > Or maybe you can wrap userspace pipeline critical section lock such that any
> > > > > task holding it will automatically be promoted to SCHED_FIFO and then demoted
> > > > > to CFS once it releases it.
> > > >
> > > > The SCHED_DEADLINE + token passing approach that the lwn article
> > > > mentioned sounds interesting, if that eventually becomes possible.
> > > > But doesn't really help today..
> > >
> > > We were present in the room with Alessio when he gave that talk :-)
> > >
> > > You might have seen Valentin's talk in LPC where he's trying to get
> > > proxy-execution into shape. Which is a pre-requisite to enable using of
> > > SCHED_DEADLINE for these scenarios. IIRC it should allow all dependent tasks to
> > > run from the context of the deadline task during the display pipeline critical
> > > section.
> > >
> > > By the way, do you have issues with SoftIrqs delaying your RT tasks execution
> > > time?
> >
> > I don't *think* so, but I'm not 100% sure if they are showing up in
>
> If you ever get a chance to run a high network throughput test, it might help
> to see if softirqs are affecting you. I know Android has issues with this under
> some circumstances.
>
> > traces.  So far it seems like SF stomping on commit_work.  (There is
> > the added complication that there are some chrome gpu-process tasks in
> > between SF and the display, including CrGpuMain, which really doesn't
> > want to be SCHED_FIFO when executing gl commands on behalf of
> > something unrelated to the compositor.. the deadline approach, IIUC,
> > might be the better option eventually for this?)
>
> If you meant sched_deadline + token approach, then yeah I think it'd be better.
> But as you said, we can't do this yet :/
>
> But as Peter pointed out, this doesn't mean you can't use SCHED_DEADLINE at all
> if it does make sense.
>
> Thanks
>
> --
> Qais Yousef
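
As a concrete illustration of the sched_set_fifo() suggestion above, here is
a minimal kernel-side sketch (not the actual patches) of creating a per-CRTC
worker and raising it to SCHED_FIFO; the crtc->worker field and the function
name are made up for the example:

#include <drm/drm_crtc.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

static int example_create_crtc_worker(struct drm_crtc *crtc)
{
	struct kthread_worker *worker;

	worker = kthread_create_worker(0, "crtc_commit:%d", crtc->index);
	if (IS_ERR(worker))
		return PTR_ERR(worker);

	/* sched_set_fifo() picks prio MAX_RT_PRIO/2 (i.e. 50), so the
	 * prio 1-2 SCHED_FIFO SurfaceFlinger threads can no longer
	 * preempt the commit work */
	sched_set_fifo(worker->task);

	crtc->worker = worker;	/* hypothetical field, for the example */
	return 0;
}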

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-06 20:04           ` Rob Clark
@ 2020-10-07 10:36             ` Qais Yousef
  2020-10-07 15:57               ` Rob Clark
  0 siblings, 1 reply; 28+ messages in thread
From: Qais Yousef @ 2020-10-07 10:36 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter,
	Rob Clark, open list, Steven Rostedt, Peter Zijlstra (Intel)

On 10/06/20 13:04, Rob Clark wrote:
> On Tue, Oct 6, 2020 at 3:59 AM Qais Yousef <qais.yousef@arm.com> wrote:
> >
> > On 10/05/20 16:24, Rob Clark wrote:
> >
> > [...]
> >
> > > > RT planning and partitioning is not an easy task for sure. You might want to
> > > > consider using affinities too to get stronger guarantees for some tasks and
> > > > prevent cross-talking.
> > >
> > > There is some cgroup stuff that is pinning SF and some other stuff to
> > > the small cores, fwiw.. I think the reasoning is that they shouldn't
> > > be doing anything heavy enough to need the big cores.
> >
> > Ah, so you're on big.LITTLE type of system. I have done some work which enables
> > biasing RT tasks towards big cores and controlling the default boost value if you
> > have util_clamp and schedutil enabled. You can use util_clamp in general to
> > help with DVFS related response time delays.
> >
> > I haven't done any work to try our best to pick a small core first but fall back
> > to big if there's no other alternative.
> >
> > It'd be interesting to know how often you end up on a big core if you remove
> > the affinity. The RT scheduler picks the first cpu in the lowest priority mask.
> > So it should have this bias towards picking smaller cores first if they're
> > in the lower priority mask (ie: not running higher priority RT tasks).
> 
> fwiw, the issue I'm looking at is actually at the opposite end of the
> spectrum, less demanding apps that let cpus throttle down to low
> OPPs.. which stretches out the time taken at each step in the path
> towards screen (which seems to increase the odds that we hit priority
> inversion scenarios with SCHED_FIFO things stomping on important CFS
> things)

So you do have the problem of an RT task preempting an important CFS task.

> 
> There is a *big* difference in # of cpu cycles per frame between
> highest and lowest OPP..

To combat DVFS related delays, you can use util clamp.

Hopefully this article helps explain it if you didn't come across it before

	https://lwn.net/Articles/762043/

You can use sched_setattr() to set SCHED_FLAG_UTIL_CLAMP_MIN for a task. This
will guarantee that every time this task is running it'll appear to have at
least this utilization value, so the schedutil governor (which must be used
for this to work) will pick the right performance point (OPP).

The scheduler will try its best to make sure that the task will run on a core
that meets the minimum requested performance point (hinted by setting
uclamp_min).

Thanks

--
Qais Yousef
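
A minimal userspace sketch of the uclamp hint described above, assuming a
v5.3+ kernel with CONFIG_UCLAMP_TASK and the schedutil governor; glibc has no
sched_setattr() wrapper, so it goes through syscall(2), and the 256 floor is
an arbitrary example value:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_FLAG_UTIL_CLAMP_MIN
#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20	/* from uapi linux/sched.h */
#endif

/* layout per include/uapi/linux/sched/types.h */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

int main(void)
{
	struct sched_attr attr = {
		.size           = sizeof(attr),
		.sched_policy   = 0,	/* SCHED_NORMAL */
		.sched_flags    = SCHED_FLAG_UTIL_CLAMP_MIN,
		.sched_util_min = 256,	/* floor, on the 0..1024 scale */
	};

	/* pid 0 means the calling thread */
	if (syscall(SYS_sched_setattr, 0, &attr, 0))
		perror("sched_setattr");

	return 0;
}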

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-07 10:36             ` Qais Yousef
@ 2020-10-07 15:57               ` Rob Clark
  2020-10-07 16:30                 ` Qais Yousef
  0 siblings, 1 reply; 28+ messages in thread
From: Rob Clark @ 2020-10-07 15:57 UTC (permalink / raw)
  To: Qais Yousef
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter,
	Rob Clark, open list, Steven Rostedt, Peter Zijlstra (Intel)

On Wed, Oct 7, 2020 at 3:36 AM Qais Yousef <qais.yousef@arm.com> wrote:
>
> On 10/06/20 13:04, Rob Clark wrote:
> > On Tue, Oct 6, 2020 at 3:59 AM Qais Yousef <qais.yousef@arm.com> wrote:
> > >
> > > On 10/05/20 16:24, Rob Clark wrote:
> > >
> > > [...]
> > >
> > > > > RT planning and partitioning is not an easy task for sure. You might want to
> > > > > consider using affinities too to get stronger guarantees for some tasks and
> > > > > prevent cross-talking.
> > > >
> > > > There is some cgroup stuff that is pinning SF and some other stuff to
> > > > the small cores, fwiw.. I think the reasoning is that they shouldn't
> > > > be doing anything heavy enough to need the big cores.
> > >
> > > Ah, so you're on big.LITTLE type of system. I have done some work which enables
> > > biasing RT tasks towards big cores and controlling the default boost value if you
> > > have util_clamp and schedutil enabled. You can use util_clamp in general to
> > > help with DVFS related response time delays.
> > >
> > > I haven't done any work to try our best to pick a small core first but fall back
> > > to big if there's no other alternative.
> > >
> > > It'd be interesting to know how often you end up on a big core if you remove
> > > the affinity. The RT scheduler picks the first cpu in the lowest priority mask.
> > > So it should have this bias towards picking smaller cores first if they're
> > > in the lower priority mask (ie: not running higher priority RT tasks).
> >
> > fwiw, the issue I'm looking at is actually at the opposite end of the
> > spectrum, less demanding apps that let cpus throttle down to low
> > OPPs.. which stretches out the time taken at each step in the path
> > towards screen (which seems to increase the odds that we hit priority
> > inversion scenarios with SCHED_FIFO things stomping on important CFS
> > things)
>
> So you do have the problem of an RT task preempting an important CFS task.
>
> >
> > There is a *big* difference in # of cpu cycles per frame between
> > highest and lowest OPP..
>
> To combat DVFS related delays, you can use util clamp.
>
> Hopefully this article helps explain it if you didn't come across it before
>
>         https://lwn.net/Articles/762043/
>
> You can use sched_setattr() to set SCHED_FLAG_UTIL_CLAMP_MIN for a task. This
> will guarantee that every time this task is running it'll appear to have at
> least this utilization value, so the schedutil governor (which must be used
> for this to work) will pick the right performance point (OPP).
>
> The scheduler will try its best to make sure that the task will run on a core
> that meets the minimum requested performance point (hinted by setting
> uclamp_min).

Yeah, I think we will end up making some use of uclamp.. there is
someone else working on that angle

But without it, this is a case that exposes legit prioritization
problems with commit_work which we should fix ;-)

BR,
-R

>
> Thanks
>
> --
> Qais Yousef

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-07 15:57               ` Rob Clark
@ 2020-10-07 16:30                 ` Qais Yousef
  2020-10-08  9:10                   ` Daniel Vetter
  0 siblings, 1 reply; 28+ messages in thread
From: Qais Yousef @ 2020-10-07 16:30 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, linux-arm-msm, Tejun Heo, Tim Murray, Daniel Vetter,
	Rob Clark, open list, Steven Rostedt, Peter Zijlstra (Intel)

On 10/07/20 08:57, Rob Clark wrote:
> Yeah, I think we will end up making some use of uclamp.. there is
> someone else working on that angle
> 
> But without it, this is a case that exposes legit prioritization
> problems with commit_work which we should fix ;-)

I wasn't suggesting this as an alternative to fixing the other problem. But it
seemed you had a different problem here that I thought I could help with :-)

I did give my opinion about how to handle that priority issue. If the 2 threads
are kernel threads and by design they need relative priorities, IMO the kernel
needs to be taught to set this relative priority. It seemed the vblank worker
could run as SCHED_DEADLINE. If this works, then the priority problem for
commit_work disappears, as SCHED_DEADLINE will preempt RT. If commit_work uses
sched_set_fifo(), its priority will be 50, hence your SF threads can no longer
preempt it. And you can manage the SF threads to be any value you want relative
to 50 anyway, without having to manage commit_work itself.

I'm not sure if you have problems with RT tasks preempting important CFS
tasks. My brain registered two conflicting statements.

Thanks

--
Qais Yousef
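
A sketch of the SCHED_DEADLINE idea for the vblank worker, assuming the
worker's task_struct is at hand; the function name and the 1ms runtime budget
are illustrative, and deadline admission control may reject the parameters:

#include <linux/sched.h>
#include <linux/time64.h>
#include <uapi/linux/sched/types.h>

static int example_vblank_worker_set_deadline(struct task_struct *task,
					      u64 frame_ns)
{
	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_policy	= SCHED_DEADLINE,
		/* made-up worst-case estimate of vblank work per frame */
		.sched_runtime	= 1 * NSEC_PER_MSEC,
		/* must finish within the frame interval */
		.sched_deadline	= frame_ns,
		.sched_period	= frame_ns,
	};

	return sched_setattr(task, &attr);
}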

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-05 12:15             ` Ville Syrjälä
  2020-10-05 14:15               ` Daniel Vetter
@ 2020-10-07 16:44               ` Rob Clark
  2020-10-08  8:24                 ` Ville Syrjälä
  1 sibling, 1 reply; 28+ messages in thread
From: Rob Clark @ 2020-10-07 16:44 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Daniel Vetter, Rob Clark, linux-arm-msm, open list, Tim Murray,
	dri-devel, Tejun Heo, Qais Yousef

On Mon, Oct 5, 2020 at 5:15 AM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
>
> On Fri, Oct 02, 2020 at 10:55:52AM -0700, Rob Clark wrote:
> > On Fri, Oct 2, 2020 at 4:05 AM Ville Syrjälä
> > <ville.syrjala@linux.intel.com> wrote:
> > >
> > > On Fri, Oct 02, 2020 at 01:52:56PM +0300, Ville Syrjälä wrote:
> > > > On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> > > > > On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > >
> > > > > > I'm leaning towards converting the other drivers over to use the
> > > > > > per-crtc kwork, and then dropping the `commit_work` from atomic state.
> > > > > > I can add a patch to that, but figured I could postpone that churn
> > > > > > until there is some buy-in on this whole idea.
> > > > >
> > > > > i915 has its own commit code, it's not even using the current commit
> > > > > helpers (nor the commit_work). Not sure how much other fun there is.
> > > >
> > > > I don't think we want per-crtc threads for this in i915. Seems
> > > > to me easier to guarantee atomicity across multiple crtcs if
> > > > we just commit them from the same thread.
> > >
> > > Oh, and we may have to commit things in a very specific order
> > > to guarantee the hw doesn't fall over, so yeah definitely per-crtc
> > > thread is a no go.
> >
> > If I'm understanding the i915 code, this is only the case for modeset
> > commits?  I suppose we could achieve the same result by just deciding
> > to pick the kthread of the first CRTC for modeset commits.  I'm not
> > really so much concerned about parallelism for modeset.
>
> I'm not entirely happy about the random differences between modesets
> and other commits. Ideally we wouldn't need any.
>
> Anyways, even if we ignore modesets we still have the issue with
> atomicity guarantees across multiple crtcs. So I think we still
> don't want per-crtc threads, rather it should be thread for each
> commit.

I don't really see any way to solve the priority inversion other
than per-CRTC kthreads.  I've been thinking about it a bit more, and
my conclusion is:

(1) There isn't really any use for the N+1'th commit to start running
before the kthread_work for the N'th commit completes, so I don't mind
losing the unbound aspect of the workqueue approach
(2) For cases where there does need to be serialization between
commits on different CRTCs, since there is a per-CRTC kthread, you
could achieve this with locking

Since i915 isn't using the atomic helpers here, I suppose it is an
option for i915 to just continue doing what it is doing.

And I could ofc just stop using the atomic commit helper and do the
kthreads thing in msm. But my first preference would be that the
commit helper does generally the right thing.

BR,
-R
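
To make the per-CRTC kthread idea concrete, a rough sketch of the helper
change, assuming (per the cover letter) that drm_atomic_state's commit_work
becomes a kthread_work and drm_crtc gains a worker pointer; the names are
illustrative, not the exact patch:

#include <drm/drm_atomic.h>
#include <drm/drm_crtc.h>
#include <linux/kthread.h>

static void example_commit_work_fn(struct kthread_work *work)
{
	struct drm_atomic_state *state =
		container_of(work, struct drm_atomic_state, commit_work);

	/* stand-in for the commit_tail() logic inside drm_atomic_helper.c */
	commit_tail(state);
}

static void example_queue_nonblocking_commit(struct drm_crtc *crtc,
					     struct drm_atomic_state *state)
{
	kthread_init_work(&state->commit_work, example_commit_work_fn);

	/* the per-CRTC worker serializes commits on this CRTC: work N+1
	 * only runs after work N completes */
	kthread_queue_work(crtc->worker, &state->commit_work);
}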

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-07 16:44               ` Rob Clark
@ 2020-10-08  8:24                 ` Ville Syrjälä
  2020-10-16 16:27                   ` Rob Clark
  0 siblings, 1 reply; 28+ messages in thread
From: Ville Syrjälä @ 2020-10-08  8:24 UTC (permalink / raw)
  To: Rob Clark
  Cc: Daniel Vetter, Rob Clark, linux-arm-msm, open list, Tim Murray,
	dri-devel, Tejun Heo, Qais Yousef

On Wed, Oct 07, 2020 at 09:44:09AM -0700, Rob Clark wrote:
> On Mon, Oct 5, 2020 at 5:15 AM Ville Syrjälä
> <ville.syrjala@linux.intel.com> wrote:
> >
> > On Fri, Oct 02, 2020 at 10:55:52AM -0700, Rob Clark wrote:
> > > On Fri, Oct 2, 2020 at 4:05 AM Ville Syrjälä
> > > <ville.syrjala@linux.intel.com> wrote:
> > > >
> > > > On Fri, Oct 02, 2020 at 01:52:56PM +0300, Ville Syrjälä wrote:
> > > > > On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> > > > > > On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >
> > > > > > > I'm leaning towards converting the other drivers over to use the
> > > > > > > per-crtc kwork, and then dropping the `commit_work` from atomic state.
> > > > > > > I can add a patch to that, but figured I could postpone that churn
> > > > > > > until there is some buy-in on this whole idea.
> > > > > >
> > > > > > i915 has its own commit code, it's not even using the current commit
> > > > > > helpers (nor the commit_work). Not sure how much other fun there is.
> > > > >
> > > > > I don't think we want per-crtc threads for this in i915. Seems
> > > > > to me easier to guarantee atomicity across multiple crtcs if
> > > > > we just commit them from the same thread.
> > > >
> > > > Oh, and we may have to commit things in a very specific order
> > > > to guarantee the hw doesn't fall over, so yeah definitely per-crtc
> > > > thread is a no go.
> > >
> > > If I'm understanding the i915 code, this is only the case for modeset
> > > commits?  I suppose we could achieve the same result by just deciding
> > > to pick the kthread of the first CRTC for modeset commits.  I'm not
> > > really so much concerned about parallelism for modeset.
> >
> > I'm not entirely happy about the random differences between modesets
> > and other commits. Ideally we wouldn't need any.
> >
> > Anyways, even if we ignore modesets we still have the issue with
> > atomicity guarantees across multiple crtcs. So I think we still
> > don't want per-crtc threads, rather it should be a thread for each
> > commit.
> 
> I don't really see any way to solve the priority inversion other
> than per-CRTC kthreads.

What's the problem with just something like a dedicated commit
thread pool?

> I've been thinking about it a bit more, and
> my conclusion is:
> 
> (1) There isn't really any use for the N+1'th commit to start running
> before the kthread_work for the N'th commit completes, so I don't mind
> losing the unbound aspect of the workqueue approach
> (2) For cases where there does need to be serialization between
> commits on different CRTCs, since there is a per-CRTC kthread, you
> could achieve this with locking
> 
> Since i915 isn't using the atomic helpers here, I suppose it is an
> option for i915 to just continue doing what it is doing.
> 
> And I could ofc just stop using the atomic commit helper and do the
> kthreads thing in msm. But my first preference would be that the
> commit helper does generally the right thing.
> 
> BR,
> -R

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-07 16:30                 ` Qais Yousef
@ 2020-10-08  9:10                   ` Daniel Vetter
  0 siblings, 0 replies; 28+ messages in thread
From: Daniel Vetter @ 2020-10-08  9:10 UTC (permalink / raw)
  To: Qais Yousef
  Cc: Rob Clark, dri-devel, linux-arm-msm, Tejun Heo, Tim Murray,
	Daniel Vetter, Rob Clark, open list, Steven Rostedt,
	Peter Zijlstra (Intel)

On Wed, Oct 07, 2020 at 05:30:10PM +0100, Qais Yousef wrote:
> On 10/07/20 08:57, Rob Clark wrote:
> > Yeah, I think we will end up making some use of uclamp.. there is
> > someone else working on that angle
> > 
> > But without it, this is a case that exposes legit prioritization
> > problems with commit_work which we should fix ;-)
> 
> I wasn't suggesting this as an alternative to fixing the other problem. But it
> seemed you had a different problem here that I thought I could help with :-)
> 
> I did give my opinion about how to handle that priority issue. If the 2 threads
> are kernel threads and by design they need relative priorities, IMO the kernel
> needs to be taught to set this relative priority. It seemed the vblank worker
> could run as SCHED_DEADLINE. If this works, then the priority problem for
> commit_work disappears, as SCHED_DEADLINE will preempt RT. If commit_work uses
> sched_set_fifo(), its priority will be 50, hence your SF threads can no longer
> preempt it. And you can manage the SF threads to be any value you want relative
> to 50 anyway, without having to manage commit_work itself.
> 
> I'm not sure if you have problems with RT tasks preempting important CFS
> tasks. My brain registered two conflicting statements.

I think the problem is that there are two modes cros runs in: normal cros mode,
which mostly works like a linux desktop. CFS commit work seems fine there.

The other mode is android emulation, where we have the surfaceflinger thread
running at SCHED_FIFO. I think Rob's plan is to switch priorities at runtime
to match each use case.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 0/3] drm: commit_work scheduling
  2020-10-08  8:24                 ` Ville Syrjälä
@ 2020-10-16 16:27                   ` Rob Clark
  0 siblings, 0 replies; 28+ messages in thread
From: Rob Clark @ 2020-10-16 16:27 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: Daniel Vetter, Rob Clark, linux-arm-msm, open list, Tim Murray,
	dri-devel, Tejun Heo, Qais Yousef

On Thu, Oct 8, 2020 at 1:24 AM Ville Syrjälä
<ville.syrjala@linux.intel.com> wrote:
>
> On Wed, Oct 07, 2020 at 09:44:09AM -0700, Rob Clark wrote:
> > On Mon, Oct 5, 2020 at 5:15 AM Ville Syrjälä
> > <ville.syrjala@linux.intel.com> wrote:
> > >
> > > On Fri, Oct 02, 2020 at 10:55:52AM -0700, Rob Clark wrote:
> > > > On Fri, Oct 2, 2020 at 4:05 AM Ville Syrjälä
> > > > <ville.syrjala@linux.intel.com> wrote:
> > > > >
> > > > > On Fri, Oct 02, 2020 at 01:52:56PM +0300, Ville Syrjälä wrote:
> > > > > > On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> > > > > > > On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > >
> > > > > > > > I'm leaning towards converting the other drivers over to use the
> > > > > > > > per-crtc kwork, and then dropping the `commit_work` from atomic state.
> > > > > > > > I can add a patch to that, but figured I could postpone that churn
> > > > > > > > until there is some buy-in on this whole idea.
> > > > > > >
> > > > > > > i915 has its own commit code, it's not even using the current commit
> > > > > > > helpers (nor the commit_work). Not sure how much other fun there is.
> > > > > >
> > > > > > I don't think we want per-crtc threads for this in i915. Seems
> > > > > > to me easier to guarantee atomicity across multiple crtcs if
> > > > > > we just commit them from the same thread.
> > > > >
> > > > > Oh, and we may have to commit things in a very specific order
> > > > > to guarantee the hw doesn't fall over, so yeah definitely per-crtc
> > > > > thread is a no go.
> > > >
> > > > If I'm understanding the i915 code, this is only the case for modeset
> > > > commits?  I suppose we could achieve the same result by just deciding
> > > > to pick the kthread of the first CRTC for modeset commits.  I'm not
> > > > really so much concerned about parallelism for modeset.
> > >
> > > I'm not entirely happy about the random differences between modesets
> > > and other commits. Ideally we wouldn't need any.
> > >
> > > Anyways, even if we ignore modesets we still have the issue with
> > > atomicity guarantees across multiple crtcs. So I think we still
> > > don't want per-crtc threads, rather it should be a thread for each
> > > commit.
> >
> > I don't really see any way to solve the priority inversion other
> > than per-CRTC kthreads.
>
> What's the problem with just something like a dedicated commit
> thread pool?

Partly, I was trying to avoid re-implementing workqueues.  And partly
the thread-pool approach seems like it would make it harder for userspace
to find the tasks which need priority adjustment.

But each CRTC is essentially a FIFO: pageflip N+1 on a given CRTC will
happen after pageflip N.

BR,
-R

> > I've been thinking about it a bit more, and
> > my conclusion is:
> >
> > (1) There isn't really any use for the N+1'th commit to start running
> > before the kthread_work for the N'th commit completes, so I don't mind
> > losing the unbound aspect of the workqueue approach
> > (2) For cases where there does need to be serialization between
> > commits on different CRTCs, since there is a per-CRTC kthread, you
> > could achieve this with locking
> >
> > Since i915 isn't using the atomic helpers here, I suppose it is an
> > option for i915 to just continue doing what it is doing.
> >
> > And I could ofc just stop using the atomic commit helper and do the
> > kthreads thing in msm. But my first preference would be that the
> > commit helper does generally the right thing.
> >
> > BR,
> > -R
>
> --
> Ville Syrjälä
> Intel
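
For completeness, a sketch of how userspace could consume the task id exposed
in patch 3/3; reading the CRTC property itself is omitted, and the tid is
assumed to have been fetched already.  On Linux, sched_setscheduler(2) given
a tid affects just that thread:

#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

static int example_boost_commit_worker(pid_t tid, int prio)
{
	struct sched_param param = { .sched_priority = prio };

	if (sched_setscheduler(tid, SCHED_FIFO, &param)) {
		perror("sched_setscheduler");
		return -1;
	}

	return 0;
}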

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2020-10-16 16:28 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-30 21:17 [PATCH v2 0/3] drm: commit_work scheduling Rob Clark
2020-09-30 21:17 ` [PATCH v2 1/3] drm/crtc: Introduce per-crtc kworker Rob Clark
2020-09-30 21:17 ` [PATCH v2 2/3] drm/atomic: Use kthread worker for nonblocking commits Rob Clark
2020-09-30 21:17 ` [PATCH v2 3/3] drm: Expose CRTC's kworker task id Rob Clark
2020-10-01  7:25 ` [PATCH v2 0/3] drm: commit_work scheduling Daniel Vetter
2020-10-01 15:15   ` Rob Clark
2020-10-01 15:25     ` Daniel Vetter
2020-10-02 10:52       ` Ville Syrjälä
2020-10-02 11:05         ` Ville Syrjälä
2020-10-02 17:55           ` Rob Clark
2020-10-05 12:15             ` Ville Syrjälä
2020-10-05 14:15               ` Daniel Vetter
2020-10-05 22:58                 ` Rob Clark
2020-10-07 16:44               ` Rob Clark
2020-10-08  8:24                 ` Ville Syrjälä
2020-10-16 16:27                   ` Rob Clark
2020-10-02 11:01 ` Qais Yousef
2020-10-02 18:07   ` Rob Clark
2020-10-05 15:00     ` Qais Yousef
2020-10-05 23:24       ` Rob Clark
2020-10-06  9:08         ` Daniel Vetter
2020-10-06 10:01           ` Peter Zijlstra
2020-10-06 10:59         ` Qais Yousef
2020-10-06 20:04           ` Rob Clark
2020-10-07 10:36             ` Qais Yousef
2020-10-07 15:57               ` Rob Clark
2020-10-07 16:30                 ` Qais Yousef
2020-10-08  9:10                   ` Daniel Vetter
