From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 36/42] drm/i915: Improve DFS for priority inheritance
Date: Sun,  2 Aug 2020 17:44:06 +0100	[thread overview]
Message-ID: <20200802164412.2738-37-chris@chris-wilson.co.uk> (raw)
In-Reply-To: <20200802164412.2738-1-chris@chris-wilson.co.uk>

The core of the scheduling algorithm is that we compute the topological
order of the fence DAG. Knowing that we have a DAG, we should be able to
use a DFS to compute the topological sort in linear time. However,
during the conversion of the recursive algorithm into an iterative one,
the memoization of how far we had progressed along a branch was
lost.
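
For illustration only (a standalone sketch, not the i915 code; the node
layout and names below are made up), an iterative DFS topological sort
that memoizes how far it has walked each node's edge list looks roughly
like this. Dropping the saved cursor ("pos") forces a node to rescan its
edges every time the walk resumes at it, losing the linear bound:

  /* Standalone sketch: iterative DFS topological sort that remembers
   * the next edge to examine per node, so resuming a node after
   * descending into a child does not rescan earlier edges.
   */
  #include <stdio.h>

  #define NR_NODES 6

  struct node {
  	int edges[NR_NODES];	/* indices of nodes this one depends on */
  	int nr_edges;
  	int pos;		/* next edge to look at when we resume */
  	int visited;
  };

  static struct node g[NR_NODES];

  static void topo_from(int start, int *order, int *nr)
  {
  	int stack[NR_NODES], depth = 0;

  	g[start].visited = 1;
  	g[start].pos = 0;
  	stack[depth++] = start;

  	while (depth) {
  		struct node *n = &g[stack[depth - 1]];

  		if (n->pos < n->nr_edges) {
  			int next = n->edges[n->pos++]; /* resume, don't rescan */

  			if (!g[next].visited) {
  				g[next].visited = 1;
  				g[next].pos = 0;
  				stack[depth++] = next;
  			}
  		} else {
  			/* all dependencies emitted; this node comes next */
  			order[(*nr)++] = stack[--depth];
  		}
  	}
  }

  int main(void)
  {
  	int order[NR_NODES], nr = 0, i;

  	/* 0 depends on 1 and 2; 1 and 2 both depend on 3 */
  	g[0].edges[g[0].nr_edges++] = 1;
  	g[0].edges[g[0].nr_edges++] = 2;
  	g[1].edges[g[1].nr_edges++] = 3;
  	g[2].edges[g[2].nr_edges++] = 3;

  	topo_from(0, order, &nr);

  	for (i = 0; i < nr; i++)	/* prints 3 1 2 0 */
  		printf("%d ", order[i]);
  	printf("\n");
  	return 0;
  }

The patch below applies the same idea without an auxiliary array: the
requests themselves form the stack (stack_push/stack_pop) and the saved
cursor is the position within each request's signalers_list.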

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_scheduler.c | 57 ++++++++++++++++-----------
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index 3c1a0b001746..ca681e6d9c6d 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -287,15 +287,33 @@ static void ipi_priority(struct i915_dependency *p, int prio)
 	spin_unlock(&ipi_lock);
 }
 
+static struct i915_request *
+stack_push(struct i915_request *rq,
+	   struct i915_request *stack,
+	   struct list_head *pos)
+{
+	stack->sched.dfs.prev = pos;
+	rq->sched.dfs.next = (struct list_head *)stack;
+	return rq;
+}
+
+static struct i915_request *stack_pop(struct i915_request *rq,
+				      struct list_head **pos)
+{
+	rq = (struct i915_request *)rq->sched.dfs.next;
+	if (rq)
+		*pos = rq->sched.dfs.prev;
+	return rq;
+}
+
 static void __i915_request_set_priority(struct i915_request *rq, int prio)
 {
 	struct intel_engine_cs *engine = rq->engine;
-	struct i915_request *rn;
+	struct list_head *pos = &rq->sched.signalers_list;
 	struct list_head *plist;
-	LIST_HEAD(dfs);
 
 	lockdep_assert_held(&engine->active.lock);
-	list_add(&rq->sched.dfs, &dfs);
+	plist = i915_sched_lookup_priolist(engine, prio);
 
 	/*
 	 * Recursively bump all dependent priorities to match the new request.
@@ -315,40 +333,31 @@ static void __i915_request_set_priority(struct i915_request *rq, int prio)
 	 * end result is a topological list of requests in reverse order, the
 	 * last element in the list is the request we must execute first.
 	 */
-	list_for_each_entry(rq, &dfs, sched.dfs) {
-		struct i915_dependency *p;
-
-		/* Also release any children on this engine that are ready */
-		GEM_BUG_ON(rq->engine != engine);
-
-		for_each_signaler(p, rq) {
+	rq->sched.dfs.next = NULL;
+	do {
+		list_for_each_continue(pos, &rq->sched.signalers_list) {
+			struct i915_dependency *p =
+				list_entry(pos, typeof(*p), signal_link);
 			struct i915_request *s =
 				container_of(p->signaler, typeof(*s), sched);
 
-			GEM_BUG_ON(s == rq);
-
 			if (rq_prio(s) >= prio)
 				continue;
 
 			if (i915_request_completed(s))
 				continue;
 
-			if (s->engine != rq->engine) {
+			if (s->engine != engine) {
 				ipi_priority(p, prio);
 				continue;
 			}
 
-			list_move_tail(&s->sched.dfs, &dfs);
+			/* Remember our position along this branch */
+			rq = stack_push(s, rq, pos);
+			pos = &rq->sched.signalers_list;
 		}
-	}
-
-	plist = i915_sched_lookup_priolist(engine, prio);
-
-	/* Fifo and depth-first replacement ensure our deps execute first */
-	list_for_each_entry_safe_reverse(rq, rn, &dfs, sched.dfs) {
-		GEM_BUG_ON(rq->engine != engine);
 
-		INIT_LIST_HEAD(&rq->sched.dfs);
+		RQ_TRACE(rq, "set-priority:%d\n", prio);
 		WRITE_ONCE(rq->sched.attr.priority, prio);
 
 		/*
@@ -362,12 +371,13 @@ static void __i915_request_set_priority(struct i915_request *rq, int prio)
 		if (!i915_request_is_ready(rq))
 			continue;
 
+		GEM_BUG_ON(rq->engine != engine);
 		if (i915_request_in_priority_queue(rq))
 			list_move_tail(&rq->sched.link, plist);
 
 		/* Defer (tasklet) submission until after all updates. */
 		kick_submission(engine, rq, prio);
-	}
+	} while ((rq = stack_pop(rq, &pos)));
 }
 
 void i915_request_set_priority(struct i915_request *rq, int prio)
@@ -435,7 +445,6 @@ void i915_sched_node_init(struct i915_sched_node *node)
 	INIT_LIST_HEAD(&node->signalers_list);
 	INIT_LIST_HEAD(&node->waiters_list);
 	INIT_LIST_HEAD(&node->link);
-	INIT_LIST_HEAD(&node->dfs);
 
 	i915_sched_node_reinit(node);
 }
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Thread overview: 52+ messages
2020-08-02 16:43 [Intel-gfx] Time, where did it go? Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 01/42] drm/i915: Fix wrong return value Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 02/42] drm/i915/gem: Don't drop the timeline lock during execbuf Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 03/42] drm/i915/gem: Reduce context termination list iteration guard to RCU Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 04/42] drm/i915/gt: Protect context lifetime with RCU Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 05/42] drm/i915/gt: Free stale request on destroying the virtual engine Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 06/42] drm/i915/gt: Track signaled breadcrumbs outside of the breadcrumb spinlock Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 07/42] drm/i915/gt: Split the breadcrumb spinlock between global and contexts Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 08/42] drm/i915: Drop i915_request.lock serialisation around await_start Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 09/42] drm/i915: Drop i915_request.lock requirement for intel_rps_boost() Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 10/42] drm/i915/gem: Reduce ctx->engine_mutex for reading the clone source Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 11/42] drm/i915/gem: Reduce ctx->engines_mutex for get_engines() Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 12/42] drm/i915: Reduce test_and_set_bit to set_bit in i915_request_submit() Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 13/42] drm/i915/gt: Decouple completed requests on unwind Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 14/42] drm/i915/gt: Check for a completed last request once Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 15/42] drm/i915/gt: Refactor heartbeat request construction and submission Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 16/42] drm/i915/gt: Replace direct submit with direct call to tasklet Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 17/42] drm/i915/gt: Use virtual_engine during execlists_dequeue Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 18/42] drm/i915/gt: Decouple inflight virtual engines Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 19/42] drm/i915/gt: Defer schedule_out until after the next dequeue Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 20/42] drm/i915/gt: Resubmit the virtual engine on schedule-out Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 21/42] drm/i915/gt: Simplify virtual engine handling for execlists_hold() Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 22/42] drm/i915/gt: ce->inflight updates are now serialised Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 23/42] drm/i915/gt: Drop atomic for engine->fw_active tracking Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 24/42] drm/i915/gt: Extract busy-stats for ring-scheduler Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 25/42] drm/i915/gt: Convert stats.active to plain unsigned int Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 26/42] drm/i915: Lift waiter/signaler iterators Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 27/42] drm/i915: Strip out internal priorities Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 28/42] drm/i915: Remove I915_USER_PRIORITY_SHIFT Chris Wilson
2020-08-02 16:43 ` [Intel-gfx] [PATCH 29/42] drm/i915/gt: Defer the kmem_cache_free() until after the HW submit Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 30/42] drm/i915: Prune empty priolists Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 31/42] drm/i915: Replace engine->schedule() with a known request operation Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 32/42] drm/i915/gt: Do not suspend bonded requests if one hangs Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 33/42] drm/i915: Teach the i915_dependency to use a double-lock Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 34/42] drm/i915: Restructure priority inheritance Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 35/42] drm/i915/selftests: Measure set-priority duration Chris Wilson
2020-08-02 16:44 ` Chris Wilson [this message]
2020-08-02 16:44 ` [Intel-gfx] [PATCH 37/42] drm/i915/gt: Remove timeslice suppression Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 38/42] drm/i915: Fair low-latency scheduling Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 39/42] drm/i915/gt: Specify a deadline for the heartbeat Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 40/42] drm/i915: Replace the priority boosting for the display with a deadline Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 41/42] drm/i915: Move saturated workload detection back to the context Chris Wilson
2020-08-02 16:44 ` [Intel-gfx] [PATCH 42/42] drm/i915/gt: Another tweak for flushing the tasklets Chris Wilson
2020-08-02 17:14 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/42] drm/i915: Fix wrong return value Patchwork
2020-08-02 17:14 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-08-02 17:34 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-08-02 17:56 ` [Intel-gfx] Time, where did it go? Dave Airlie
2020-08-02 19:36   ` Chris Wilson
2020-08-04 21:45     ` Dave Airlie
2020-08-07  7:12       ` Chris Wilson
2020-08-09 20:01         ` Dave Airlie
2020-08-02 21:03 ` [Intel-gfx] ✗ Fi.CI.IGT: failure for series starting with [01/42] drm/i915: Fix wrong return value Patchwork
