From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Cc: thomas.hellstrom@intel.com
Subject: Re: [Intel-gfx] [PATCH 08/41] drm/i915: Improve DFS for priority inheritance
Date: Tue, 26 Jan 2021 16:51:14 +0000
Message-ID: <7f14fc68-1410-57a1-3e3c-78f1da84453e@linux.intel.com>
In-Reply-To: <4a5b8b67-c917-46d5-9ddb-41bb0159244c@linux.intel.com>
On 26/01/2021 16:42, Tvrtko Ursulin wrote:
>
> On 26/01/2021 16:26, Chris Wilson wrote:
>> Quoting Tvrtko Ursulin (2021-01-26 16:22:58)
>>>
>>>
>>> On 25/01/2021 14:01, Chris Wilson wrote:
>>>> The core of the scheduling algorithm is that we compute the topological
>>>> order of the fence DAG. Knowing that we have a DAG, we should be able to
>>>> use a DFS to compute the topological sort in linear time. However,
>>>> during the conversion of the recursive algorithm into an iterative one,
>>>> the memoization of how far we had progressed down a branch was
>>>> forgotten. The result was that instead of running in linear time, it was
>>>> running in geometric time and could easily run for a few hundred
>>>> milliseconds given a wide enough graph, not the microseconds as
>>>> required.
>>>>
>>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>>> ---
>>>> drivers/gpu/drm/i915/i915_scheduler.c | 58 ++++++++++++++++-----------
>>>> 1 file changed, 34 insertions(+), 24 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
>>>> index 4802c9b1081d..9139a91f0aa3 100644
>>>> --- a/drivers/gpu/drm/i915/i915_scheduler.c
>>>> +++ b/drivers/gpu/drm/i915/i915_scheduler.c
>>>> @@ -234,6 +234,26 @@ void __i915_priolist_free(struct i915_priolist *p)
>>>> kmem_cache_free(global.slab_priorities, p);
>>>> }
>>>> +static struct i915_request *
>>>> +stack_push(struct i915_request *rq,
>>>> + struct i915_request *stack,
>>>> + struct list_head *pos)
>>>> +{
>>>> + stack->sched.dfs.prev = pos;
>>>> + rq->sched.dfs.next = (struct list_head *)stack;
>>>> + return rq;
>>>> +}
>>>> +
>>>> +static struct i915_request *
>>>> +stack_pop(struct i915_request *rq,
>>>> + struct list_head **pos)
>>>> +{
>>>> + rq = (struct i915_request *)rq->sched.dfs.next;
>>>> + if (rq)
>>>> + *pos = rq->sched.dfs.prev;
>>>> + return rq;
>>>> +}
>>>> +
>>>> static inline bool need_preempt(int prio, int active)
>>>> {
>>>> /*
>>>> @@ -298,11 +318,10 @@ static void ipi_priority(struct i915_request *rq, int prio)
>>>> static void __i915_request_set_priority(struct i915_request *rq, int prio)
>>>> {
>>>> struct intel_engine_cs *engine = rq->engine;
>>>> - struct i915_request *rn;
>>>> + struct list_head *pos = &rq->sched.signalers_list;
>>>> struct list_head *plist;
>>>> - LIST_HEAD(dfs);
>>>> - list_add(&rq->sched.dfs, &dfs);
>>>> + plist = i915_sched_lookup_priolist(engine, prio);
>>>> /*
>>>> * Recursively bump all dependent priorities to match the new request.
>>>> @@ -322,40 +341,31 @@ static void __i915_request_set_priority(struct i915_request *rq, int prio)
>>>> * end result is a topological list of requests in reverse order, the
>>>> * last element in the list is the request we must execute first.
>>>> */
>>>> - list_for_each_entry(rq, &dfs, sched.dfs) {
>>>> - struct i915_dependency *p;
>>>> -
>>>> - /* Also release any children on this engine that are ready */
>>>> - GEM_BUG_ON(rq->engine != engine);
>>>> -
>>>> - for_each_signaler(p, rq) {
>>>> + rq->sched.dfs.next = NULL;
>>>> + do {
>>>> + list_for_each_continue(pos, &rq->sched.signalers_list) {
>>>> + struct i915_dependency *p =
>>>> + list_entry(pos, typeof(*p), signal_link);
>>>> struct i915_request *s =
>>>> container_of(p->signaler, typeof(*s), sched);
>>>> - GEM_BUG_ON(s == rq);
>>>> -
>>>> if (rq_prio(s) >= prio)
>>>> continue;
>>>> if (__i915_request_is_complete(s))
>>>> continue;
>>>> - if (s->engine != rq->engine) {
>>>> + if (s->engine != engine) {
>>>> ipi_priority(s, prio);
>>>> continue;
>>>> }
>>>> - list_move_tail(&s->sched.dfs, &dfs);
>>>> + /* Remember our position along this branch */
>>>> + rq = stack_push(s, rq, pos);
>>>> + pos = &rq->sched.signalers_list;
>>>> }
>>>> - }
>>>> - plist = i915_sched_lookup_priolist(engine, prio);
>>>> -
>>>> - /* Fifo and depth-first replacement ensure our deps execute first */
>>>> - list_for_each_entry_safe_reverse(rq, rn, &dfs, sched.dfs) {
>>>> - GEM_BUG_ON(rq->engine != engine);
>>>> -
>>>> - INIT_LIST_HEAD(&rq->sched.dfs);
>>>> + RQ_TRACE(rq, "set-priority:%d\n", prio);
>>>> WRITE_ONCE(rq->sched.attr.priority, prio);
>>>> /*
>>>> @@ -369,12 +379,13 @@ static void __i915_request_set_priority(struct i915_request *rq, int prio)
>>>> if (!i915_request_is_ready(rq))
>>>> continue;
>>>> + GEM_BUG_ON(rq->engine != engine);
>>>> if (i915_request_in_priority_queue(rq))
>>>> list_move_tail(&rq->sched.link, plist);
>>>> /* Defer (tasklet) submission until after all updates. */
>>>> kick_submission(engine, rq, prio);
>>>> - }
>>>> + } while ((rq = stack_pop(rq, &pos)));
>>>> }
>>>> void i915_request_set_priority(struct i915_request *rq, int prio)
>>>> @@ -444,7 +455,6 @@ void i915_sched_node_init(struct i915_sched_node *node)
>>>> INIT_LIST_HEAD(&node->signalers_list);
>>>> INIT_LIST_HEAD(&node->waiters_list);
>>>> INIT_LIST_HEAD(&node->link);
>>>> - INIT_LIST_HEAD(&node->dfs);
>>>> node->ipi_link = NULL;
>>>>
>>>
>>> Pen and paper was needed here but it looks good.
>>
>> If you highlight the areas that need more commentary, I guess
>> a theory-of-operation for stack_push/stack_pop?
>
> At some point I wanted to suggest you change the dfs list_head abuse to
> an explicit rq and list_head pointer pair, to better represent the two
> pieces of information tracked in there.
>
> In terms of commentary I don't really know. Perhaps it could be made
> clearer just with some code restructuring; for instance, maybe a new
> data structure like i915_request_stack would work:
>
> struct i915_request_stack {
>         struct i915_request *prev;
>         struct list_head *pos;
> };
>
> And then push and pop would operate on three distinct data types for
> clarity, with the request stack embedded in the request. I haven't
> really thought it through enough to be sure it works, so just maybe.
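To illustrate, something along these lines - a completely untested
sketch, where the i915_request_stack member (sched.stack here) is made
up rather than taken from the patch:

struct i915_request_stack {
	struct i915_request *prev; /* next outer request on the DFS stack */
	struct list_head *pos; /* where to resume iterating prev's signalers_list */
};

static struct i915_request *
stack_push(struct i915_request *rq,
	   struct i915_request *stack,
	   struct list_head *pos)
{
	stack->sched.stack.pos = pos; /* remember where the old top stopped */
	rq->sched.stack.prev = stack; /* link the new top to the old top */
	return rq;
}

static struct i915_request *
stack_pop(struct i915_request *rq, struct list_head **pos)
{
	rq = rq->sched.stack.prev; /* NULL once the root is popped */
	if (rq)
		*pos = rq->sched.stack.pos; /* resume the suspended walk */
	return rq;
}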
Ah, I remember why I did not suggest this - it was to avoid wasting one
pointer, because of:
struct list_head {
        struct list_head *next, *prev;
};
There isn't an equivalent for just one pointer.
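For the record, my reading of how the two list_head pointers are being
reused in stack_push/stack_pop - my interpretation, not commentary from
the patch itself:

/*
 * The two pointers of sched.dfs are reused as an intrusive DFS stack:
 *
 *   sched.dfs.next - cast to the next outer i915_request on the stack
 *                    (NULL terminates it, set on the root before the
 *                    loop), written by stack_push() on the new top;
 *
 *   sched.dfs.prev - the position saved in the suspended request's
 *                    signalers_list, so that stack_pop() can hand the
 *                    iterator back to list_for_each_continue() exactly
 *                    where it left off when we descended into a branch.
 *
 * That saved position is the memoization which keeps the iterative
 * DFS linear.
 */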
Regards,
Tvrtko