From: Chris Wilson <chris@chris-wilson.co.uk>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 06/12] drm/i915/scheduler: Execute requests in order of priorities
Date: Thu, 3 Nov 2016 17:44:56 +0000
Message-ID: <20161103174456.GB5674@nuc-i3427.alporthouse.com>
In-Reply-To: <435979df-37be-cbab-b6fc-0e0f9002bbb6@linux.intel.com>

On Thu, Nov 03, 2016 at 04:21:25PM +0000, Tvrtko Ursulin wrote:
> >+		rb = rb_next(rb);
> >+		rb_erase(&cursor->priotree.node, &engine->execlist_queue);
> >+		RB_CLEAR_NODE(&cursor->priotree.node);
> >+		cursor->priotree.priority = INT_MAX;
> >+
> >+		/* We keep the previous context alive until we retire the
> >+		 * following request. This ensures that the context object
> >+		 * is still pinned for any residual writes the HW makes into
> >+		 * it on the context switch into the next object following the
> >+		 * breadcrumb. Otherwise, we may retire the context too early.
> >+		 */
> >+		cursor->previous_context = engine->last_context;
> >+		engine->last_context = cursor->ctx;
> >+
> 
> Will we need a knob to control the amount of context merging? Until
> preemption that is, similar to the GuC queue depth "nerfing" from the
> other patch.

Right, timeslicing + preemption is perhaps the best answer here. A
knob here would be one of those policy decisions that we want to avoid
:) Not quite sure how we would go about autotuning it. A GL workload
looks quite different to a VK workload and quite different again to a CL
workload. One can only imagine the horrors on the transcode side :)

Making those decisions is the real heart of a scheduler.

This code should be more like:

while (req = scheduler.next_request()) {
	if (!is_compatible(port, req)) {
		/* Context change: move on to the next ELSP port and
		 * stop once both ports are in use.
		 */
		if (port++)
			break;
	}

	finish_request(req);
	scheduler.remove_request(req);

	/* Remember the last request coalesced into the current port */
	port = req;
}
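
For reference, a minimal sketch of the is_compatible() test assumed
above: requests can only be coalesced into the same ELSP port if they
execute in the same context. The helper name and signature are made up
for illustration, it is not something that exists in the series:

/* Hypothetical helper: "port" is the last request coalesced into the
 * current ELSP port (NULL if the port is empty), "req" is the next
 * candidate handed out by the scheduler.
 */
static bool is_compatible(const struct drm_i915_gem_request *port,
			  const struct drm_i915_gem_request *req)
{
	return port == NULL || port->ctx == req->ctx;
}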

Given that at least this block of code is duplicated for the GuC, maybe
it is worth making a start on it. I was trying to avoid doing that
because, in my mind, it is the preemption that is the complex part and
should drive the next steps.

> >+static void update_priorities(struct i915_priotree *pt, int prio)
> >+{
> >+	struct drm_i915_gem_request *request =
> >+		container_of(pt, struct drm_i915_gem_request, priotree);
> >+	struct intel_engine_cs *engine = request->engine;
> >+	struct i915_dependency *dep;
> >+
> >+	if (prio <= READ_ONCE(pt->priority))
> >+		return;
> >+
> >+	/* Recursively bump all dependent priorities to match the new request */
> >+	list_for_each_entry(dep, &pt->pre_list, pre_link)
> >+		update_priorities(dep->signal, prio);
> 
> John got into trouble with recursion in his scheduler, used for the
> same thing AFAIR. Or was it the priority bumping? Either way, it
> could be imperative to avoid it.

Yup. I haven't thrown 100,000 requests at it for this very reason.

Hmm, if I put an additional struct list_head into i915_dependency, we
can build a search list using it (praise be to the BKL).
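
Something like the below, assuming we add a spare dfs_link list_head to
i915_dependency for the search list. This is only a sketch of the idea:
the function name and dfs_link field are made up, and dfs_link is
assumed to be initialised with INIT_LIST_HEAD when the dependency is
created.

static void update_priorities_iterative(struct i915_priotree *root, int prio)
{
	struct i915_dependency *dep, *p;
	LIST_HEAD(dfs);

	/* Seed the search list with the root's direct dependencies */
	list_for_each_entry(dep, &root->pre_list, pre_link)
		list_add_tail(&dep->dfs_link, &dfs);

	/* New edges are appended at the tail and picked up later by the
	 * same walk, so the whole graph is covered without recursion.
	 */
	list_for_each_entry(dep, &dfs, dfs_link) {
		struct i915_priotree *pt = dep->signal;

		if (prio <= READ_ONCE(pt->priority))
			continue;

		list_for_each_entry(p, &pt->pre_list, pre_link)
			if (list_empty(&p->dfs_link))
				list_add_tail(&p->dfs_link, &dfs);
	}

	/* ... then take the timeline lock, bump and reinsert each request
	 * on the dfs list just as update_priorities() does today, and
	 * reinitialise the dfs_links for the next walk.
	 */
}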
 
> >+	/* Fifo and depth-first replacement ensure our deps execute before us */
> >+	spin_lock_irq(&engine->timeline->lock);
> >+	pt->priority = prio;
> >+	if (!RB_EMPTY_NODE(&pt->node)) {
> >+		rb_erase(&pt->node, &engine->execlist_queue);
> >+		if (insert_request(pt, &engine->execlist_queue))
> >+			engine->execlist_first = &pt->node;
> >+	}
> >+	spin_unlock_irq(&engine->timeline->lock);
> >+}
> >+
> >+static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
> >+{
> >+	/* Make sure that our entire dependency chain shares our prio */
> >+	update_priorities(&request->priotree, prio);
> >+
> >+	/* XXX Do we need to preempt to make room for us and our deps? */
> >+}
> 
> Hm, why isn't the scheduling backend-agnostic? Does doing it like this
> make the first implementation much simpler?

The scheduling is? The scheduling is just the policy. But we do need
entry points to call into the scheduler, such as a change in the request
tree (here), completion of a request (the context-switch interrupt) or
completion of a time slice. engine->schedule() is not the best name,
perhaps schedule_request(). Using engine is mostly a placeholder (a
convenient place to store a vfunc), but I do think we will end up with
different interactions with the scheduler based on the backend (in
particular, feeding the GuC with dependency information). So, yes, this
is the easy route that should evolve as the need changes; for now, just
pencil in the different callbacks as entry points where the backend and
scheduler should collude.
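
Roughly the kind of hooks I have in mind, purely as a sketch: none of
these names exist in the series, and the single engine->schedule() vfunc
stands in for all of them today.

struct i915_sched_backend_ops {
	/* a request joins the tree or has its priority changed */
	void (*schedule_request)(struct drm_i915_gem_request *rq, int prio);

	/* a request completed (context-switch interrupt) */
	void (*request_completed)(struct intel_engine_cs *engine);

	/* the current time slice expired */
	void (*timeslice_expired)(struct intel_engine_cs *engine);
};
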
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
