From: John.C.Harrison@Intel.com
To: Intel-GFX@Lists.FreeDesktop.Org
Subject: [PATCH 07/39] drm/i915: Start of GPU scheduler
Date: Mon, 23 Nov 2015 11:39:02 +0000
Message-ID: <1448278774-31376-8-git-send-email-John.C.Harrison@Intel.com>
In-Reply-To: <1448278774-31376-1-git-send-email-John.C.Harrison@Intel.com>

From: John Harrison <John.C.Harrison@Intel.com>

Initial creation of scheduler source files. Note that this patch
implements most of the scheduler functionality but does not hook it
into the driver yet. It also leaves the scheduler code in 'pass
through' mode so that even when it is hooked in, it will not actually
do very much. This allows the hooks to be added one at a time in
bite-sized chunks; only when the scheduler is finally enabled at the
end does anything start happening.

The general theory of operation is that when batch buffers are
submitted to the driver, the execbuffer() code assigns a unique
request to each and then packages up all the information required to
execute the batch buffer at a later time. This package is handed over
to the scheduler, which adds it to an internal node list. The
scheduler also scans the list of objects associated with the batch
buffer and compares them against the objects already in use by other
buffers in the node list. If matches are found then the new batch
buffer node is marked as being dependent upon the matching nodes. The
same is done for the context object. The scheduler also bumps up the
priority of such matching nodes, on the grounds that the more
dependencies a given batch buffer has, the more important it is
likely to be.
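
In outline, the dependency scan in i915_scheduler_queue_execbuffer()
(from the patch below) reduces to the following; locking, the
dep_list allocation and the early exits are omitted:

	for (r = 0; r < I915_NUM_RINGS; r++) {
		list_for_each_entry(test, &scheduler->node_queue[r], link) {
			if (I915_SQS_IS_COMPLETE(test))
				continue;

			/* Same context on the same ring must stay in order. */
			found = (node->params.ctx == test->params.ctx) &&
				(node->params.ring == test->params.ring);

			/* As must any two batches touching a common object. */
			for (i = 0; (i < node->num_objs) && !found; i++)
				for (j = 0; j < test->num_objs; j++)
					if (node->saved_objects[i].obj ==
					    test->saved_objects[j].obj)
						found = true;

			if (found)
				node->dep_list[node->num_deps++] = test;
		}
	}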

The scheduler aims to have a given (tuneable) number of batch buffers
in flight on the hardware at any given time. If fewer than this are
currently executing when a new node is queued, then the node is passed
straight through to the submit function. Otherwise it is simply added
to the queue and the driver returns to user land.
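
The queue-or-submit decision at the end of
i915_scheduler_queue_execbuffer() is then simply:

	list_add_tail(&node->link, &scheduler->node_queue[ring->id]);

	not_flying = i915_scheduler_count_flying(scheduler, ring) <
						 scheduler->min_flying;

	spin_unlock_irqrestore(&scheduler->lock, flags);

	if (not_flying)
		i915_scheduler_submit(ring, true);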

As each batch buffer completes, it raises an interrupt which wakes up
the scheduler. Note that it is possible for multiple buffers to
complete before the IRQ handler gets to run. Further, it is possible
for the seqno values to arrive out of order (particularly once
pre-emption is enabled). However, the scheduler keeps the list of
executing buffers in order of hardware submission. Thus it can scan
through the list until a matching seqno is found and then mark all in
flight nodes from that point on as completed.
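
In pseudo-code, the completion scan described above looks something
like this (a sketch only; in this patch the request-to-node mapping
in i915_scheduler_notify_request() is still an XXX stub that handles
one node at a time):

	found = false;
	/* The list head holds the most recent hardware submission. */
	list_for_each_entry(node, &scheduler->node_queue[ring->id], link) {
		if (!I915_SQS_IS_FLYING(node))
			continue;

		if (node->params.request->seqno == seqno)
			found = true;

		if (found)	/* This node and all older ones have popped. */
			node->status = i915_sqs_complete;
	}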

A deferred work queue is also poked by the interrupt handler. When
this wakes up it can do more involved processing such as actually
removing completed nodes from the queue and freeing up the resources
associated with them (internal memory allocations, DRM object
references, context references, etc.). The work handler also checks
the in-flight count and calls the submission code if a new slot has
appeared.
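
That processing is i915_scheduler_remove() in the patch below;
heavily condensed, its shape is:

	spin_lock_irqsave(&scheduler->lock, flags);
	/* ... move eligible completed nodes onto a private 'remove' list,
	 * counting how many nodes are still queued and flying ... */
	do_submit = (queued > 0) && (flying < scheduler->min_flying);
	spin_unlock_irqrestore(&scheduler->lock, flags);

	if (do_submit)
		i915_scheduler_submit(ring, true);

	while (!list_empty(&remove)) {
		node = list_first_entry(&remove, typeof(*node), link);
		list_del(&node->link);

		/* Free everything that is owned by the node: */
		i915_gem_request_unreference(node->params.request);
		kfree(node->params.cliprects);
		kfree(node->dep_list);
		kfree(node);
	}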

When the scheduler's submit code is called, it scans the queued node
list for the highest priority node that has no unmet dependencies.
Note that the dependency calculation is complex as it must take
inter-ring dependencies and potential pre-emptions into account. Note
also that in the future this will be extended to include external
dependencies such as the Android Native Sync file descriptors and/or
the Linux dma-buf synchronisation scheme.
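
Node selection, condensed from i915_scheduler_pop_from_queue_locked()
below (a dependency only counts as unmet if it is still queued on any
ring, or flying on a different ring):

	best = NULL;
	list_for_each_entry(node, &scheduler->node_queue[ring->id], link) {
		if (!I915_SQS_IS_QUEUED(node))
			continue;

		blocked = false;
		for (i = 0; i < node->num_deps; i++)
			if (i915_scheduler_is_dependency_valid(node, i))
				blocked = true;

		if (!blocked && (!best || (node->priority > best->priority)))
			best = node;
	}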

If a suitable node is found then it is sent to execbuf_final() for
submission to the hardware. The in-flight count is then re-checked and
a new node popped from the list if appropriate.
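
The resulting launch loop in i915_scheduler_submit(), stripped of its
error handling:

	do {
		/* Put the popped node back on the queue, marked in flight. */
		i915_scheduler_fly_node(node);

		ret = dev_priv->gt.execbuf_final(&node->params);
		/* ... on failure the node is requeued or killed ... */

		/* Keep launching until the sky is sufficiently full. */
		if (i915_scheduler_count_flying(scheduler, ring) >=
						scheduler->min_flying)
			break;

		ret = i915_scheduler_pop_from_queue_locked(ring, &node, &flags);
	} while (ret == 0);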

Note that this patch does not implement pre-emptive scheduling. Only
basic scheduling by re-ordering batch buffer submission is currently
implemented.

v2: Changed priority levels to +/-1023 due to feedback from Chris
Wilson.

Removed redundant index from scheduler node.

Changed time stamps to use jiffies instead of raw monotonic. This
provides lower resolution but improved compatibility with other i915
code.

Major re-write of completion tracking code due to struct fence
conversion. The scheduler no longer has its own private IRQ handler
but just lets the existing request code handle completion events.
Instead, the scheduler now hooks into the request notify code to be
told when a request has completed.

Reduced driver mutex locking scope. Removal of scheduler nodes no
longer grabs the mutex lock.

Change-Id: I1e08f59e650a3c2bbaaa9de7627da33849b06106
For: VIZ-1587
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
---
 drivers/gpu/drm/i915/Makefile         |   1 +
 drivers/gpu/drm/i915/i915_drv.h       |   4 +
 drivers/gpu/drm/i915/i915_gem.c       |   5 +
 drivers/gpu/drm/i915/i915_scheduler.c | 740 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_scheduler.h |  90 +++++
 5 files changed, 840 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/i915_scheduler.c
 create mode 100644 drivers/gpu/drm/i915/i915_scheduler.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 15398c5..79cb38b 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -10,6 +10,7 @@ ccflags-y := -Werror
 i915-y := i915_drv.o \
 	  i915_irq.o \
 	  i915_params.o \
+	  i915_scheduler.o \
           i915_suspend.o \
 	  i915_sysfs.o \
 	  intel_csr.o \
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 5d390d9..23aed32 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1695,6 +1695,8 @@ struct i915_execbuffer_params {
 	struct drm_i915_gem_request     *request;
 };
 
+struct i915_scheduler;
+
 /* used in computing the new watermarks state */
 struct intel_wm_config {
 	unsigned int num_pipes_active;
@@ -1947,6 +1949,8 @@ struct drm_i915_private {
 
 	struct i915_runtime_pm pm;
 
+	struct i915_scheduler *scheduler;
+
 	/* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
 	struct {
 		int (*execbuf_submit)(struct i915_execbuffer_params *params,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b23746a..5239340 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -38,6 +38,7 @@
 #include <linux/pci.h>
 #include <linux/dma-buf.h>
 #include <../drivers/android/sync.h>
+#include "i915_scheduler.h"
 
 #define RQ_BUG_ON(expr)
 
@@ -5277,6 +5278,10 @@ int i915_gem_init(struct drm_device *dev)
 	 */
 	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
 
+	ret = i915_scheduler_init(dev);
+	if (ret)
+		goto out_unlock;
+
 	ret = i915_gem_init_userptr(dev);
 	if (ret)
 		goto out_unlock;
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
new file mode 100644
index 0000000..76973ab
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -0,0 +1,740 @@
+/*
+ * Copyright (c) 2014 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_drv.h"
+#include "intel_drv.h"
+#include "i915_scheduler.h"
+
+static int         i915_scheduler_fly_node(struct i915_scheduler_queue_entry *node);
+static int         i915_scheduler_remove_dependent(struct i915_scheduler *scheduler,
+						   struct i915_scheduler_queue_entry *remove);
+static int         i915_scheduler_submit(struct intel_engine_cs *ring,
+					 bool is_locked);
+static uint32_t    i915_scheduler_count_flying(struct i915_scheduler *scheduler,
+					       struct intel_engine_cs *ring);
+static void        i915_scheduler_priority_bump_clear(struct i915_scheduler *scheduler);
+static int         i915_scheduler_priority_bump(struct i915_scheduler *scheduler,
+						struct i915_scheduler_queue_entry *target,
+						uint32_t bump);
+
+int i915_scheduler_init(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct i915_scheduler   *scheduler = dev_priv->scheduler;
+	int                     r;
+
+	if (scheduler)
+		return 0;
+
+	scheduler = kzalloc(sizeof(*scheduler), GFP_KERNEL);
+	if (!scheduler)
+		return -ENOMEM;
+
+	spin_lock_init(&scheduler->lock);
+
+	for (r = 0; r < I915_NUM_RINGS; r++)
+		INIT_LIST_HEAD(&scheduler->node_queue[r]);
+
+	/* Default tuning values: */
+	scheduler->priority_level_min     = -1023;
+	scheduler->priority_level_max     = 1023;
+	scheduler->priority_level_preempt = 900;
+	scheduler->min_flying             = 2;
+
+	dev_priv->scheduler = scheduler;
+
+	return 0;
+}
+
+int i915_scheduler_queue_execbuffer(struct i915_scheduler_queue_entry *qe)
+{
+	struct drm_i915_private *dev_priv = qe->params.dev->dev_private;
+	struct i915_scheduler   *scheduler = dev_priv->scheduler;
+	struct intel_engine_cs  *ring = qe->params.ring;
+	struct i915_scheduler_queue_entry  *node;
+	struct i915_scheduler_queue_entry  *test;
+	unsigned long       flags;
+	bool                not_flying, found;
+	int                 i, j, r;
+	int                 incomplete = 0;
+
+	BUG_ON(!scheduler);
+
+	if (1/*i915.scheduler_override & i915_so_direct_submit*/) {
+		int ret;
+
+		scheduler->flags[qe->params.ring->id] |= i915_sf_submitting;
+		ret = dev_priv->gt.execbuf_final(&qe->params);
+		scheduler->flags[qe->params.ring->id] &= ~i915_sf_submitting;
+
+		/*
+		 * Don't do any clean up on failure because the caller will
+		 * do it all anyway.
+		 */
+		if (ret)
+			return ret;
+
+		/* Free everything that is owned by the QE structure: */
+		kfree(qe->params.cliprects);
+		if (qe->params.dispatch_flags & I915_DISPATCH_SECURE)
+			i915_gem_execbuff_release_batch_obj(qe->params.batch_obj);
+
+		return 0;
+	}
+
+	node = kmalloc(sizeof(*node), GFP_KERNEL);
+	if (!node)
+		return -ENOMEM;
+
+	*node = *qe;
+	INIT_LIST_HEAD(&node->link);
+	node->status = i915_sqs_queued;
+	node->stamp  = jiffies;
+	i915_gem_request_reference(node->params.request);
+
+	/* Need to determine the number of incomplete entries in the list as
+	 * that will be the maximum size of the dependency list.
+	 *
+	 * Note that the allocation must not be made with the spinlock acquired
+	 * as kmalloc can sleep. However, the unlock/relock is safe because no
+	 * new entries can be queued up during the unlock as the i915 driver
+	 * mutex is still held. Entries could be removed from the list but that
+	 * just means the dep_list will be over-allocated which is fine.
+	 */
+	spin_lock_irqsave(&scheduler->lock, flags);
+	for (r = 0; r < I915_NUM_RINGS; r++) {
+		list_for_each_entry(test, &scheduler->node_queue[r], link) {
+			if (I915_SQS_IS_COMPLETE(test))
+				continue;
+
+			incomplete++;
+		}
+	}
+
+	/* Temporarily unlock to allocate memory: */
+	spin_unlock_irqrestore(&scheduler->lock, flags);
+	if (incomplete) {
+		node->dep_list = kmalloc(sizeof(node->dep_list[0]) * incomplete,
+					 GFP_KERNEL);
+		if (!node->dep_list) {
+			kfree(node);
+			return -ENOMEM;
+		}
+	} else
+		node->dep_list = NULL;
+
+	spin_lock_irqsave(&scheduler->lock, flags);
+	node->num_deps = 0;
+
+	if (node->dep_list) {
+		for (r = 0; r < I915_NUM_RINGS; r++) {
+			list_for_each_entry(test, &scheduler->node_queue[r], link) {
+				if (I915_SQS_IS_COMPLETE(test))
+					continue;
+
+				/*
+				 * Batches on the same ring for the same
+				 * context must be kept in order.
+				 */
+				found = (node->params.ctx == test->params.ctx) &&
+					(node->params.ring == test->params.ring);
+
+				/*
+				 * Batches working on the same objects must
+				 * be kept in order.
+				 */
+				for (i = 0; (i < node->num_objs) && !found; i++) {
+					for (j = 0; j < test->num_objs; j++) {
+						if (node->saved_objects[i].obj !=
+							    test->saved_objects[j].obj)
+							continue;
+
+						found = true;
+						break;
+					}
+				}
+
+				if (found) {
+					node->dep_list[node->num_deps] = test;
+					node->num_deps++;
+				}
+			}
+		}
+
+		BUG_ON(node->num_deps > incomplete);
+	}
+
+	if (node->priority > scheduler->priority_level_max)
+		node->priority = scheduler->priority_level_max;
+	else if (node->priority < scheduler->priority_level_min)
+		node->priority = scheduler->priority_level_min;
+
+	if ((node->priority > 0) && node->num_deps) {
+		i915_scheduler_priority_bump_clear(scheduler);
+
+		for (i = 0; i < node->num_deps; i++)
+			i915_scheduler_priority_bump(scheduler,
+					node->dep_list[i], node->priority);
+	}
+
+	list_add_tail(&node->link, &scheduler->node_queue[ring->id]);
+
+	not_flying = i915_scheduler_count_flying(scheduler, ring) <
+						 scheduler->min_flying;
+
+	spin_unlock_irqrestore(&scheduler->lock, flags);
+
+	if (not_flying)
+		i915_scheduler_submit(ring, true);
+
+	return 0;
+}
+
+static int i915_scheduler_fly_node(struct i915_scheduler_queue_entry *node)
+{
+	struct drm_i915_private *dev_priv = node->params.dev->dev_private;
+	struct i915_scheduler   *scheduler = dev_priv->scheduler;
+	struct intel_engine_cs  *ring;
+
+	BUG_ON(!scheduler);
+	BUG_ON(!node);
+	BUG_ON(node->status != i915_sqs_popped);
+
+	ring = node->params.ring;
+
+	/* Add the node (which should currently be in state popped) to the
+	 * front of the queue. This ensures that flying nodes are always held
+	 * in hardware submission order. */
+	list_add(&node->link, &scheduler->node_queue[ring->id]);
+
+	node->status = i915_sqs_flying;
+
+	if (!(scheduler->flags[ring->id] & i915_sf_interrupts_enabled)) {
+		bool    success = true;
+
+		success = ring->irq_get(ring);
+		if (success)
+			scheduler->flags[ring->id] |= i915_sf_interrupts_enabled;
+		else
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+/*
+ * Nodes are considered valid dependencies if they are queued on any ring or
+ * if they are in flight on a different ring. In flight on the same ring is no
+ * longer interesting for non-pre-emptive nodes as the ring serialises execution.
+ * For pre-empting nodes, all in flight dependencies are valid as they must not
+ * be jumped by the act of pre-empting.
+ *
+ * Anything that is neither queued nor flying is uninteresting.
+ */
+static inline bool i915_scheduler_is_dependency_valid(
+			struct i915_scheduler_queue_entry *node, uint32_t idx)
+{
+	struct i915_scheduler_queue_entry *dep;
+
+	dep = node->dep_list[idx];
+	if (!dep)
+		return false;
+
+	if (I915_SQS_IS_QUEUED(dep))
+		return true;
+
+	if (I915_SQS_IS_FLYING(dep)) {
+		if (node->params.ring != dep->params.ring)
+			return true;
+	}
+
+	return false;
+}
+
+static uint32_t i915_scheduler_count_flying(struct i915_scheduler *scheduler,
+					    struct intel_engine_cs *ring)
+{
+	struct i915_scheduler_queue_entry *node;
+	uint32_t                          flying = 0;
+
+	list_for_each_entry(node, &scheduler->node_queue[ring->id], link)
+		if (I915_SQS_IS_FLYING(node))
+			flying++;
+
+	return flying;
+}
+
+/* Add a popped node back in to the queue. For example, because the ring was
+ * hung when execfinal() was called and thus the ring submission needs to be
+ * retried later. */
+static void i915_scheduler_node_requeue(struct i915_scheduler_queue_entry *node)
+{
+	BUG_ON(!node);
+	BUG_ON(!I915_SQS_IS_FLYING(node));
+
+	node->status = i915_sqs_queued;
+	node->params.request->seqno = 0;
+}
+
+/* Give up on a popped node completely. For example, because it is causing the
+ * ring to hang or is using some resource that no longer exists. */
+static void i915_scheduler_node_kill(struct i915_scheduler_queue_entry *node)
+{
+	BUG_ON(!node);
+	BUG_ON(!I915_SQS_IS_FLYING(node));
+
+	node->status = i915_sqs_dead;
+}
+
+/*
+ * A sequence number has popped out of the hardware and the request handling
+ * code has mapped it back to a request and will mark that request complete.
+ * It also calls this function to notify the scheduler about the completion
+ * so the scheduler's node can be updated appropriately.
+ * Returns true if the request is scheduler managed, false if not.
+ */
+bool i915_scheduler_notify_request(struct drm_i915_gem_request *req)
+{
+	struct drm_i915_private *dev_priv  = to_i915(req->ring->dev);
+	struct i915_scheduler   *scheduler = dev_priv->scheduler;
+	/* XXX: Need to map back from request to node */
+	struct i915_scheduler_queue_entry *node = NULL;
+	unsigned long       flags;
+
+	if (!node)
+		return false;
+
+	spin_lock_irqsave(&scheduler->lock, flags);
+
+	WARN_ON(!I915_SQS_IS_FLYING(node));
+
+	/* Node was in flight so mark it as complete. */
+	if (req->cancelled)
+		node->status = i915_sqs_dead;
+	else
+		node->status = i915_sqs_complete;
+
+	spin_unlock_irqrestore(&scheduler->lock, flags);
+
+	/*
+	 * XXX: If the in-flight list is now empty then new work should be
+	 * submitted. However, this function is called from interrupt context
+	 * and thus cannot acquire mutex locks and other such things that are
+	 * necessary for fresh submission.
+	 */
+
+	return true;
+}
+
+int i915_scheduler_remove(struct intel_engine_cs *ring)
+{
+	struct drm_i915_private *dev_priv = ring->dev->dev_private;
+	struct i915_scheduler   *scheduler = dev_priv->scheduler;
+	struct i915_scheduler_queue_entry  *node, *node_next;
+	unsigned long       flags;
+	int                 flying = 0, queued = 0;
+	int                 ret = 0;
+	bool                do_submit;
+	uint32_t            min_seqno;
+	struct list_head    remove;
+
+	if (list_empty(&scheduler->node_queue[ring->id]))
+		return 0;
+
+	spin_lock_irqsave(&scheduler->lock, flags);
+
+	/* i915_scheduler_dump_locked(ring, "remove/pre"); */
+
+	/*
+	 * In the case where the system is idle, starting 'min_seqno' from a big
+	 * number will cause all nodes to be removed as they are now back to
+	 * being in-order. However, this will be a problem if the last one to
+	 * complete was actually out-of-order as the ring seqno value will be
+	 * lower than one or more completed buffers. Thus code looking for the
+	 * completion of said buffers will wait forever.
+	 * Instead, use the hardware seqno as the starting point. This means
+	 * that some buffers might be kept around even in a completely idle
+	 * system but it should guarantee that no-one ever gets confused when
+	 * waiting for buffer completion.
+	 */
+	min_seqno = ring->get_seqno(ring, true);
+
+	list_for_each_entry(node, &scheduler->node_queue[ring->id], link) {
+		if (I915_SQS_IS_QUEUED(node))
+			queued++;
+		else if (I915_SQS_IS_FLYING(node))
+			flying++;
+		else if (I915_SQS_IS_COMPLETE(node))
+			continue;
+
+		if (node->params.request->seqno == 0)
+			continue;
+
+		if (!i915_seqno_passed(node->params.request->seqno, min_seqno))
+			min_seqno = node->params.request->seqno;
+	}
+
+	INIT_LIST_HEAD(&remove);
+	list_for_each_entry_safe(node, node_next, &scheduler->node_queue[ring->id], link) {
+		/*
+		 * Only remove completed nodes which have a lower seqno than
+		 * all pending nodes. While there is the possibility of the
+		 * ring's seqno counting backwards, all higher buffers must
+		 * be remembered so that the 'i915_seqno_passed()' test can
+		 * report that they have in fact passed.
+		 *
+		 * NB: This is not true for 'dead' nodes. The GPU reset causes
+		 * the software seqno to restart from its initial value. Thus
+		 * the dead nodes must be removed even though their seqno values
+		 * are potentially vastly greater than the current ring seqno.
+		 */
+		if (!I915_SQS_IS_COMPLETE(node))
+			continue;
+
+		if (node->status != i915_sqs_dead) {
+			if (i915_seqno_passed(node->params.request->seqno, min_seqno) &&
+			    (node->params.request->seqno != min_seqno))
+				continue;
+		}
+
+		list_del(&node->link);
+		list_add(&node->link, &remove);
+
+		/* Strip the dependency info while the mutex is still locked */
+		i915_scheduler_remove_dependent(scheduler, node);
+
+		continue;
+	}
+
+	/*
+	 * No idea why but this seems to cause problems occasionally.
+	 * Note that the 'irq_put' code is internally reference counted
+	 * and spin_locked so it should be safe to call.
+	 */
+	/*if ((scheduler->flags[ring->id] & i915_sf_interrupts_enabled) &&
+	    (first_flight[ring->id] == NULL)) {
+		ring->irq_put(ring);
+		scheduler->flags[ring->id] &= ~i915_sf_interrupts_enabled;
+	}*/
+
+	/* Launch more packets now? */
+	do_submit = (queued > 0) && (flying < scheduler->min_flying);
+
+	spin_unlock_irqrestore(&scheduler->lock, flags);
+
+	if (!do_submit && list_empty(&remove))
+		return ret;
+
+	mutex_lock(&ring->dev->struct_mutex);
+
+	if (do_submit)
+		ret = i915_scheduler_submit(ring, true);
+
+	while (!list_empty(&remove)) {
+		node = list_first_entry(&remove, typeof(*node), link);
+		list_del(&node->link);
+
+		/* The batch buffer must be unpinned before it is unreferenced
+		 * otherwise the unpin fails with a missing vma!? */
+		if (node->params.dispatch_flags & I915_DISPATCH_SECURE)
+			i915_gem_execbuff_release_batch_obj(node->params.batch_obj);
+
+		/* Free everything that is owned by the node: */
+		i915_gem_request_unreference(node->params.request);
+		kfree(node->params.cliprects);
+		kfree(node->dep_list);
+		kfree(node);
+	}
+
+	mutex_unlock(&ring->dev->struct_mutex);
+
+	return ret;
+}
+
+static void i915_scheduler_priority_bump_clear(struct i915_scheduler *scheduler)
+{
+	struct i915_scheduler_queue_entry *node;
+	int i;
+
+	/*
+	 * Ensure circular dependencies don't cause problems and that a bump
+	 * by object usage only bumps each using buffer once:
+	 */
+	for (i = 0; i < I915_NUM_RINGS; i++) {
+		list_for_each_entry(node, &scheduler->node_queue[i], link)
+			node->bumped = false;
+	}
+}
+
+static int i915_scheduler_priority_bump(struct i915_scheduler *scheduler,
+					struct i915_scheduler_queue_entry *target,
+					uint32_t bump)
+{
+	uint32_t new_priority;
+	int      i, count;
+
+	if (target->priority >= scheduler->priority_level_max)
+		return 1;
+
+	if (target->bumped)
+		return 0;
+
+	new_priority = target->priority + bump;
+	if ((new_priority <= target->priority) ||
+	    (new_priority > scheduler->priority_level_max))
+		target->priority = scheduler->priority_level_max;
+	else
+		target->priority = new_priority;
+
+	count = 1;
+	target->bumped = true;
+
+	for (i = 0; i < target->num_deps; i++) {
+		if (!target->dep_list[i])
+			continue;
+
+		if (target->dep_list[i]->bumped)
+			continue;
+
+		count += i915_scheduler_priority_bump(scheduler,
+						      target->dep_list[i],
+						      bump);
+	}
+
+	return count;
+}
+
+static int i915_scheduler_pop_from_queue_locked(struct intel_engine_cs *ring,
+				    struct i915_scheduler_queue_entry **pop_node,
+				    unsigned long *flags)
+{
+	struct drm_i915_private            *dev_priv = ring->dev->dev_private;
+	struct i915_scheduler              *scheduler = dev_priv->scheduler;
+	struct i915_scheduler_queue_entry  *best;
+	struct i915_scheduler_queue_entry  *node;
+	int     ret;
+	int     i;
+	bool	any_queued;
+	bool	has_local, has_remote, only_remote;
+
+	*pop_node = NULL;
+	ret = -ENODATA;
+
+	any_queued = false;
+	only_remote = false;
+	best = NULL;
+
+	list_for_each_entry(node, &scheduler->node_queue[ring->id], link) {
+		if (!I915_SQS_IS_QUEUED(node))
+			continue;
+		any_queued = true;
+
+		has_local  = false;
+		has_remote = false;
+		for (i = 0; i < node->num_deps; i++) {
+			if (!i915_scheduler_is_dependency_valid(node, i))
+				continue;
+
+			if (node->dep_list[i]->params.ring == node->params.ring)
+				has_local = true;
+			else
+				has_remote = true;
+		}
+
+		if (has_remote && !has_local)
+			only_remote = true;
+
+		if (!has_local && !has_remote) {
+			if (!best ||
+			    (node->priority > best->priority))
+				best = node;
+		}
+	}
+
+	if (best) {
+		list_del(&best->link);
+
+		INIT_LIST_HEAD(&best->link);
+		best->status  = i915_sqs_popped;
+
+		ret = 0;
+	} else {
+		/* Can only get here if:
+		 * (a) there are no buffers in the queue
+		 * (b) all queued buffers are dependent on other buffers
+		 *     e.g. on a buffer that is in flight on a different ring
+		 */
+		if (only_remote) {
+			/* The only dependent buffers are on another ring. */
+			ret = -EAGAIN;
+		} else if (any_queued) {
+			/* It seems that something has gone horribly wrong! */
+			DRM_ERROR("Broken dependency tracking on ring %d!\n",
+				  (int) ring->id);
+		}
+	}
+
+	/* i915_scheduler_dump_queue_pop(ring, best); */
+
+	*pop_node = best;
+	return ret;
+}
+
+static int i915_scheduler_submit(struct intel_engine_cs *ring, bool was_locked)
+{
+	struct drm_device   *dev = ring->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct i915_scheduler   *scheduler = dev_priv->scheduler;
+	struct i915_scheduler_queue_entry  *node;
+	unsigned long       flags;
+	int                 ret = 0, count = 0;
+
+	if (!was_locked) {
+		ret = i915_mutex_lock_interruptible(dev);
+		if (ret)
+			return ret;
+	}
+
+	BUG_ON(!mutex_is_locked(&dev->struct_mutex));
+
+	spin_lock_irqsave(&scheduler->lock, flags);
+
+	/* First time around, complain if anything unexpected occurs: */
+	ret = i915_scheduler_pop_from_queue_locked(ring, &node, &flags);
+	if (ret) {
+		spin_unlock_irqrestore(&scheduler->lock, flags);
+
+		if (!was_locked)
+			mutex_unlock(&dev->struct_mutex);
+
+		return ret;
+	}
+
+	do {
+		BUG_ON(!node);
+		BUG_ON(node->params.ring != ring);
+		BUG_ON(node->status != i915_sqs_popped);
+		count++;
+
+		/* The call to pop above will have removed the node from the
+		 * list. So add it back in and mark it as in flight. */
+		i915_scheduler_fly_node(node);
+
+		scheduler->flags[ring->id] |= i915_sf_submitting;
+		spin_unlock_irqrestore(&scheduler->lock, flags);
+		ret = dev_priv->gt.execbuf_final(&node->params);
+		spin_lock_irqsave(&scheduler->lock, flags);
+		scheduler->flags[ring->id] &= ~i915_sf_submitting;
+
+		if (ret) {
+			int requeue = 1;
+
+			/* Oh dear! Either the node is broken or the ring is
+			 * busy. So need to kill the node or requeue it and try
+			 * again later as appropriate. */
+
+			switch (-ret) {
+			case ENODEV:
+			case ENOENT:
+				/* Fatal errors. Kill the node. */
+				requeue = -1;
+			break;
+
+			case EAGAIN:
+			case EBUSY:
+			case EIO:
+			case ENOMEM:
+			case ERESTARTSYS:
+			case EINTR:
+				/* Supposedly recoverable errors. */
+			break;
+
+			default:
+				DRM_DEBUG_DRIVER("<%s> Got unexpected error from execfinal(): %d!\n",
+						 ring->name, ret);
+				/* Assume it is recoverable and hope for the best. */
+			break;
+			}
+
+			/* Check that the watchdog/reset code has not nuked
+			 * the node while we weren't looking: */
+			if (node->status == i915_sqs_dead)
+				requeue = 0;
+
+			if (requeue == 1) {
+				i915_scheduler_node_requeue(node);
+				/* No point spinning if the ring is currently
+				 * unavailable so just give up and come back
+				 * later. */
+				break;
+			} else if (requeue == -1)
+				i915_scheduler_node_kill(node);
+		}
+
+		/* Keep launching until the sky is sufficiently full. */
+		if (i915_scheduler_count_flying(scheduler, ring) >=
+						scheduler->min_flying)
+			break;
+
+		ret = i915_scheduler_pop_from_queue_locked(ring, &node, &flags);
+	} while (ret == 0);
+
+	spin_unlock_irqrestore(&scheduler->lock, flags);
+
+	if (!was_locked)
+		mutex_unlock(&dev->struct_mutex);
+
+	/* Don't complain about not being able to submit extra entries */
+	if (ret == -ENODATA)
+		ret = 0;
+
+	return (ret < 0) ? ret : count;
+}
+
+static int i915_scheduler_remove_dependent(struct i915_scheduler *scheduler,
+					   struct i915_scheduler_queue_entry *remove)
+{
+	struct i915_scheduler_queue_entry  *node;
+	int     i, r;
+	int     count = 0;
+
+	for (i = 0; i < remove->num_deps; i++)
+		if ((remove->dep_list[i]) &&
+		    (!I915_SQS_IS_COMPLETE(remove->dep_list[i])))
+			count++;
+	BUG_ON(count);
+
+	for (r = 0; r < I915_NUM_RINGS; r++) {
+		list_for_each_entry(node, &scheduler->node_queue[r], link) {
+			for (i = 0; i < node->num_deps; i++) {
+				if (node->dep_list[i] != remove)
+					continue;
+
+				node->dep_list[i] = NULL;
+			}
+		}
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
new file mode 100644
index 0000000..ee39b77
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) 2014 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _I915_SCHEDULER_H_
+#define _I915_SCHEDULER_H_
+
+enum i915_scheduler_queue_status {
+	/* Limbo: */
+	i915_sqs_none = 0,
+	/* Not yet submitted to hardware: */
+	i915_sqs_queued,
+	/* Popped from queue, ready to fly: */
+	i915_sqs_popped,
+	/* Sent to hardware for processing: */
+	i915_sqs_flying,
+	/* Finished processing on the hardware: */
+	i915_sqs_complete,
+	/* Killed by watchdog or catastrophic submission failure: */
+	i915_sqs_dead,
+	/* Limit value for use with arrays/loops */
+	i915_sqs_MAX
+};
+
+#define I915_SQS_IS_QUEUED(node)	(((node)->status == i915_sqs_queued))
+#define I915_SQS_IS_FLYING(node)	(((node)->status == i915_sqs_flying))
+#define I915_SQS_IS_COMPLETE(node)	(((node)->status == i915_sqs_complete) || \
+					 ((node)->status == i915_sqs_dead))
+
+struct i915_scheduler_obj_entry {
+	struct drm_i915_gem_object          *obj;
+};
+
+struct i915_scheduler_queue_entry {
+	struct i915_execbuffer_params       params;
+	/* -1023 = lowest priority, 0 = default, 1023 = highest */
+	int32_t                             priority;
+	struct i915_scheduler_obj_entry     *saved_objects;
+	int                                 num_objs;
+	bool                                bumped;
+	struct i915_scheduler_queue_entry   **dep_list;
+	int                                 num_deps;
+	enum i915_scheduler_queue_status    status;
+	unsigned long                       stamp;
+	struct list_head                    link;
+};
+
+struct i915_scheduler {
+	struct list_head    node_queue[I915_NUM_RINGS];
+	uint32_t            flags[I915_NUM_RINGS];
+	spinlock_t          lock;
+
+	/* Tuning parameters: */
+	int32_t             priority_level_min;
+	int32_t             priority_level_max;
+	int32_t             priority_level_preempt;
+	uint32_t            min_flying;
+};
+
+/* Flag bits for i915_scheduler::flags */
+enum {
+	i915_sf_interrupts_enabled  = (1 << 0),
+	i915_sf_submitting          = (1 << 1),
+};
+
+int         i915_scheduler_init(struct drm_device *dev);
+int         i915_scheduler_queue_execbuffer(struct i915_scheduler_queue_entry *qe);
+bool        i915_scheduler_notify_request(struct drm_i915_gem_request *req);
+
+#endif  /* _I915_SCHEDULER_H_ */
-- 
1.9.1

