From: John.C.Harrison@Intel.com
To: Intel-GFX@Lists.FreeDesktop.Org
Subject: [PATCH v5 19/35] drm/i915: Added scheduler flush calls to ring throttle and idle functions
Date: Thu, 18 Feb 2016 14:27:07 +0000
Message-ID: <1455805644-6450-20-git-send-email-John.C.Harrison@Intel.com>
In-Reply-To: <1455805644-6450-1-git-send-email-John.C.Harrison@Intel.com>

From: John Harrison <John.C.Harrison@Intel.com>

When requesting that all GPU work is completed, it is now necessary to
get the scheduler involved in order to flush out work that has been
queued but not yet submitted to the hardware.
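
In condensed form (drawn from the diff below, with unrelated code
elided), the two call sites this patch adds look like this:

	/* i915_gpu_idle(): drain everything the scheduler still holds;
	 * is_locked is true because this path runs under the driver mutex. */
	for_each_ring(ring, dev_priv, i) {
		ret = i915_scheduler_flush(ring, true);
		if (ret < 0)
			return ret;
	}

	/* i915_gem_ring_throttle(): only push through work older than the
	 * throttle window; the mutex is not held here. */
	for_each_ring(ring, dev_priv, i)
		i915_scheduler_flush_stamp(ring, recent_enough, false);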

v2: Updated to add support for flushing the scheduler queue by time
stamp rather than only doing a blanket flush.
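
In rough outline (condensed from i915_scheduler_flush_stamp() in the
diff below), the time stamp flush bumps every queued node that is old
enough to maximum priority under the scheduler spinlock, then pushes
the bumped work through the max-priority submit helper:

	spin_lock_irq(&scheduler->lock);
	list_for_each_entry(node, &scheduler->node_queue[ring->id], link) {
		if (!I915_SQS_IS_QUEUED(node) || node->stamp > target)
			continue;
		flush_count = i915_scheduler_priority_bump(scheduler, node,
					scheduler->priority_level_max);
	}
	spin_unlock_irq(&scheduler->lock);

	if (flush_count)
		flush_count = i915_scheduler_submit_max_priority(ring, is_locked);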

v3: Moved submit_max_priority() to this patch from an earlier patch as
it is no longer required in the other.

v4: Corrected the format of a comment to keep the style checker happy.
Downgraded a BUG_ON to a WARN_ON as the latter is preferred.

v5: Shuffled functions around to remove forward prototypes, removed
similarly offensive white space and added documentation. Re-worked the
mutex locking around the submit function. [Joonas Lahtinen]

Used lighter weight spinlocks.

For: VIZ-1587
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c       |  24 ++++-
 drivers/gpu/drm/i915/i915_scheduler.c | 178 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_scheduler.h |   3 +
 3 files changed, 204 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a47a495..d946f53 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3786,6 +3786,10 @@ int i915_gpu_idle(struct drm_device *dev)
 
 	/* Flush everything onto the inactive list. */
 	for_each_ring(ring, dev_priv, i) {
+		ret = i915_scheduler_flush(ring, true);
+		if (ret < 0)
+			return ret;
+
 		if (!i915.enable_execlists) {
 			struct drm_i915_gem_request *req;
 
@@ -4519,7 +4523,8 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 	unsigned long recent_enough = jiffies - DRM_I915_THROTTLE_JIFFIES;
 	struct drm_i915_gem_request *request, *target = NULL;
 	unsigned reset_counter;
-	int ret;
+	int i, ret;
+	struct intel_engine_cs *ring;
 
 	ret = i915_gem_wait_for_error(&dev_priv->gpu_error);
 	if (ret)
@@ -4529,6 +4534,23 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 	if (ret)
 		return ret;
 
+	for_each_ring(ring, dev_priv, i) {
+		/*
+		 * Flush out scheduler entries that are getting 'stale'. Note
+		 * that the following recent_enough test will only check
+		 * against the time at which the request was submitted to the
+		 * hardware (i.e. when it left the scheduler) not the time it
+		 * was submitted to the driver.
+		 *
+		 * Also, there is not much point worrying about busy return
+		 * codes from the scheduler flush call. Even if more work
+		 * cannot be submitted right now for whatever reason, we
+		 * still want to throttle against stale work that has already
+		 * been submitted.
+		 */
+		i915_scheduler_flush_stamp(ring, recent_enough, false);
+	}
+
 	spin_lock(&file_priv->mm.lock);
 	list_for_each_entry(request, &file_priv->mm.request_list, client_list) {
 		if (time_after_eq(request->emitted_jiffies, recent_enough))
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index edab63d..8130a9c 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -304,6 +304,10 @@ static int i915_scheduler_pop_from_queue_locked(struct intel_engine_cs *ring,
  * attempting to acquire a mutex while holding a spin lock is a Bad Idea.
  * And releasing the one before acquiring the other leads to other code
  * being run and interfering.
+ *
+ * Hence any caller that does not already have the mutex lock for other
+ * reasons should call i915_scheduler_submit_unlocked() instead in order to
+ * obtain the lock first.
  */
 static int i915_scheduler_submit(struct intel_engine_cs *ring)
 {
@@ -428,6 +432,22 @@ error:
 	return ret;
 }
 
+static int i915_scheduler_submit_unlocked(struct intel_engine_cs *ring)
+{
+	struct drm_device *dev = ring->dev;
+	int ret;
+
+	ret = i915_mutex_lock_interruptible(dev);
+	if (ret)
+		return ret;
+
+	ret = i915_scheduler_submit(ring);
+
+	mutex_unlock(&dev->struct_mutex);
+
+	return ret;
+}
+
 static void i915_generate_dependencies(struct i915_scheduler *scheduler,
 				       struct i915_scheduler_queue_entry *node,
 				       uint32_t ring)
@@ -917,6 +937,164 @@ void i915_scheduler_work_handler(struct work_struct *work)
 		i915_scheduler_process_work(ring);
 }
 
+static int i915_scheduler_submit_max_priority(struct intel_engine_cs *ring,
+					      bool is_locked)
+{
+	struct i915_scheduler_queue_entry *node;
+	struct drm_i915_private *dev_priv = ring->dev->dev_private;
+	struct i915_scheduler *scheduler = dev_priv->scheduler;
+	int ret, count = 0;
+	bool found;
+
+	do {
+		found = false;
+		spin_lock_irq(&scheduler->lock);
+		list_for_each_entry(node, &scheduler->node_queue[ring->id], link) {
+			if (!I915_SQS_IS_QUEUED(node))
+				continue;
+
+			if (node->priority < scheduler->priority_level_max)
+				continue;
+
+			found = true;
+			break;
+		}
+		spin_unlock_irq(&scheduler->lock);
+
+		if (!found)
+			break;
+
+		if (is_locked)
+			ret = i915_scheduler_submit(ring);
+		else
+			ret = i915_scheduler_submit_unlocked(ring);
+		if (ret < 0)
+			return ret;
+
+		count += ret;
+	} while (found);
+
+	return count;
+}
+
+/**
+ * i915_scheduler_flush_stamp - force requests of a given age through the
+ * scheduler.
+ * @ring: Ring to be flushed
+ * @target: Jiffy based time stamp to flush up to
+ * @is_locked: Is the driver mutex lock held?
+ * DRM has a throttle-by-request-age facility. This requires waiting for
+ * outstanding work over a given age. This function helps by forcing queued
+ * batch buffers over said age through the system.
+ * Returns the number of entries flushed, or -EAGAIN if the scheduler is busy
+ * (e.g. waiting for a pre-emption event) while the mutex is held, which
+ * would prevent the scheduler's asynchronous processing from completing.
+ */
+int i915_scheduler_flush_stamp(struct intel_engine_cs *ring,
+			       unsigned long target,
+			       bool is_locked)
+{
+	struct i915_scheduler_queue_entry *node;
+	struct drm_i915_private *dev_priv;
+	struct i915_scheduler *scheduler;
+	int flush_count = 0;
+
+	if (!ring)
+		return -EINVAL;
+
+	dev_priv  = ring->dev->dev_private;
+	scheduler = dev_priv->scheduler;
+
+	if (!scheduler)
+		return 0;
+
+	if (is_locked && (scheduler->flags[ring->id] & i915_sf_submitting)) {
+		/*
+		 * Scheduler is busy already submitting another batch,
+		 * come back later rather than going recursive...
+		 */
+		return -EAGAIN;
+	}
+
+	spin_lock_irq(&scheduler->lock);
+	i915_scheduler_priority_bump_clear(scheduler);
+	list_for_each_entry(node, &scheduler->node_queue[ring->id], link) {
+		if (!I915_SQS_IS_QUEUED(node))
+			continue;
+
+		if (node->stamp > target)
+			continue;
+
+		flush_count = i915_scheduler_priority_bump(scheduler,
+					node, scheduler->priority_level_max);
+	}
+	spin_unlock_irq(&scheduler->lock);
+
+	if (flush_count) {
+		DRM_DEBUG_DRIVER("<%s> Bumped %d entries\n", ring->name, flush_count);
+		flush_count = i915_scheduler_submit_max_priority(ring, is_locked);
+	}
+
+	return flush_count;
+}
+
+/**
+ * i915_scheduler_flush - force all requests through the scheduler.
+ * @ring: Ring to be flushed
+ * @is_locked: Is the driver mutex lock held?
+ * For various reasons it is sometimes necessary to flush the scheduler out
+ * completely, e.g. due to a ring reset.
+ * Returns the number of entries flushed, or -EAGAIN if the scheduler is busy
+ * (e.g. waiting for a pre-emption event) while the mutex is held, which
+ * would prevent the scheduler's asynchronous processing from completing.
+ */
+int i915_scheduler_flush(struct intel_engine_cs *ring, bool is_locked)
+{
+	struct i915_scheduler_queue_entry *node;
+	struct drm_i915_private *dev_priv;
+	struct i915_scheduler *scheduler;
+	bool found;
+	int ret;
+	uint32_t count = 0;
+
+	if (!ring)
+		return -EINVAL;
+
+	dev_priv  = ring->dev->dev_private;
+	scheduler = dev_priv->scheduler;
+
+	if (!scheduler)
+		return 0;
+
+	WARN_ON(is_locked && (scheduler->flags[ring->id] & i915_sf_submitting));
+
+	do {
+		found = false;
+		spin_lock_irq(&scheduler->lock);
+		list_for_each_entry(node, &scheduler->node_queue[ring->id], link) {
+			if (!I915_SQS_IS_QUEUED(node))
+				continue;
+
+			found = true;
+			break;
+		}
+		spin_unlock_irq(&scheduler->lock);
+
+		if (found) {
+			if (is_locked)
+				ret = i915_scheduler_submit(ring);
+			else
+				ret = i915_scheduler_submit_unlocked(ring);
+			if (ret < 0)
+				return ret;
+
+			count += ret;
+		}
+	} while (found);
+
+	return count;
+}
+
 /**
  * i915_scheduler_is_request_tracked - return info to say what the scheduler's
  * connection to this request is (if any).
diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
index a88adce..839b048 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.h
+++ b/drivers/gpu/drm/i915/i915_scheduler.h
@@ -94,6 +94,9 @@ int i915_scheduler_queue_execbuffer(struct i915_scheduler_queue_entry *qe);
 bool i915_scheduler_notify_request(struct drm_i915_gem_request *req);
 void i915_scheduler_wakeup(struct drm_device *dev);
 void i915_scheduler_work_handler(struct work_struct *work);
+int i915_scheduler_flush(struct intel_engine_cs *ring, bool is_locked);
+int i915_scheduler_flush_stamp(struct intel_engine_cs *ring,
+			       unsigned long stamp, bool is_locked);
 bool i915_scheduler_is_request_tracked(struct drm_i915_gem_request *req,
 				       bool *completed, bool *busy);
 
-- 
1.9.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
