* [PATCH 1/2] drm/i915: Bump wait-times for the final CS interrupt before parking
From: Chris Wilson @ 2017-10-20  9:59 UTC
  To: intel-gfx

In the idle worker we drop the prolonged GT wakeref used to cover such
essentials as interrupt delivery. (When a CS interrupt arrives, we also
assert that the GT is awake.) However, it turns out that 10ms is not
long enough to be assured that the last CS interrupt has been delivered,
so bump that to 200ms, and move the entirety of that wait to before we
take the struct_mutex to avoid blocking. As this is now a potentially
long wait, restore the earlier behaviour of bailing out early when a new
request arrives.
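
In outline, the fixed 10ms wait_for() is replaced by a bounded poll
that can bail out as soon as new work arrives. A sketch of the new
loop (the exact code is in the diff below), with the reasoning spelled
out as comments:

	/* Poll for up to 200ms for the final CS interrupt to land. */
	end = ktime_add_ms(ktime_get(), 200);
	do {
		/*
		 * A new request arrived, or this worker was re-queued
		 * behind us: the GT is not idle after all, so keep the
		 * wakeref and let the next idle_work instance retry.
		 */
		if (READ_ONCE(dev_priv->gt.active_requests) ||
		    work_pending(work))
			return;

		/* All engines idle; safe to proceed towards parking. */
		if (intel_engines_are_idle(dev_priv))
			break;

		usleep_range(100, 500);
	} while (ktime_before(ktime_get(), end));

usleep_range() keeps the poll from busy-waiting, yielding the CPU
between checks.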

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c | 31 ++++++++++++++++++++-----------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 026cb52ece0b..d3a638613857 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3281,8 +3281,8 @@ i915_gem_idle_work_handler(struct work_struct *work)
 {
 	struct drm_i915_private *dev_priv =
 		container_of(work, typeof(*dev_priv), gt.idle_work.work);
-	struct drm_device *dev = &dev_priv->drm;
 	bool rearm_hangcheck;
+	ktime_t end;
 
 	if (!READ_ONCE(dev_priv->gt.awake))
 		return;
@@ -3291,14 +3291,22 @@ i915_gem_idle_work_handler(struct work_struct *work)
 	 * Wait for last execlists context complete, but bail out in case a
 	 * new request is submitted.
 	 */
-	wait_for(intel_engines_are_idle(dev_priv), 10);
-	if (READ_ONCE(dev_priv->gt.active_requests))
-		return;
+	end = ktime_add_ms(ktime_get(), 200);
+	do {
+		if (READ_ONCE(dev_priv->gt.active_requests) ||
+		    work_pending(work))
+			return;
+
+		if (intel_engines_are_idle(dev_priv))
+			break;
+
+		usleep_range(100, 500);
+	} while (ktime_before(ktime_get(), end));
 
 	rearm_hangcheck =
 		cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
 
-	if (!mutex_trylock(&dev->struct_mutex)) {
+	if (!mutex_trylock(&dev_priv->drm.struct_mutex)) {
 		/* Currently busy, come back later */
 		mod_delayed_work(dev_priv->wq,
 				 &dev_priv->gt.idle_work,
@@ -3310,13 +3318,14 @@ i915_gem_idle_work_handler(struct work_struct *work)
 	 * New request retired after this work handler started, extend active
 	 * period until next instance of the work.
 	 */
-	if (work_pending(work))
+	if (dev_priv->gt.active_requests || work_pending(work))
 		goto out_unlock;
 
-	if (dev_priv->gt.active_requests)
-		goto out_unlock;
-
-	if (wait_for(intel_engines_are_idle(dev_priv), 10))
+	/*
+	 * We are committed now to parking the engines, make sure there
+	 * will be no more interrupts arriving later.
+	 */
+	if (!intel_engines_are_idle(dev_priv))
 		DRM_ERROR("Timeout waiting for engines to idle\n");
 
 	intel_engines_mark_idle(dev_priv);
@@ -3330,7 +3339,7 @@ i915_gem_idle_work_handler(struct work_struct *work)
 		gen6_rps_idle(dev_priv);
 	intel_runtime_pm_put(dev_priv);
 out_unlock:
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&dev_priv->drm.struct_mutex);
 
 out_rearm:
 	if (rearm_hangcheck) {
-- 
2.15.0.rc1

* [PATCH 1/2] drm/i915: Bump wait-times for the final CS interrupt before parking
From: Chris Wilson @ 2017-10-23 20:06 UTC
  To: intel-gfx

In the idle worker we drop the prolonged GT wakeref used to cover such
essentials as interrupt delivery. (When a CS interrupt arrives, we also
assert that the GT is awake.) However, it turns out that 10ms is not
long enough to be assured that the last CS interrupt has been delivered,
so bump that to 200ms, and move the entirety of that wait to before we
take the struct_mutex to avoid blocking. As this is now a potentially
long wait, restore the earlier behaviour of bailing out early when a new
request arrives.

v2: Break out the repeated check for new requests into its own little
helper to try and improve the self-commentary.
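
For reference, the new helper bundles the two "has anything new
happened" checks into one predicate; work_pending() on the idle work
is true when the worker has been re-queued behind this instance,
i.e. a request was retired after it started:

	static inline bool
	new_requests_since_last_retire(const struct drm_i915_private *i915)
	{
		/*
		 * Either requests are still in flight, or the idle worker
		 * has been re-queued, implying activity since we began.
		 */
		return (READ_ONCE(i915->gt.active_requests) ||
			work_pending(&i915->gt.idle_work.work));
	}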

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c | 37 ++++++++++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 026cb52ece0b..bb0e85043e01 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3276,13 +3276,20 @@ i915_gem_retire_work_handler(struct work_struct *work)
 	}
 }
 
+static inline bool
+new_requests_since_last_retire(const struct drm_i915_private *i915)
+{
+	return (READ_ONCE(i915->gt.active_requests) ||
+		work_pending(&i915->gt.idle_work.work));
+}
+
 static void
 i915_gem_idle_work_handler(struct work_struct *work)
 {
 	struct drm_i915_private *dev_priv =
 		container_of(work, typeof(*dev_priv), gt.idle_work.work);
-	struct drm_device *dev = &dev_priv->drm;
 	bool rearm_hangcheck;
+	ktime_t end;
 
 	if (!READ_ONCE(dev_priv->gt.awake))
 		return;
@@ -3291,14 +3298,21 @@ i915_gem_idle_work_handler(struct work_struct *work)
 	 * Wait for last execlists context complete, but bail out in case a
 	 * new request is submitted.
 	 */
-	wait_for(intel_engines_are_idle(dev_priv), 10);
-	if (READ_ONCE(dev_priv->gt.active_requests))
-		return;
+	end = ktime_add_ms(ktime_get(), 200);
+	do {
+		if (new_requests_since_last_retire(dev_priv))
+			return;
+
+		if (intel_engines_are_idle(dev_priv))
+			break;
+
+		usleep_range(100, 500);
+	} while (ktime_before(ktime_get(), end));
 
 	rearm_hangcheck =
 		cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
 
-	if (!mutex_trylock(&dev->struct_mutex)) {
+	if (!mutex_trylock(&dev_priv->drm.struct_mutex)) {
 		/* Currently busy, come back later */
 		mod_delayed_work(dev_priv->wq,
 				 &dev_priv->gt.idle_work,
@@ -3310,13 +3324,14 @@ i915_gem_idle_work_handler(struct work_struct *work)
 	 * New request retired after this work handler started, extend active
 	 * period until next instance of the work.
 	 */
-	if (work_pending(work))
-		goto out_unlock;
-
-	if (dev_priv->gt.active_requests)
+	if (new_requests_since_last_retire(dev_priv))
 		goto out_unlock;
 
-	if (wait_for(intel_engines_are_idle(dev_priv), 10))
+	/*
+	 * We are committed now to parking the engines, make sure there
+	 * will be no more interrupts arriving later.
+	 */
+	if (!intel_engines_are_idle(dev_priv))
 		DRM_ERROR("Timeout waiting for engines to idle\n");
 
 	intel_engines_mark_idle(dev_priv);
@@ -3330,7 +3345,7 @@ i915_gem_idle_work_handler(struct work_struct *work)
 		gen6_rps_idle(dev_priv);
 	intel_runtime_pm_put(dev_priv);
 out_unlock:
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&dev_priv->drm.struct_mutex);
 
 out_rearm:
 	if (rearm_hangcheck) {
-- 
2.15.0.rc1


Thread overview: 20+ messages
2017-10-20  9:59 [PATCH 1/2] drm/i915: Bump wait-times for the final CS interrupt before parking Chris Wilson
2017-10-20  9:59 ` [PATCH 2/2] drm/i915: Filter out spurious execlists context-switch interrupts Chris Wilson
2017-10-20 11:48   ` [PATCH v2] " Chris Wilson
2017-10-20 13:21   ` [PATCH 2/2] " Mika Kuoppala
2017-10-20 13:24     ` Chris Wilson
2017-10-20 13:31       ` Mika Kuoppala
2017-10-20 13:47         ` Chris Wilson
2017-10-20 13:59   ` Mika Kuoppala
2017-10-20 14:24     ` Chris Wilson
2017-10-20 10:23 ` ✓ Fi.CI.BAT: success for series starting with [1/2] drm/i915: Bump wait-times for the final CS interrupt before parking Patchwork
2017-10-20 11:38 ` ✓ Fi.CI.IGT: " Patchwork
2017-10-20 12:19 ` ✓ Fi.CI.BAT: success for series starting with [1/2] drm/i915: Bump wait-times for the final CS interrupt before parking (rev2) Patchwork
2017-10-20 13:11 ` [PATCH 1/2] drm/i915: Bump wait-times for the final CS interrupt before parking Mika Kuoppala
2017-10-20 13:19   ` Chris Wilson
2017-10-20 13:23     ` Mika Kuoppala
2017-10-20 13:52       ` Chris Wilson
2017-10-20 13:47 ` ✓ Fi.CI.IGT: success for series starting with [1/2] drm/i915: Bump wait-times for the final CS interrupt before parking (rev2) Patchwork
2017-10-23 11:52 ` [PATCH 1/2] drm/i915: Bump wait-times for the final CS interrupt before parking Mika Kuoppala
2017-10-23 12:00   ` Chris Wilson
2017-10-23 20:06 Chris Wilson
