* The vma leak fix from yonder
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

Just to see if anyone is awake, this series takes us to the VMA leak fix.
It is just the tip of the iceberg when it comes to VMA fixes...
-Chris


* [PATCH 01/62] drm/i915: Only start retire worker when idle
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

The retire worker is a low-frequency task that makes sure we retire
outstanding requests if userspace is being lax. We only need to start it
once as it remains active until the GPU is idle, so do a cheap test
before the more expensive queue_work(). A consequence of this is that we
need correct locking in the worker to make the hot path of request
submission cheap. To keep the symmetry and keep hangcheck strictly bound
by the GPU's wakelock, we move the cancel_sync(hangcheck) to the idle
worker before dropping the wakelock.

v2: Guard against RCU fouling the breadcrumbs bottom-half whilst we kick
the waiter.
v3: Remove the wakeref assertion squelching (now we hold a wakeref for
the hangcheck, any rpm error there is genuine).
v4: To prevent excess work when retiring requests, we split the busy
flag into two, a boolean to denote whether we hold the wakeref and a
bitmask of active engines.
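
In condensed form, the submission-side hot path becomes (a simplified
sketch of i915_gem_mark_busy() below, eliding the RPS and powersave
calls):

	static void mark_busy(struct drm_i915_private *i915,
			      const struct intel_engine_cs *engine)
	{
		i915->gt.active_engines |= intel_engine_flag(engine);
		if (i915->gt.awake)
			return; /* hot path: worker already queued */

		/* idle -> busy: take the prolonged wakeref and kick the
		 * retire worker; it requeues itself until the idle
		 * worker sees no active engines and clears gt.awake.
		 */
		intel_runtime_pm_get_noresume(i915);
		i915->gt.awake = true;
		queue_delayed_work(i915->wq, &i915->gt.retire_work,
				   round_jiffies_up_relative(HZ));
	}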

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
References: https://bugs.freedesktop.org/show_bug.cgi?id=88437
---
 drivers/gpu/drm/i915/i915_debugfs.c        |   5 +-
 drivers/gpu/drm/i915/i915_drv.c            |   2 -
 drivers/gpu/drm/i915/i915_drv.h            |  56 +++++++-------
 drivers/gpu/drm/i915/i915_gem.c            | 114 ++++++++++++++++++-----------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |   6 ++
 drivers/gpu/drm/i915/i915_irq.c            |  15 +---
 drivers/gpu/drm/i915/intel_display.c       |  26 -------
 drivers/gpu/drm/i915/intel_pm.c            |   2 +-
 drivers/gpu/drm/i915/intel_ringbuffer.h    |   4 +-
 9 files changed, 115 insertions(+), 115 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 72dae6fb0aa2..dd6cf222e8f5 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2437,7 +2437,8 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
 	struct drm_file *file;
 
 	seq_printf(m, "RPS enabled? %d\n", dev_priv->rps.enabled);
-	seq_printf(m, "GPU busy? %d\n", dev_priv->mm.busy);
+	seq_printf(m, "GPU busy? %s [%x]\n",
+		   yesno(dev_priv->gt.awake), dev_priv->gt.active_engines);
 	seq_printf(m, "CPU waiting? %d\n", count_irq_waiters(dev_priv));
 	seq_printf(m, "Frequency requested %d; min hard:%d, soft:%d; max soft:%d, hard:%d\n",
 		   intel_gpu_freq(dev_priv, dev_priv->rps.cur_freq),
@@ -2777,7 +2778,7 @@ static int i915_runtime_pm_status(struct seq_file *m, void *unused)
 	if (!HAS_RUNTIME_PM(dev_priv))
 		seq_puts(m, "Runtime power management not supported\n");
 
-	seq_printf(m, "GPU idle: %s\n", yesno(!dev_priv->mm.busy));
+	seq_printf(m, "GPU idle: %s\n", yesno(!dev_priv->gt.awake));
 	seq_printf(m, "IRQs disabled: %s\n",
 		   yesno(!intel_irqs_enabled(dev_priv)));
 #ifdef CONFIG_PM
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 3c8c75c77574..5f7208d2fdbf 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -2697,8 +2697,6 @@ static int intel_runtime_suspend(struct device *device)
 	i915_gem_release_all_mmaps(dev_priv);
 	mutex_unlock(&dev->struct_mutex);
 
-	cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
-
 	intel_guc_suspend(dev);
 
 	intel_suspend_gt_powersave(dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 88d9242398ce..3f075adf9e84 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1305,37 +1305,11 @@ struct i915_gem_mm {
 	struct list_head fence_list;
 
 	/**
-	 * We leave the user IRQ off as much as possible,
-	 * but this means that requests will finish and never
-	 * be retired once the system goes idle. Set a timer to
-	 * fire periodically while the ring is running. When it
-	 * fires, go retire requests.
-	 */
-	struct delayed_work retire_work;
-
-	/**
-	 * When we detect an idle GPU, we want to turn on
-	 * powersaving features. So once we see that there
-	 * are no more requests outstanding and no more
-	 * arrive within a small period of time, we fire
-	 * off the idle_work.
-	 */
-	struct delayed_work idle_work;
-
-	/**
 	 * Are we in a non-interruptible section of code like
 	 * modesetting?
 	 */
 	bool interruptible;
 
-	/**
-	 * Is the GPU currently considered idle, or busy executing userspace
-	 * requests?  Whilst idle, we attempt to power down the hardware and
-	 * display clocks. In order to reduce the effect on performance, there
-	 * is a slight delay before we do so.
-	 */
-	bool busy;
-
 	/* the indicator for dispatch video commands on two BSD rings */
 	unsigned int bsd_ring_dispatch_index;
 
@@ -2034,6 +2008,34 @@ struct drm_i915_private {
 		int (*init_engines)(struct drm_device *dev);
 		void (*cleanup_engine)(struct intel_engine_cs *engine);
 		void (*stop_engine)(struct intel_engine_cs *engine);
+
+		/**
+		 * Is the GPU currently considered idle, or busy executing
+		 * userspace requests? Whilst idle, we allow runtime power
+		 * management to power down the hardware and display clocks.
+		 * In order to reduce the effect on performance, there
+		 * is a slight delay before we do so.
+		 */
+		unsigned active_engines;
+		bool awake;
+
+		/**
+		 * We leave the user IRQ off as much as possible,
+		 * but this means that requests will finish and never
+		 * be retired once the system goes idle. Set a timer to
+		 * fire periodically while the ring is running. When it
+		 * fires, go retire requests.
+		 */
+		struct delayed_work retire_work;
+
+		/**
+		 * When we detect an idle GPU, we want to turn on
+		 * powersaving features. So once we see that there
+		 * are no more requests outstanding and no more
+		 * arrive within a small period of time, we fire
+		 * off the idle_work.
+		 */
+		struct delayed_work idle_work;
 	} gt;
 
 	/* perform PHY state sanity checks? */
@@ -3247,7 +3249,7 @@ int __must_check i915_gem_set_seqno(struct drm_device *dev, u32 seqno);
 struct drm_i915_gem_request *
 i915_gem_find_active_request(struct intel_engine_cs *engine);
 
-bool i915_gem_retire_requests(struct drm_i915_private *dev_priv);
+void i915_gem_retire_requests(struct drm_i915_private *dev_priv);
 void i915_gem_retire_requests_ring(struct intel_engine_cs *engine);
 
 static inline u32 i915_reset_counter(struct i915_gpu_error *error)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f4e550ddaa5d..5a7131b749a2 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2554,6 +2554,26 @@ i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
 	return 0;
 }
 
+static void i915_gem_mark_busy(struct drm_i915_private *dev_priv,
+			       const struct intel_engine_cs *engine)
+{
+	dev_priv->gt.active_engines |= intel_engine_flag(engine);
+	if (dev_priv->gt.awake)
+		return;
+
+	intel_runtime_pm_get_noresume(dev_priv);
+	dev_priv->gt.awake = true;
+
+	intel_enable_gt_powersave(dev_priv);
+	i915_update_gfx_val(dev_priv);
+	if (INTEL_INFO(dev_priv)->gen >= 6)
+		gen6_rps_busy(dev_priv);
+
+	queue_delayed_work(dev_priv->wq,
+			   &dev_priv->gt.retire_work,
+			   round_jiffies_up_relative(HZ));
+}
+
 /*
  * NB: This function is not allowed to fail. Doing so would mean that the
  * request is not being tracked for completion but the work itself is
@@ -2640,12 +2660,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 	}
 	/* Not allowed to fail! */
 	WARN(ret, "emit|add_request failed: %d!\n", ret);
-
-	queue_delayed_work(dev_priv->wq,
-			   &dev_priv->mm.retire_work,
-			   round_jiffies_up_relative(HZ));
-	intel_mark_busy(dev_priv);
-
 	/* Sanity check that the reserved size was large enough. */
 	ret = intel_ring_get_tail(ringbuf) - request_start;
 	if (ret < 0)
@@ -2654,6 +2668,8 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 		  "Not enough space reserved (%d bytes) "
 		  "for adding the request (%d bytes)\n",
 		  reserved_tail, ret);
+
+	i915_gem_mark_busy(dev_priv, engine);
 }
 
 static bool i915_context_is_banned(struct drm_i915_private *dev_priv,
@@ -2968,46 +2984,47 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
 	WARN_ON(i915_verify_lists(engine->dev));
 }
 
-bool
-i915_gem_retire_requests(struct drm_i915_private *dev_priv)
+void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
 {
 	struct intel_engine_cs *engine;
-	bool idle = true;
+
+	if (dev_priv->gt.active_engines == 0)
+		return;
+
+	GEM_BUG_ON(!dev_priv->gt.awake);
 
 	for_each_engine(engine, dev_priv) {
 		i915_gem_retire_requests_ring(engine);
-		idle &= list_empty(&engine->request_list);
-		if (i915.enable_execlists) {
-			spin_lock_bh(&engine->execlist_lock);
-			idle &= list_empty(&engine->execlist_queue);
-			spin_unlock_bh(&engine->execlist_lock);
-		}
+		if (list_empty(&engine->request_list))
+			dev_priv->gt.active_engines &= ~intel_engine_flag(engine);
 	}
 
-	if (idle)
+	if (dev_priv->gt.active_engines == 0)
 		mod_delayed_work(dev_priv->wq,
-				 &dev_priv->mm.idle_work,
+				 &dev_priv->gt.idle_work,
 				 msecs_to_jiffies(100));
-
-	return idle;
 }
 
 static void
 i915_gem_retire_work_handler(struct work_struct *work)
 {
 	struct drm_i915_private *dev_priv =
-		container_of(work, typeof(*dev_priv), mm.retire_work.work);
+		container_of(work, typeof(*dev_priv), gt.retire_work.work);
 	struct drm_device *dev = dev_priv->dev;
-	bool idle;
 
 	/* Come back later if the device is busy... */
-	idle = false;
 	if (mutex_trylock(&dev->struct_mutex)) {
-		idle = i915_gem_retire_requests(dev_priv);
+		i915_gem_retire_requests(dev_priv);
 		mutex_unlock(&dev->struct_mutex);
 	}
-	if (!idle)
-		queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work,
+
+	/* Keep the retire handler running until we are finally idle.
+	 * We do not need to do this test under locking as in the worst-case
+	 * we queue the retire worker once too often.
+	 */
+	if (READ_ONCE(dev_priv->gt.awake))
+		queue_delayed_work(dev_priv->wq,
+				   &dev_priv->gt.retire_work,
 				   round_jiffies_up_relative(HZ));
 }
 
@@ -3015,25 +3032,36 @@ static void
 i915_gem_idle_work_handler(struct work_struct *work)
 {
 	struct drm_i915_private *dev_priv =
-		container_of(work, typeof(*dev_priv), mm.idle_work.work);
+		container_of(work, typeof(*dev_priv), gt.idle_work.work);
 	struct drm_device *dev = dev_priv->dev;
 	struct intel_engine_cs *engine;
 
-	for_each_engine(engine, dev_priv)
-		if (!list_empty(&engine->request_list))
-			return;
+	if (!READ_ONCE(dev_priv->gt.awake))
+		return;
 
-	/* we probably should sync with hangcheck here, using cancel_work_sync.
-	 * Also locking seems to be fubar here, engine->request_list is protected
-	 * by dev->struct_mutex. */
+	mutex_lock(&dev->struct_mutex);
+	if (dev_priv->gt.active_engines)
+		goto out;
 
-	intel_mark_idle(dev_priv);
+	for_each_engine(engine, dev_priv)
+		i915_gem_batch_pool_fini(&engine->batch_pool);
 
-	if (mutex_trylock(&dev->struct_mutex)) {
-		for_each_engine(engine, dev_priv)
-			i915_gem_batch_pool_fini(&engine->batch_pool);
+	GEM_BUG_ON(!dev_priv->gt.awake);
+	dev_priv->gt.awake = false;
 
-		mutex_unlock(&dev->struct_mutex);
+	if (INTEL_INFO(dev_priv)->gen >= 6)
+		gen6_rps_idle(dev_priv);
+	intel_runtime_pm_put(dev_priv);
+out:
+	mutex_unlock(&dev->struct_mutex);
+
+	if (!dev_priv->gt.awake &&
+	    cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work)) {
+		unsigned stuck = intel_kick_waiters(dev_priv);
+		if (unlikely(stuck)) {
+			DRM_DEBUG_DRIVER("kicked stuck waiters...missed irq\n");
+			dev_priv->gpu_error.missed_irq_rings |= stuck;
+		}
 	}
 }
 
@@ -4154,7 +4182,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 
 	ret = __i915_wait_request(target, true, NULL, NULL);
 	if (ret == 0)
-		queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, 0);
+		queue_delayed_work(dev_priv->wq, &dev_priv->gt.retire_work, 0);
 
 	i915_gem_request_unreference(target);
 
@@ -4672,13 +4700,13 @@ i915_gem_suspend(struct drm_device *dev)
 	mutex_unlock(&dev->struct_mutex);
 
 	cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
-	cancel_delayed_work_sync(&dev_priv->mm.retire_work);
-	flush_delayed_work(&dev_priv->mm.idle_work);
+	cancel_delayed_work_sync(&dev_priv->gt.retire_work);
+	flush_delayed_work(&dev_priv->gt.idle_work);
 
 	/* Assert that we successfully flushed all the work and
 	 * reset the GPU back to its idle, low power state.
 	 */
-	WARN_ON(dev_priv->mm.busy);
+	WARN_ON(dev_priv->gt.awake);
 
 	return 0;
 
@@ -4982,9 +5010,9 @@ i915_gem_load_init(struct drm_device *dev)
 		init_engine_lists(&dev_priv->engine[i]);
 	for (i = 0; i < I915_MAX_NUM_FENCES; i++)
 		INIT_LIST_HEAD(&dev_priv->fence_regs[i].lru_list);
-	INIT_DELAYED_WORK(&dev_priv->mm.retire_work,
+	INIT_DELAYED_WORK(&dev_priv->gt.retire_work,
 			  i915_gem_retire_work_handler);
-	INIT_DELAYED_WORK(&dev_priv->mm.idle_work,
+	INIT_DELAYED_WORK(&dev_priv->gt.idle_work,
 			  i915_gem_idle_work_handler);
 	init_waitqueue_head(&dev_priv->gpu_error.wait_queue);
 	init_waitqueue_head(&dev_priv->gpu_error.reset_queue);
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 8097698b9622..d3297dab0298 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1477,6 +1477,12 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 		dispatch_flags |= I915_DISPATCH_RS;
 	}
 
+	/* Take a local wakeref for preparing to dispatch the execbuf as
+	 * we expect to access the hardware fairly frequently in the
+	 * process. Upon first dispatch, we acquire another prolonged
+	 * wakeref that we hold until the GPU has been idle for at least
+	 * 100ms.
+	 */
 	intel_runtime_pm_get(dev_priv);
 
 	ret = i915_mutex_lock_interruptible(dev);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index f74f5727ea77..7a2dc8f1f64e 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3102,12 +3102,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 	if (!i915.enable_hangcheck)
 		return;
 
-	/*
-	 * The hangcheck work is synced during runtime suspend, we don't
-	 * require a wakeref. TODO: instead of disabling the asserts make
-	 * sure that we hold a reference when this work is running.
-	 */
-	DISABLE_RPM_WAKEREF_ASSERTS(dev_priv);
+	if (!READ_ONCE(dev_priv->gt.awake))
+		return;
 
 	/* As enabling the GPU requires fairly extensive mmio access,
 	 * periodically arm the mmio checker to see if we are triggering
@@ -3215,17 +3211,12 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 		}
 	}
 
-	if (rings_hung) {
+	if (rings_hung)
 		i915_handle_error(dev_priv, rings_hung, "Engine(s) hung");
-		goto out;
-	}
 
 	/* Reset timer in case GPU hangs without another request being added */
 	if (busy_count)
 		i915_queue_hangcheck(dev_priv);
-
-out:
-	ENABLE_RPM_WAKEREF_ASSERTS(dev_priv);
 }
 
 static void ibx_irq_reset(struct drm_device *dev)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index bb09ee6d1a3f..14e41fdd8112 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -10969,32 +10969,6 @@ struct drm_display_mode *intel_crtc_mode_get(struct drm_device *dev,
 	return mode;
 }
 
-void intel_mark_busy(struct drm_i915_private *dev_priv)
-{
-	if (dev_priv->mm.busy)
-		return;
-
-	intel_runtime_pm_get(dev_priv);
-	intel_enable_gt_powersave(dev_priv);
-	i915_update_gfx_val(dev_priv);
-	if (INTEL_GEN(dev_priv) >= 6)
-		gen6_rps_busy(dev_priv);
-	dev_priv->mm.busy = true;
-}
-
-void intel_mark_idle(struct drm_i915_private *dev_priv)
-{
-	if (!dev_priv->mm.busy)
-		return;
-
-	dev_priv->mm.busy = false;
-
-	if (INTEL_GEN(dev_priv) >= 6)
-		gen6_rps_idle(dev_priv);
-
-	intel_runtime_pm_put(dev_priv);
-}
-
 static void intel_crtc_destroy(struct drm_crtc *crtc)
 {
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 712bd0debb91..35bb9a23cd2d 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4850,7 +4850,7 @@ void gen6_rps_boost(struct drm_i915_private *dev_priv,
 	/* This is intentionally racy! We peek at the state here, then
 	 * validate inside the RPS worker.
 	 */
-	if (!(dev_priv->mm.busy &&
+	if (!(dev_priv->gt.awake &&
 	      dev_priv->rps.enabled &&
 	      dev_priv->rps.cur_freq < dev_priv->rps.max_freq_softlimit))
 		return;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 166f1a3829b0..d0cd9a1aa80e 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -372,13 +372,13 @@ struct intel_engine_cs {
 };
 
 static inline bool
-intel_engine_initialized(struct intel_engine_cs *engine)
+intel_engine_initialized(const struct intel_engine_cs *engine)
 {
 	return engine->i915 != NULL;
 }
 
 static inline unsigned
-intel_engine_flag(struct intel_engine_cs *engine)
+intel_engine_flag(const struct intel_engine_cs *engine)
 {
 	return 1 << engine->id;
 }
-- 
2.8.1


* [PATCH 02/62] drm/i915: Do not keep postponing the idle-work
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

Rather than persistently postponing the idle-work every time somebody
calls i915_gem_retire_requests() (potentially ensuring that we never
reach the idle state), queue the work the first time we detect all
requests are complete. Then, if more requests arrive within the 100ms
before the idle-worker runs, we will abort the idle-worker and wait
again until all the new requests have been completed.
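
For reference, the workqueue semantics relied upon here:

	/* A no-op if @work is already pending: the expiry set when the
	 * work was first queued is kept.
	 */
	queue_delayed_work(wq, &work, delay);

	/* Always (re)starts the timer, pushing the expiry back to
	 * now + @delay on every call; this is what let the idle point
	 * recede indefinitely.
	 */
	mod_delayed_work(wq, &work, delay);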

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 5a7131b749a2..e27c9331b84b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3000,9 +3000,9 @@ void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
 	}
 
 	if (dev_priv->gt.active_engines == 0)
-		mod_delayed_work(dev_priv->wq,
-				 &dev_priv->gt.idle_work,
-				 msecs_to_jiffies(100));
+		queue_delayed_work(dev_priv->wq,
+				   &dev_priv->gt.idle_work,
+				   msecs_to_jiffies(100));
 }
 
 static void
-- 
2.8.1


* [PATCH 03/62] drm/i915: Remove redundant queue_delayed_work() from throttle ioctl
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

We know, by design, that whilst the GPU is active (and thus we are
throttling), the retire_worker is queued. Therefore attempting to requeue
it with queue_delayed_work() is a no-op and we can safely remove it.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e27c9331b84b..da44715c894f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4181,9 +4181,6 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 		return 0;
 
 	ret = __i915_wait_request(target, true, NULL, NULL);
-	if (ret == 0)
-		queue_delayed_work(dev_priv->wq, &dev_priv->gt.retire_work, 0);
-
 	i915_gem_request_unreference(target);
 
 	return ret;
-- 
2.8.1


* [PATCH 04/62] drm/i915: Restore waitboost credit to the synchronous waiter
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx; +Cc: Jesse Barnes

Ideally, we want to automagically have the GPU respond to the
instantaneous load by reclocking itself. However, reclocking occurs
relatively slowly, which is too late for the client waiting on a result
from the GPU. To compensate and reduce the client latency, we allow the
first wait from a client to boost the GPU clocks to maximum. This
overcomes the lag in autoreclocking, at the expense of forcing the GPU
clocks too high. So to offset the excessive power usage, we currently
allow a client to only boost the clocks once before we detect the GPU
is idle again. This works reasonably well for, say, the first frame in a
benchmark, but for many more synchronous workloads (like OpenCL) we find
the GPU clocks remain too low. By noting a wait which would idle the GPU
(i.e. we just waited upon the last known request), we can give that
client the idle boost credit (for their next wait) without the 100ms
delay required for us to detect the GPU idle state. The intention is to
boost clients that are stalling in the process of feeding the GPU more
work (and who in doing so let the GPU idle), without granting boost
credits to clients that are throttling themselves (such as compositors).
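
For reference, the boost credit is encoded as membership of the
rps.clients list; a simplified sketch of the check in gen6_rps_boost()
as it stands in this series:

	spin_lock(&dev_priv->rps.client_lock);
	if (rps == NULL || list_empty(&rps->link)) {
		/* Client has credit: request the boost and spend it */
		if (rps)
			list_add(&rps->link, &dev_priv->rps.clients);
	}
	spin_unlock(&dev_priv->rps.client_lock);

The list_del_init() added above therefore makes list_empty() true
again, handing the credit back without waiting for gen6_rps_idle() to
clear the clients list.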

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: "Zou, Nanhai" <nanhai.zou@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
---
 drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index da44715c894f..bec02baef190 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1310,6 +1310,22 @@ complete:
 			*timeout = 0;
 	}
 
+	if (rps && req->seqno == req->engine->last_submitted_seqno) {
+		/* The GPU is now idle and this client has stalled.
+		 * Since no other client has submitted a request in the
+		 * meantime, assume that this client is the only one
+		 * supplying work to the GPU but is unable to keep that
+		 * work supplied because it is waiting. Since the GPU is
+		 * then never kept fully busy, RPS autoclocking will
+		 * keep the clocks relatively low, causing further delays.
+		 * Compensate by giving the synchronous client credit for
+		 * a waitboost next time.
+		 */
+		spin_lock(&req->i915->rps.client_lock);
+		list_del_init(&rps->link);
+		spin_unlock(&req->i915->rps.client_lock);
+	}
+
 	return ret;
 }
 
-- 
2.8.1


* [PATCH 05/62] drm/i915: Add background commentary to "waitboosting"
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

Describe the intent of boosting the GPU frequency to maximum before
waiting on the GPU.

RPS waitboosting was introduced with

commit b29c19b645287f7062e17d70fa4e9781a01a5d88
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed Sep 25 17:34:56 2013 +0100

    drm/i915: Boost RPS frequency for CPU stalls

but lacked a concise comment in the code to explain itself.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_gem.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index bec02baef190..0f487e3b920c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1237,6 +1237,21 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
 
 	trace_i915_gem_request_wait_begin(req);
 
+	/* This client is about to stall waiting for the GPU. In many cases
+	 * this is undesirable and limits the throughput of the system, as
+	 * many clients cannot continue processing user input/output whilst
+	 * blocked. RPS autotuning may take tens of milliseconds to respond
+	 * to the GPU load and thus incurs additional latency for the client.
+	 * We can circumvent that by promoting the GPU frequency to maximum
+	 * before we wait. This makes the GPU throttle up much more quickly
+	 * (good for benchmarks and user experience, e.g. window animations),
+	 * but at a cost of spending more power processing the workload
+	 * (bad for battery). Not all clients even want their results
+	 * immediately and for them we should just let the GPU select its own
+	 * frequency to maximise efficiency. To prevent a single client from
+	 * forcing the clocks too high for the whole system, we only allow
+	 * each client to waitboost once in a busy period.
+	 */
 	if (INTEL_INFO(req->i915)->gen >= 6)
 		gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
 
-- 
2.8.1


* [PATCH 06/62] drm/i915: Flush the RPS bottom-half when the GPU idles
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx; +Cc: Jesse Barnes

Make sure that the RPS bottom-half is flushed before we set the idle
frequency when we decide the GPU is idle. This should prevent any races
with the bottom-half and setting the idle frequency, and ensures that
the bottom-half is bounded by the GPU's rpm reference taken for when it
is active (i.e. between gen6_rps_busy() and gen6_rps_idle()).

v2: Avoid recursively using the i915->wq - RPS does not touch the
struct_mutex so has no place being on the ordered i915->wq.
v3: Enable/disable interrupts for RPS busy/idle in order to prevent
further HW access from RPS outside of the wakeref.
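
The net effect is that all RPS interrupt processing is now strictly
nested inside the GPU's rpm wakeref; a sketch of the resulting
ordering:

	intel_runtime_pm_get_noresume();  /* i915_gem_mark_busy() */
	gen6_rps_busy();
		/* -> gen6_enable_rps_interrupts() */
	/* ... rps.work may run and reclock the GPU ... */
	gen6_rps_idle();
		/* -> gen6_disable_rps_interrupts(), which does
		 * synchronize_irq() + cancel_work_sync(&rps.work)
		 */
	intel_runtime_pm_put();           /* idle worker */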

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
---
 drivers/gpu/drm/i915/i915_drv.c |  3 ---
 drivers/gpu/drm/i915/i915_irq.c | 32 ++++++++++++--------------------
 drivers/gpu/drm/i915/intel_pm.c | 14 ++++++++++----
 3 files changed, 22 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 5f7208d2fdbf..7ba040141722 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -2699,7 +2699,6 @@ static int intel_runtime_suspend(struct device *device)
 
 	intel_guc_suspend(dev);
 
-	intel_suspend_gt_powersave(dev_priv);
 	intel_runtime_pm_disable_interrupts(dev_priv);
 
 	ret = 0;
@@ -2813,8 +2812,6 @@ static int intel_runtime_resume(struct device *device)
 	if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv))
 		intel_hpd_init(dev_priv);
 
-	intel_autoenable_gt_powersave(dev_priv);
-
 	enable_rpm_wakeref_asserts(dev_priv);
 
 	if (ret)
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 7a2dc8f1f64e..34e25fc2b90a 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -351,9 +351,8 @@ void gen6_reset_rps_interrupts(struct drm_i915_private *dev_priv)
 void gen6_enable_rps_interrupts(struct drm_i915_private *dev_priv)
 {
 	spin_lock_irq(&dev_priv->irq_lock);
-
-	WARN_ON(dev_priv->rps.pm_iir);
-	WARN_ON(I915_READ(gen6_pm_iir(dev_priv)) & dev_priv->pm_rps_events);
+	WARN_ON_ONCE(dev_priv->rps.pm_iir);
+	WARN_ON_ONCE(I915_READ(gen6_pm_iir(dev_priv)) & dev_priv->pm_rps_events);
 	dev_priv->rps.interrupts_enabled = true;
 	I915_WRITE(gen6_pm_ier(dev_priv), I915_READ(gen6_pm_ier(dev_priv)) |
 				dev_priv->pm_rps_events);
@@ -371,11 +370,6 @@ void gen6_disable_rps_interrupts(struct drm_i915_private *dev_priv)
 {
 	spin_lock_irq(&dev_priv->irq_lock);
 	dev_priv->rps.interrupts_enabled = false;
-	spin_unlock_irq(&dev_priv->irq_lock);
-
-	cancel_work_sync(&dev_priv->rps.work);
-
-	spin_lock_irq(&dev_priv->irq_lock);
 
 	I915_WRITE(GEN6_PMINTRMSK, gen6_sanitize_rps_pm_mask(dev_priv, ~0));
 
@@ -384,8 +378,15 @@ void gen6_disable_rps_interrupts(struct drm_i915_private *dev_priv)
 				~dev_priv->pm_rps_events);
 
 	spin_unlock_irq(&dev_priv->irq_lock);
-
 	synchronize_irq(dev_priv->dev->irq);
+
+	/* Now that we will not be generating any more work, flush any
+	 * outsanding tasks. As we are called on the RPS idle path,
+	 * outstanding tasks. As we are called on the RPS idle path,
+	 * state of the worker can be discarded.
+	 */
+	cancel_work_sync(&dev_priv->rps.work);
+	gen6_reset_rps_interrupts(dev_priv);
 }
 
 /**
@@ -1082,13 +1083,6 @@ static void gen6_pm_rps_work(struct work_struct *work)
 		return;
 	}
 
-	/*
-	 * The RPS work is synced during runtime suspend, we don't require a
-	 * wakeref. TODO: instead of disabling the asserts make sure that we
-	 * always hold an RPM reference while the work is running.
-	 */
-	DISABLE_RPM_WAKEREF_ASSERTS(dev_priv);
-
 	pm_iir = dev_priv->rps.pm_iir;
 	dev_priv->rps.pm_iir = 0;
 	/* Make sure not to corrupt PMIMR state used by ringbuffer on GEN6 */
@@ -1101,7 +1095,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
 	WARN_ON(pm_iir & ~dev_priv->pm_rps_events);
 
 	if ((pm_iir & dev_priv->pm_rps_events) == 0 && !client_boost)
-		goto out;
+		return;
 
 	mutex_lock(&dev_priv->rps.hw_lock);
 
@@ -1156,8 +1150,6 @@ static void gen6_pm_rps_work(struct work_struct *work)
 	intel_set_rps(dev_priv, new_delay);
 
 	mutex_unlock(&dev_priv->rps.hw_lock);
-out:
-	ENABLE_RPM_WAKEREF_ASSERTS(dev_priv);
 }
 
 
@@ -1597,7 +1589,7 @@ static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir)
 		gen6_disable_pm_irq(dev_priv, pm_iir & dev_priv->pm_rps_events);
 		if (dev_priv->rps.interrupts_enabled) {
 			dev_priv->rps.pm_iir |= pm_iir & dev_priv->pm_rps_events;
-			queue_work(dev_priv->wq, &dev_priv->rps.work);
+			schedule_work(&dev_priv->rps.work);
 		}
 		spin_unlock(&dev_priv->irq_lock);
 	}
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 35bb9a23cd2d..923ec6884a5e 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4820,12 +4820,21 @@ void gen6_rps_busy(struct drm_i915_private *dev_priv)
 			gen6_rps_reset_ei(dev_priv);
 		I915_WRITE(GEN6_PMINTRMSK,
 			   gen6_rps_pm_mask(dev_priv, dev_priv->rps.cur_freq));
+
+		gen6_enable_rps_interrupts(dev_priv);
 	}
 	mutex_unlock(&dev_priv->rps.hw_lock);
 }
 
 void gen6_rps_idle(struct drm_i915_private *dev_priv)
 {
+	/* Flush our bottom-half so that it does not race with us
+	 * setting the idle frequency and so that it is bounded by
+	 * our rpm wakeref. And then disable the interrupts to stop any
+	 * further RPS reclocking whilst we are asleep.
+	 */
+	gen6_disable_rps_interrupts(dev_priv);
+
 	mutex_lock(&dev_priv->rps.hw_lock);
 	if (dev_priv->rps.enabled) {
 		if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
@@ -4866,7 +4875,7 @@ void gen6_rps_boost(struct drm_i915_private *dev_priv,
 		spin_lock_irq(&dev_priv->irq_lock);
 		if (dev_priv->rps.interrupts_enabled) {
 			dev_priv->rps.client_boost = true;
-			queue_work(dev_priv->wq, &dev_priv->rps.work);
+			schedule_work(&dev_priv->rps.work);
 		}
 		spin_unlock_irq(&dev_priv->irq_lock);
 
@@ -6594,9 +6603,6 @@ void intel_enable_gt_powersave(struct drm_i915_private *dev_priv)
 	WARN_ON(dev_priv->rps.efficient_freq < dev_priv->rps.min_freq);
 	WARN_ON(dev_priv->rps.efficient_freq > dev_priv->rps.max_freq);
 
-	if (INTEL_GEN(dev_priv) >= 6)
-		gen6_enable_rps_interrupts(dev_priv);
-
 	dev_priv->rps.enabled = true;
 	mutex_unlock(&dev_priv->rps.hw_lock);
 }
-- 
2.8.1


* [PATCH 07/62] drm/i915: Remove temporary RPM wakeref assert disables
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

Now that the last couple of hacks have been removed from the runtime
powermanagement users, we can fully enable the asserts.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_drv.h | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 3f39004fbc6a..a29618dc7e98 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1616,13 +1616,6 @@ enable_rpm_wakeref_asserts(struct drm_i915_private *dev_priv)
 	atomic_dec(&dev_priv->pm.wakeref_count);
 }
 
-/* TODO: convert users of these to rely instead on proper RPM refcounting */
-#define DISABLE_RPM_WAKEREF_ASSERTS(dev_priv)	\
-	disable_rpm_wakeref_asserts(dev_priv)
-
-#define ENABLE_RPM_WAKEREF_ASSERTS(dev_priv)	\
-	enable_rpm_wakeref_asserts(dev_priv)
-
 void intel_runtime_pm_get(struct drm_i915_private *dev_priv);
 bool intel_runtime_pm_get_if_in_use(struct drm_i915_private *dev_priv);
 void intel_runtime_pm_get_noresume(struct drm_i915_private *dev_priv);
-- 
2.8.1


* [PATCH 08/62] drm/i915: Remove stop-rings debugfs interface
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

Now that we have (near) universal GPU recovery code, we can inject a
real hang from userspace and not need any fakery. Not only does this
mean that the testing is far more realistic, but we can simplify the
kernel in the process.
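
For example, igt can spin an engine indefinitely with a batch whose
final command jumps back to its own start (a rough sketch; the exact
MI_BATCH_BUFFER_START length and address flags are gen-dependent and
omitted here):

	batch[i++] = MI_BATCH_BUFFER_START | flags;
	batch[i++] = batch_offset & 0xffffffff;	/* loop to self */
	batch[i++] = batch_offset >> 32;

The engine never advances past the batch, hangcheck scores it as hung,
and the full reset path is exercised for real.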

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c     | 35 --------------------------
 drivers/gpu/drm/i915/i915_drv.c         | 17 ++-----------
 drivers/gpu/drm/i915/i915_drv.h         | 19 --------------
 drivers/gpu/drm/i915/i915_gem.c         | 44 ++++++++++-----------------------
 drivers/gpu/drm/i915/intel_lrc.c        |  3 ---
 drivers/gpu/drm/i915/intel_ringbuffer.c |  8 ------
 drivers/gpu/drm/i915/intel_ringbuffer.h |  1 -
 7 files changed, 15 insertions(+), 112 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index dd6cf222e8f5..8f576b443ff6 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -4821,40 +4821,6 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_wedged_fops,
 			"%llu\n");
 
 static int
-i915_ring_stop_get(void *data, u64 *val)
-{
-	struct drm_device *dev = data;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-
-	*val = dev_priv->gpu_error.stop_rings;
-
-	return 0;
-}
-
-static int
-i915_ring_stop_set(void *data, u64 val)
-{
-	struct drm_device *dev = data;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	int ret;
-
-	DRM_DEBUG_DRIVER("Stopping rings 0x%08llx\n", val);
-
-	ret = mutex_lock_interruptible(&dev->struct_mutex);
-	if (ret)
-		return ret;
-
-	dev_priv->gpu_error.stop_rings = val;
-	mutex_unlock(&dev->struct_mutex);
-
-	return 0;
-}
-
-DEFINE_SIMPLE_ATTRIBUTE(i915_ring_stop_fops,
-			i915_ring_stop_get, i915_ring_stop_set,
-			"0x%08llx\n");
-
-static int
 i915_ring_missed_irq_get(void *data, u64 *val)
 {
 	struct drm_device *dev = data;
@@ -5457,7 +5423,6 @@ static const struct i915_debugfs_files {
 	{"i915_max_freq", &i915_max_freq_fops},
 	{"i915_min_freq", &i915_min_freq_fops},
 	{"i915_cache_sharing", &i915_cache_sharing_fops},
-	{"i915_ring_stop", &i915_ring_stop_fops},
 	{"i915_ring_missed_irq", &i915_ring_missed_irq_fops},
 	{"i915_ring_test_irq", &i915_ring_test_irq_fops},
 	{"i915_gem_drop_caches", &i915_drop_caches_fops},
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 7ba040141722..f2ac0cae929b 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -2125,24 +2125,11 @@ int i915_reset(struct drm_i915_private *dev_priv)
 		goto error;
 	}
 
+	pr_notice("drm/i915: Resetting chip after gpu hang\n");
+
 	i915_gem_reset(dev);
 
 	ret = intel_gpu_reset(dev_priv, ALL_ENGINES);
-
-	/* Also reset the gpu hangman. */
-	if (error->stop_rings != 0) {
-		DRM_INFO("Simulated gpu hang, resetting stop_rings\n");
-		error->stop_rings = 0;
-		if (ret == -ENODEV) {
-			DRM_INFO("Reset not implemented, but ignoring "
-				 "error for simulated gpu hangs\n");
-			ret = 0;
-		}
-	}
-
-	if (i915_stop_ring_allow_warn(dev_priv))
-		pr_notice("drm/i915: Resetting chip after gpu hang\n");
-
 	if (ret) {
 		if (ret != -ENODEV)
 			DRM_ERROR("Failed to reset chip: %i\n", ret);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 3f075adf9e84..a48c0f4e1d42 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1393,13 +1393,6 @@ struct i915_gpu_error {
 	 */
 	wait_queue_head_t reset_queue;
 
-	/* Userspace knobs for gpu hang simulation;
-	 * combines both a ring mask, and extra flags
-	 */
-	u32 stop_rings;
-#define I915_STOP_RING_ALLOW_BAN       (1 << 31)
-#define I915_STOP_RING_ALLOW_WARN      (1 << 30)
-
 	/* For missed irq/seqno simulation. */
 	unsigned long test_irq_rings;
 };
@@ -3292,18 +3285,6 @@ static inline u32 i915_reset_count(struct i915_gpu_error *error)
 	return ((i915_reset_counter(error) & ~I915_WEDGED) + 1) / 2;
 }
 
-static inline bool i915_stop_ring_allow_ban(struct drm_i915_private *dev_priv)
-{
-	return dev_priv->gpu_error.stop_rings == 0 ||
-		dev_priv->gpu_error.stop_rings & I915_STOP_RING_ALLOW_BAN;
-}
-
-static inline bool i915_stop_ring_allow_warn(struct drm_i915_private *dev_priv)
-{
-	return dev_priv->gpu_error.stop_rings == 0 ||
-		dev_priv->gpu_error.stop_rings & I915_STOP_RING_ALLOW_WARN;
-}
-
 void i915_gem_reset(struct drm_device *dev);
 bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, bool force);
 int __must_check i915_gem_init(struct drm_device *dev);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 0f487e3b920c..f48f54193972 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2703,44 +2703,30 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 	i915_gem_mark_busy(dev_priv, engine);
 }
 
-static bool i915_context_is_banned(struct drm_i915_private *dev_priv,
-				   const struct i915_gem_context *ctx)
+static bool i915_context_is_banned(const struct i915_gem_context *ctx)
 {
 	unsigned long elapsed;
 
-	elapsed = get_seconds() - ctx->hang_stats.guilty_ts;
-
 	if (ctx->hang_stats.banned)
 		return true;
 
+	elapsed = get_seconds() - ctx->hang_stats.guilty_ts;
 	if (ctx->hang_stats.ban_period_seconds &&
 	    elapsed <= ctx->hang_stats.ban_period_seconds) {
-		if (!i915_gem_context_is_default(ctx)) {
-			DRM_DEBUG("context hanging too fast, banning!\n");
-			return true;
-		} else if (i915_stop_ring_allow_ban(dev_priv)) {
-			if (i915_stop_ring_allow_warn(dev_priv))
-				DRM_ERROR("gpu hanging too fast, banning!\n");
-			return true;
-		}
+		DRM_DEBUG("context hanging too fast, banning!\n");
+		return true;
 	}
 
 	return false;
 }
 
-static void i915_set_reset_status(struct drm_i915_private *dev_priv,
-				  struct i915_gem_context *ctx,
+static void i915_set_reset_status(struct i915_gem_context *ctx,
 				  const bool guilty)
 {
-	struct i915_ctx_hang_stats *hs;
-
-	if (WARN_ON(!ctx))
-		return;
-
-	hs = &ctx->hang_stats;
+	struct i915_ctx_hang_stats *hs = &ctx->hang_stats;
 
 	if (guilty) {
-		hs->banned = i915_context_is_banned(dev_priv, ctx);
+		hs->banned = i915_context_is_banned(ctx);
 		hs->batch_active++;
 		hs->guilty_ts = get_seconds();
 	} else {
@@ -2867,27 +2853,23 @@ i915_gem_find_active_request(struct intel_engine_cs *engine)
 	return NULL;
 }
 
-static void i915_gem_reset_engine_status(struct drm_i915_private *dev_priv,
-				       struct intel_engine_cs *engine)
+static void i915_gem_reset_engine_status(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_request *request;
 	bool ring_hung;
 
 	request = i915_gem_find_active_request(engine);
-
 	if (request == NULL)
 		return;
 
 	ring_hung = engine->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG;
 
-	i915_set_reset_status(dev_priv, request->ctx, ring_hung);
-
+	i915_set_reset_status(request->ctx, ring_hung);
 	list_for_each_entry_continue(request, &engine->request_list, list)
-		i915_set_reset_status(dev_priv, request->ctx, false);
+		i915_set_reset_status(request->ctx, false);
 }
 
-static void i915_gem_reset_engine_cleanup(struct drm_i915_private *dev_priv,
-					struct intel_engine_cs *engine)
+static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
 {
 	struct intel_ringbuffer *buffer;
 
@@ -2957,10 +2939,10 @@ void i915_gem_reset(struct drm_device *dev)
 	 * their reference to the objects, the inspection must be done first.
 	 */
 	for_each_engine(engine, dev_priv)
-		i915_gem_reset_engine_status(dev_priv, engine);
+		i915_gem_reset_engine_status(engine);
 
 	for_each_engine(engine, dev_priv)
-		i915_gem_reset_engine_cleanup(dev_priv, engine);
+		i915_gem_reset_engine_cleanup(engine);
 
 	i915_gem_context_reset(dev);
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 9e19b2c5b3ae..0742a849acce 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -764,9 +764,6 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	intel_logical_ring_emit(ringbuf, MI_NOOP);
 	intel_logical_ring_advance(ringbuf);
 
-	if (intel_engine_stopped(engine))
-		return 0;
-
 	/* We keep the previous context alive until we retire the following
 	 * request. This ensures that the context object is still pinned
 	 * for any residual writes the HW makes into it on the context switch
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 161c0792b1bf..327ad7fdf118 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -58,18 +58,10 @@ void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
 					    ringbuf->tail, ringbuf->size);
 }
 
-bool intel_engine_stopped(struct intel_engine_cs *engine)
-{
-	struct drm_i915_private *dev_priv = engine->i915;
-	return dev_priv->gpu_error.stop_rings & intel_engine_flag(engine);
-}
-
 static void __intel_ring_advance(struct intel_engine_cs *engine)
 {
 	struct intel_ringbuffer *ringbuf = engine->buffer;
 	ringbuf->tail &= ringbuf->size - 1;
-	if (intel_engine_stopped(engine))
-		return;
 	engine->write_tail(engine, ringbuf->tail);
 }
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index d0cd9a1aa80e..6017367e94fb 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -480,7 +480,6 @@ static inline void intel_ring_advance(struct intel_engine_cs *engine)
 }
 int __intel_ring_space(int head, int tail, int size);
 void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
-bool intel_engine_stopped(struct intel_engine_cs *engine);
 
 int __must_check intel_engine_idle(struct intel_engine_cs *engine);
 void intel_ring_init_seqno(struct intel_engine_cs *engine, u32 seqno);
-- 
2.8.1


* [PATCH 09/62] drm/i915: Record the ringbuffer associated with the request
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

The request tells us where to read the ringbuf from, so use that
information to simplify the error capture. If no request was active at
the time of the hang, the ring is idle and there is no information
inside the ring pertaining to the hang.

Note carefully that this will reduce the amount of information stored in
the error state - any ring without an active request will not be
recorded.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
---
 drivers/gpu/drm/i915/i915_gpu_error.c | 28 ++++++++--------------------
 1 file changed, 8 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 81341fc4e61a..cf444ddec66e 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1076,7 +1076,6 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		struct intel_engine_cs *engine = &dev_priv->engine[i];
-		struct intel_ringbuffer *rbuf;
 
 		error->ring[i].pid = -1;
 
@@ -1091,6 +1090,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 		request = i915_gem_find_active_request(engine);
 		if (request) {
 			struct i915_address_space *vm;
+			struct intel_ringbuffer *rb;
 
 			vm = request->ctx && request->ctx->ppgtt ?
 				&request->ctx->ppgtt->base :
@@ -1121,26 +1121,14 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 				}
 				rcu_read_unlock();
 			}
-		}
 
-		if (i915.enable_execlists) {
-			/* TODO: This is only a small fix to keep basic error
-			 * capture working, but we need to add more information
-			 * for it to be useful (e.g. dump the context being
-			 * executed).
-			 */
-			if (request)
-				rbuf = request->ctx->engine[engine->id].ringbuf;
-			else
-				rbuf = dev_priv->kernel_context->engine[engine->id].ringbuf;
-		} else
-			rbuf = engine->buffer;
-
-		error->ring[i].cpu_ring_head = rbuf->head;
-		error->ring[i].cpu_ring_tail = rbuf->tail;
-
-		error->ring[i].ringbuffer =
-			i915_error_ggtt_object_create(dev_priv, rbuf->obj);
+			rb = request->ringbuf;
+			error->ring[i].cpu_ring_head = rb->head;
+			error->ring[i].cpu_ring_tail = rb->tail;
+			error->ring[i].ringbuffer =
+				i915_error_ggtt_object_create(dev_priv,
+							      rb->obj);
+		}
 
 		error->ring[i].hws_page =
 			i915_error_ggtt_object_create(dev_priv,
-- 
2.8.1


* [PATCH 10/62] drm/i915: Allow userspace to request no-error-capture upon GPU hangs
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

igt likes to inject GPU hangs into its command streams. However, as we
expect these hangs, we don't actually want them recorded in the dmesg
output or stored in the i915_error_state (usually). To accommodate this
allow userspace to set a flag on the context that any hang emanating
from that context will not be recorded. We still do the error capture
(otherwise how do we find the guilty context and know its intent?) as
part of the reason for random GPU hang injection is to exercise the race
conditions between the error capture and normal execution.

v2: Split out the request->ringbuf error capture changes.
v3: Move the flag defines next to the intel_context->flags definition
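
From userspace, opting a context out looks like this (a sketch using
libdrm; assumes fd is an open i915 fd and ctx_id came from
DRM_IOCTL_I915_GEM_CONTEXT_CREATE):

	#include <string.h>
	#include <i915_drm.h>
	#include <xf86drm.h>

	static int set_no_error_capture(int fd, __u32 ctx_id)
	{
		struct drm_i915_gem_context_param p;

		memset(&p, 0, sizeof(p));
		p.ctx_id = ctx_id;
		p.size = 0; /* setparam requires size == 0 here */
		p.param = I915_CONTEXT_PARAM_NO_ERROR_CAPTURE;
		p.value = 1;

		return drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);
	}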

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         |  4 +++-
 drivers/gpu/drm/i915/i915_gem_context.c | 13 +++++++++++++
 drivers/gpu/drm/i915/i915_gpu_error.c   | 14 +++++++++-----
 include/uapi/drm/i915_drm.h             |  1 +
 4 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index a48c0f4e1d42..15a0c6bdf500 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -472,6 +472,7 @@ struct drm_i915_error_state {
 	struct timeval time;
 
 	char error_msg[128];
+	bool simulated;
 	int iommu;
 	u32 reset_count;
 	u32 suspend_count;
@@ -870,9 +871,10 @@ struct i915_gem_context {
 
 	/* Unique identifier for this context, used by the hw for tracking */
 	unsigned long flags;
+#define CONTEXT_NO_ZEROMAP		(1 << 0)
+#define CONTEXT_NO_ERROR_CAPTURE	(1 << 1)
 	unsigned hw_id;
 	u32 user_handle;
-#define CONTEXT_NO_ZEROMAP		(1<<0)
 
 	u32 ggtt_alignment;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index e36e4bb29357..d01b3893eac0 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -988,6 +988,9 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
 		else
 			args->value = to_i915(dev)->ggtt.base.total;
 		break;
+	case I915_CONTEXT_PARAM_NO_ERROR_CAPTURE:
+		args->value = !!(ctx->flags & CONTEXT_NO_ERROR_CAPTURE);
+		break;
 	default:
 		ret = -EINVAL;
 		break;
@@ -1033,6 +1036,16 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
 			ctx->flags |= args->value ? CONTEXT_NO_ZEROMAP : 0;
 		}
 		break;
+	case I915_CONTEXT_PARAM_NO_ERROR_CAPTURE:
+		if (args->size) {
+			ret = -EINVAL;
+		} else {
+			if (args->value)
+				ctx->flags |= CONTEXT_NO_ERROR_CAPTURE;
+			else
+				ctx->flags &= ~CONTEXT_NO_ERROR_CAPTURE;
+		}
+		break;
 	default:
 		ret = -EINVAL;
 		break;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index cf444ddec66e..a066dcfcdd38 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1122,6 +1122,8 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 				rcu_read_unlock();
 			}
 
+			error->simulated |= request->ctx->flags & CONTEXT_NO_ERROR_CAPTURE;
+
 			rb = request->ringbuf;
 			error->ring[i].cpu_ring_head = rb->head;
 			error->ring[i].cpu_ring_tail = rb->tail;
@@ -1421,12 +1423,14 @@ void i915_capture_error_state(struct drm_i915_private *dev_priv,
 	i915_error_capture_msg(dev_priv, error, engine_mask, error_msg);
 	DRM_INFO("%s\n", error->error_msg);
 
-	spin_lock_irqsave(&dev_priv->gpu_error.lock, flags);
-	if (dev_priv->gpu_error.first_error == NULL) {
-		dev_priv->gpu_error.first_error = error;
-		error = NULL;
+	if (!error->simulated) {
+		spin_lock_irqsave(&dev_priv->gpu_error.lock, flags);
+		if (dev_priv->gpu_error.first_error == NULL) {
+			dev_priv->gpu_error.first_error = error;
+			error = NULL;
+		}
+		spin_unlock_irqrestore(&dev_priv->gpu_error.lock, flags);
 	}
-	spin_unlock_irqrestore(&dev_priv->gpu_error.lock, flags);
 
 	if (error) {
 		i915_error_state_free(&error->ref);
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index c17d63d8b543..d6c668e58426 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1171,6 +1171,7 @@ struct drm_i915_gem_context_param {
 #define I915_CONTEXT_PARAM_BAN_PERIOD	0x1
 #define I915_CONTEXT_PARAM_NO_ZEROMAP	0x2
 #define I915_CONTEXT_PARAM_GTT_SIZE	0x3
+#define I915_CONTEXT_PARAM_NO_ERROR_CAPTURE	0x4
 	__u64 value;
 };
 
-- 
2.8.1


* [PATCH 11/62] drm/i915: Clean up GPU hang message
From: Chris Wilson @ 2016-06-03 16:36 UTC
  To: intel-gfx

Remove some redundant kernel messages as we deduce a hung GPU and
capture the error state.

v2: Fix "hang" vs "no progress" message whilst I was there
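
For illustration, a minimal standalone sketch of the new synopsis
construction (hypothetical engine masks and names; the patch below iterates
the real engines with for_each_engine_masked()):

	#include <stdio.h>

	int main(void)
	{
		/* render ring outright hung, blitter merely stuck */
		unsigned hung = 0x3, stuck = 0x2;
		const char *name[] = { "render ring", "blitter ring" };
		char msg[80];
		int len, i;

		/* If an engine is outright hung, don't also blame the
		 * merely-stuck ones in the synopsis. */
		if (stuck != hung)
			hung &= ~stuck;

		len = snprintf(msg, sizeof(msg), "%s on ",
			       stuck == hung ? "No progress" : "Hang");
		for (i = 0; i < 2; i++)
			if (hung & (1u << i))
				len += snprintf(msg + len, sizeof(msg) - len,
						"%s, ", name[i]);
		msg[len - 2] = '\0'; /* drop the trailing ", " */
		puts(msg); /* -> "Hang on render ring" */
		return 0;
	}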

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_irq.c | 41 ++++++++++++++++++++++++++---------------
 1 file changed, 26 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 34e25fc2b90a..860235d1e0bf 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3083,9 +3083,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 		container_of(work, typeof(*dev_priv),
 			     gpu_error.hangcheck_work.work);
 	struct intel_engine_cs *engine;
-	enum intel_engine_id id;
-	int busy_count = 0, rings_hung = 0;
-	bool stuck[I915_NUM_ENGINES] = { 0 };
+	unsigned hung = 0, stuck = 0;
+	int busy_count = 0;
 #define BUSY 1
 #define KICK 5
 #define HUNG 20
@@ -3103,7 +3102,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 	 */
 	intel_uncore_arm_unclaimed_mmio_detection(dev_priv);
 
-	for_each_engine_id(engine, dev_priv, id) {
+	for_each_engine(engine, dev_priv) {
 		bool busy = intel_engine_has_waiter(engine);
 		u64 acthd;
 		u32 seqno;
@@ -3166,10 +3165,15 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 					break;
 				case HANGCHECK_HUNG:
 					engine->hangcheck.score += HUNG;
-					stuck[id] = true;
 					break;
 				}
 			}
+
+			if (engine->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG) {
+				hung |= intel_engine_flag(engine);
+				if (engine->hangcheck.action != HANGCHECK_HUNG)
+					stuck |= intel_engine_flag(engine);
+			}
 		} else {
 			engine->hangcheck.action = HANGCHECK_ACTIVE;
 
@@ -3194,17 +3198,24 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 		busy_count += busy;
 	}
 
-	for_each_engine_id(engine, dev_priv, id) {
-		if (engine->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG) {
-			DRM_INFO("%s on %s\n",
-				 stuck[id] ? "stuck" : "no progress",
-				 engine->name);
-			rings_hung |= intel_engine_flag(engine);
-		}
-	}
+	if (hung) {
+		char msg[80];
+		int len;
 
-	if (rings_hung)
-		i915_handle_error(dev_priv, rings_hung, "Engine(s) hung");
+		/* If some rings hung but others were still busy, only
+		 * blame the hanging rings in the synopsis.
+		 */
+		if (stuck != hung)
+			hung &= ~stuck;
+		len = snprintf(msg, sizeof(msg),
+			       "%s on ", stuck == hung ? "No progress" : "Hang");
+		for_each_engine_masked(engine, dev_priv, hung)
+			len += snprintf(msg + len, sizeof(msg) - len,
+					"%s, ", engine->name);
+		msg[len-2] = '\0';
+
+		return i915_handle_error(dev_priv, hung, msg);
+	}
 
 	/* Reset timer in case GPU hangs without another request being added */
 	if (busy_count)
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 12/62] drm/i915: Skip capturing an error state if we already have one
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (10 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 11/62] drm/i915: Clean up GPU hang message Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-08 11:14   ` Arun Siluvery
  2016-06-03 16:36 ` [PATCH 13/62] drm/i915: Derive GEM requests from dma-fence Chris Wilson
                   ` (51 subsequent siblings)
  63 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

As we only ever keep the first error state around, we can avoid some
work that can be quite intrusive if we don't record the error the second
time around. This does move the race whereby the user could discard one
error state as the second is being captured, but that race exists in the
current code and we hope that recapturing error state is only done for
debugging.

Note that as we discard the error state for simulated errors, igt tests
that exercise error capture continue to function.
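
A rough sketch of the early-out described above, as it might read in
i915_capture_error_state() (the hunk is only summarised in the diffstat
below, so the exact form here is an assumption; field names follow the
surrounding code):

	/* Unlocked peek at the pending error state; this is the race
	 * discussed above, and the worst case is one redundant capture. */
	if (READ_ONCE(dev_priv->gpu_error.first_error))
		return;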

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/Makefile           |   1 +
 drivers/gpu/drm/i915/i915_drv.h         | 210 +---------
 drivers/gpu/drm/i915/i915_gem.c         | 653 +------------------------------
 drivers/gpu/drm/i915/i915_gem_request.c | 659 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_request.h | 245 ++++++++++++
 drivers/gpu/drm/i915/i915_gpu_error.c   |   3 +
 6 files changed, 916 insertions(+), 855 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_gem_request.c
 create mode 100644 drivers/gpu/drm/i915/i915_gem_request.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index f20007440821..14cef1d2343c 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -32,6 +32,7 @@ i915-y += i915_cmd_parser.o \
 	  i915_gem_gtt.o \
 	  i915_gem.o \
 	  i915_gem_render_state.o \
+	  i915_gem_request.o \
 	  i915_gem_shrinker.o \
 	  i915_gem_stolen.o \
 	  i915_gem_tiling.o \
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 15a0c6bdf500..939cd45043c7 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -60,6 +60,7 @@
 #include "i915_gem.h"
 #include "i915_gem_gtt.h"
 #include "i915_gem_render_state.h"
+#include "i915_gem_request.h"
 
 /* General customization:
  */
@@ -2339,172 +2340,6 @@ static inline struct scatterlist *__sg_next(struct scatterlist *sg)
 	     (((__iter).curr += PAGE_SIZE) < (__iter).max) ||		\
 	     ((__iter) = __sgt_iter(__sg_next((__iter).sgp), false), 0))
 
-/**
- * Request queue structure.
- *
- * The request queue allows us to note sequence numbers that have been emitted
- * and may be associated with active buffers to be retired.
- *
- * By keeping this list, we can avoid having to do questionable sequence
- * number comparisons on buffer last_read|write_seqno. It also allows an
- * emission time to be associated with the request for tracking how far ahead
- * of the GPU the submission is.
- *
- * The requests are reference counted, so upon creation they should have an
- * initial reference taken using kref_init
- */
-struct drm_i915_gem_request {
-	struct kref ref;
-
-	/** On Which ring this request was generated */
-	struct drm_i915_private *i915;
-	struct intel_engine_cs *engine;
-	unsigned reset_counter;
-	struct intel_signal_node signaling;
-
-	 /** GEM sequence number associated with the previous request,
-	  * when the HWS breadcrumb is equal to this the GPU is processing
-	  * this request.
-	  */
-	u32 previous_seqno;
-
-	 /** GEM sequence number associated with this request,
-	  * when the HWS breadcrumb is equal or greater than this the GPU
-	  * has finished processing this request.
-	  */
-	u32 seqno;
-
-	/** Position in the ringbuffer of the start of the request */
-	u32 head;
-
-	/**
-	 * Position in the ringbuffer of the start of the postfix.
-	 * This is required to calculate the maximum available ringbuffer
-	 * space without overwriting the postfix.
-	 */
-	 u32 postfix;
-
-	/** Position in the ringbuffer of the end of the whole request */
-	u32 tail;
-
-	/** Preallocate space in the ringbuffer for the emitting the request */
-	u32 reserved_space;
-
-	/**
-	 * Context and ring buffer related to this request
-	 * Contexts are refcounted, so when this request is associated with a
-	 * context, we must increment the context's refcount, to guarantee that
-	 * it persists while any request is linked to it. Requests themselves
-	 * are also refcounted, so the request will only be freed when the last
-	 * reference to it is dismissed, and the code in
-	 * i915_gem_request_free() will then decrement the refcount on the
-	 * context.
-	 */
-	struct i915_gem_context *ctx;
-	struct intel_ringbuffer *ringbuf;
-
-	/**
-	 * Context related to the previous request.
-	 * As the contexts are accessed by the hardware until the switch is
-	 * completed to a new context, the hardware may still be writing
-	 * to the context object after the breadcrumb is visible. We must
-	 * not unpin/unbind/prune that object whilst still active and so
-	 * we keep the previous context pinned until the following (this)
-	 * request is retired.
-	 */
-	struct i915_gem_context *previous_context;
-
-	/** Batch buffer related to this request if any (used for
-	    error state dump only) */
-	struct drm_i915_gem_object *batch_obj;
-
-	/** Time at which this request was emitted, in jiffies. */
-	unsigned long emitted_jiffies;
-
-	/** global list entry for this request */
-	struct list_head list;
-
-	struct drm_i915_file_private *file_priv;
-	/** file_priv list entry for this request */
-	struct list_head client_list;
-
-	/** process identifier submitting this request */
-	struct pid *pid;
-
-	/**
-	 * The ELSP only accepts two elements at a time, so we queue
-	 * context/tail pairs on a given queue (ring->execlist_queue) until the
-	 * hardware is available. The queue serves a double purpose: we also use
-	 * it to keep track of the up to 2 contexts currently in the hardware
-	 * (usually one in execution and the other queued up by the GPU): We
-	 * only remove elements from the head of the queue when the hardware
-	 * informs us that an element has been completed.
-	 *
-	 * All accesses to the queue are mediated by a spinlock
-	 * (ring->execlist_lock).
-	 */
-
-	/** Execlist link in the submission queue.*/
-	struct list_head execlist_link;
-
-	/** Execlists no. of times this request has been sent to the ELSP */
-	int elsp_submitted;
-
-	/** Execlists context hardware id. */
-	unsigned ctx_hw_id;
-};
-
-struct drm_i915_gem_request * __must_check
-i915_gem_request_alloc(struct intel_engine_cs *engine,
-		       struct i915_gem_context *ctx);
-void i915_gem_request_free(struct kref *req_ref);
-int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
-				   struct drm_file *file);
-
-static inline uint32_t
-i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
-{
-	return req ? req->seqno : 0;
-}
-
-static inline struct intel_engine_cs *
-i915_gem_request_get_engine(struct drm_i915_gem_request *req)
-{
-	return req ? req->engine : NULL;
-}
-
-static inline struct drm_i915_gem_request *
-i915_gem_request_reference(struct drm_i915_gem_request *req)
-{
-	if (req)
-		kref_get(&req->ref);
-	return req;
-}
-
-static inline void
-i915_gem_request_unreference(struct drm_i915_gem_request *req)
-{
-	kref_put(&req->ref, i915_gem_request_free);
-}
-
-static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
-					   struct drm_i915_gem_request *src)
-{
-	if (src)
-		i915_gem_request_reference(src);
-
-	if (*pdst)
-		i915_gem_request_unreference(*pdst);
-
-	*pdst = src;
-}
-
-/*
- * XXX: i915_gem_request_completed should be here but currently needs the
- * definition of i915_seqno_passed() which is below. It will be moved in
- * a later patch when the call to i915_seqno_passed() is obsoleted...
- */
-
 /*
  * A command that requires special handling by the command parser.
  */
@@ -3208,37 +3043,6 @@ void i915_gem_track_fb(struct drm_i915_gem_object *old,
 		       struct drm_i915_gem_object *new,
 		       unsigned frontbuffer_bits);
 
-/**
- * Returns true if seq1 is later than seq2.
- */
-static inline bool
-i915_seqno_passed(uint32_t seq1, uint32_t seq2)
-{
-	return (int32_t)(seq1 - seq2) >= 0;
-}
-
-static inline bool i915_gem_request_started(const struct drm_i915_gem_request *req)
-{
-	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
-				 req->previous_seqno);
-}
-
-static inline bool i915_gem_request_completed(const struct drm_i915_gem_request *req)
-{
-	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
-				 req->seqno);
-}
-
-bool __i915_spin_request(const struct drm_i915_gem_request *request,
-			 int state, unsigned long timeout_us);
-static inline bool i915_spin_request(const struct drm_i915_gem_request *request,
-				     int state, unsigned long timeout_us)
-{
-	return (i915_gem_request_started(request) &&
-		__i915_spin_request(request, state, timeout_us));
-}
-
-int __must_check i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno);
 int __must_check i915_gem_set_seqno(struct drm_device *dev, u32 seqno);
 
 struct drm_i915_gem_request *
@@ -3296,18 +3100,6 @@ void i915_gem_init_swizzling(struct drm_device *dev);
 void i915_gem_cleanup_engines(struct drm_device *dev);
 int __must_check i915_gem_wait_for_idle(struct drm_i915_private *dev_priv);
 int __must_check i915_gem_suspend(struct drm_device *dev);
-void __i915_add_request(struct drm_i915_gem_request *req,
-			struct drm_i915_gem_object *batch_obj,
-			bool flush_caches);
-#define i915_add_request(req) \
-	__i915_add_request(req, NULL, true)
-#define i915_add_request_no_flush(req) \
-	__i915_add_request(req, NULL, false)
-int __i915_wait_request(struct drm_i915_gem_request *req,
-			bool interruptible,
-			s64 *timeout,
-			struct intel_rps_client *rps);
-int __must_check i915_wait_request(struct drm_i915_gem_request *req);
 int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
 int __must_check
 i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f48f54193972..95782cf85dcc 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1105,361 +1105,6 @@ put_rpm:
 	return ret;
 }
 
-static int
-i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
-{
-	if (__i915_terminally_wedged(reset_counter))
-		return -EIO;
-
-	if (__i915_reset_in_progress(reset_counter)) {
-		/* Non-interruptible callers can't handle -EAGAIN, hence return
-		 * -EIO unconditionally for these. */
-		if (!interruptible)
-			return -EIO;
-
-		return -EAGAIN;
-	}
-
-	return 0;
-}
-
-static unsigned long local_clock_us(unsigned *cpu)
-{
-	unsigned long t;
-
-	/* Cheaply and approximately convert from nanoseconds to microseconds.
-	 * The result and subsequent calculations are also defined in the same
-	 * approximate microseconds units. The principal source of timing
-	 * error here is from the simple truncation.
-	 *
-	 * Note that local_clock() is only defined wrt to the current CPU;
-	 * the comparisons are no longer valid if we switch CPUs. Instead of
-	 * blocking preemption for the entire busywait, we can detect the CPU
-	 * switch and use that as indicator of system load and a reason to
-	 * stop busywaiting, see busywait_stop().
-	 */
-	*cpu = get_cpu();
-	t = local_clock() >> 10;
-	put_cpu();
-
-	return t;
-}
-
-static bool busywait_stop(unsigned long timeout, unsigned cpu)
-{
-	unsigned this_cpu;
-
-	if (time_after(local_clock_us(&this_cpu), timeout))
-		return true;
-
-	return this_cpu != cpu;
-}
-
-bool __i915_spin_request(const struct drm_i915_gem_request *req,
-			 int state, unsigned long timeout_us)
-{
-	unsigned cpu;
-
-	/* When waiting for high frequency requests, e.g. during synchronous
-	 * rendering split between the CPU and GPU, the finite amount of time
-	 * required to set up the irq and wait upon it limits the response
-	 * rate. By busywaiting on the request completion for a short while we
-	 * can service the high frequency waits as quick as possible. However,
-	 * if it is a slow request, we want to sleep as quickly as possible.
-	 * The tradeoff between waiting and sleeping is roughly the time it
-	 * takes to sleep on a request, on the order of a microsecond.
-	 */
-
-	timeout_us += local_clock_us(&cpu);
-	do {
-		if (i915_gem_request_completed(req))
-			return true;
-
-		if (signal_pending_state(state, current))
-			break;
-
-		if (busywait_stop(timeout_us, cpu))
-			break;
-
-		cpu_relax_lowlatency();
-	} while (!need_resched());
-
-	return false;
-}
-
-/**
- * __i915_wait_request - wait until execution of request has finished
- * @req: duh!
- * @interruptible: do an interruptible wait (normally yes)
- * @timeout: in - how long to wait (NULL forever); out - how much time remaining
- *
- * Note: It is of utmost importance that the passed in seqno and reset_counter
- * values have been read by the caller in an smp safe manner. Where read-side
- * locks are involved, it is sufficient to read the reset_counter before
- * unlocking the lock that protects the seqno. For lockless tricks, the
- * reset_counter _must_ be read before, and an appropriate smp_rmb must be
- * inserted.
- *
- * Returns 0 if the request was found within the alloted time. Else returns the
- * errno with remaining time filled in timeout argument.
- */
-int __i915_wait_request(struct drm_i915_gem_request *req,
-			bool interruptible,
-			s64 *timeout,
-			struct intel_rps_client *rps)
-{
-	int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
-	DEFINE_WAIT(reset);
-	struct intel_wait wait;
-	unsigned long timeout_remain;
-	int ret = 0;
-
-	might_sleep();
-
-	if (list_empty(&req->list))
-		return 0;
-
-	if (i915_gem_request_completed(req))
-		return 0;
-
-	timeout_remain = MAX_SCHEDULE_TIMEOUT;
-	if (timeout) {
-		if (WARN_ON(*timeout < 0))
-			return -EINVAL;
-
-		if (*timeout == 0)
-			return -ETIME;
-
-		/* Record current time in case interrupted, or wedged */
-		timeout_remain = nsecs_to_jiffies_timeout(*timeout);
-		*timeout += ktime_get_raw_ns();
-	}
-
-	trace_i915_gem_request_wait_begin(req);
-
-	/* This client is about to stall waiting for the GPU. In many cases
-	 * this is undesirable and limits the throughput of the system, as
-	 * many clients cannot continue processing user input/output whilst
-	 * blocked. RPS autotuning may take tens of milliseconds to respond
-	 * to the GPU load and thus incurs additional latency for the client.
-	 * We can circumvent that by promoting the GPU frequency to maximum
-	 * before we wait. This makes the GPU throttle up much more quickly
-	 * (good for benchmarks and user experience, e.g. window animations),
-	 * but at a cost of spending more power processing the workload
-	 * (bad for battery). Not all clients even want their results
-	 * immediately and for them we should just let the GPU select its own
-	 * frequency to maximise efficiency. To prevent a single client from
-	 * forcing the clocks too high for the whole system, we only allow
-	 * each client to waitboost once in a busy period.
-	 */
-	if (INTEL_INFO(req->i915)->gen >= 6)
-		gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
-
-	/* Optimistic spin for the next ~jiffie before touching IRQs */
-	if (i915_spin_request(req, state, 5))
-		goto complete;
-
-	intel_wait_init(&wait, req->seqno);
-	set_current_state(state);
-	if (intel_engine_add_wait(req->engine, &wait))
-		/* In order to check that we haven't missed the interrupt
-		 * as we enabled it, we need to kick ourselves to do a
-		 * coherent check on the seqno before we sleep.
-		 */
-		goto wakeup;
-
-	add_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
-	for (;;) {
-		if (signal_pending_state(state, current)) {
-			ret = -ERESTARTSYS;
-			break;
-		}
-
-		/* Ensure that even if the GPU hangs, we get woken up. */
-		i915_queue_hangcheck(req->i915);
-
-		timeout_remain = io_schedule_timeout(timeout_remain);
-		if (timeout_remain == 0) {
-			ret = -ETIME;
-			break;
-		}
-
-		if (intel_wait_complete(&wait))
-			break;
-
-wakeup:
-		set_current_state(state);
-
-		/* Carefully check if the request is complete, giving time
-		 * for the seqno to be visible following the interrupt.
-		 * We also have to check in case we are kicked by the GPU
-		 * reset in order to drop the struct_mutex.
-		 */
-		if (__i915_request_irq_complete(req))
-			break;
-
-		/* Only spin if we know the GPU is processing this request */
-		if (i915_spin_request(req, state, 2))
-			break;
-	}
-	remove_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
-
-	intel_engine_remove_wait(req->engine, &wait);
-	__set_current_state(TASK_RUNNING);
-complete:
-	trace_i915_gem_request_wait_end(req);
-
-	if (timeout) {
-		*timeout -= ktime_get_raw_ns();
-		if (*timeout < 0)
-			*timeout = 0;
-
-		/*
-		 * Apparently ktime isn't accurate enough and occasionally has a
-		 * bit of mismatch in the jiffies<->nsecs<->ktime loop. So patch
-		 * things up to make the test happy. We allow up to 1 jiffy.
-		 *
-		 * This is a regrssion from the timespec->ktime conversion.
-		 */
-		if (ret == -ETIME && *timeout < jiffies_to_usecs(1)*1000)
-			*timeout = 0;
-	}
-
-	if (rps && req->seqno == req->engine->last_submitted_seqno) {
-		/* The GPU is now idle and this client has stalled.
-		 * Since no other client has submitted a request in the
-		 * meantime, assume that this client is the only one
-		 * supplying work to the GPU but is unable to keep that
-		 * work supplied because it is waiting. Since the GPU is
-		 * then never kept fully busy, RPS autoclocking will
-		 * keep the clocks relatively low, causing further delays.
-		 * Compensate by giving the synchronous client credit for
-		 * a waitboost next time.
-		 */
-		spin_lock(&req->i915->rps.client_lock);
-		list_del_init(&rps->link);
-		spin_unlock(&req->i915->rps.client_lock);
-	}
-
-	return ret;
-}
-
-int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
-				   struct drm_file *file)
-{
-	struct drm_i915_file_private *file_priv;
-
-	WARN_ON(!req || !file || req->file_priv);
-
-	if (!req || !file)
-		return -EINVAL;
-
-	if (req->file_priv)
-		return -EINVAL;
-
-	file_priv = file->driver_priv;
-
-	spin_lock(&file_priv->mm.lock);
-	req->file_priv = file_priv;
-	list_add_tail(&req->client_list, &file_priv->mm.request_list);
-	spin_unlock(&file_priv->mm.lock);
-
-	req->pid = get_pid(task_pid(current));
-
-	return 0;
-}
-
-static inline void
-i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
-{
-	struct drm_i915_file_private *file_priv = request->file_priv;
-
-	if (!file_priv)
-		return;
-
-	spin_lock(&file_priv->mm.lock);
-	list_del(&request->client_list);
-	request->file_priv = NULL;
-	spin_unlock(&file_priv->mm.lock);
-
-	put_pid(request->pid);
-	request->pid = NULL;
-}
-
-static void i915_gem_request_retire(struct drm_i915_gem_request *request)
-{
-	trace_i915_gem_request_retire(request);
-
-	/* We know the GPU must have read the request to have
-	 * sent us the seqno + interrupt, so use the position
-	 * of tail of the request to update the last known position
-	 * of the GPU head.
-	 *
-	 * Note this requires that we are always called in request
-	 * completion order.
-	 */
-	request->ringbuf->last_retired_head = request->postfix;
-
-	list_del_init(&request->list);
-	i915_gem_request_remove_from_client(request);
-
-	if (request->previous_context) {
-		if (i915.enable_execlists)
-			intel_lr_context_unpin(request->previous_context,
-					       request->engine);
-	}
-
-	i915_gem_context_unreference(request->ctx);
-	i915_gem_request_unreference(request);
-}
-
-static void
-__i915_gem_request_retire__upto(struct drm_i915_gem_request *req)
-{
-	struct intel_engine_cs *engine = req->engine;
-	struct drm_i915_gem_request *tmp;
-
-	lockdep_assert_held(&engine->i915->dev->struct_mutex);
-
-	if (list_empty(&req->list))
-		return;
-
-	do {
-		tmp = list_first_entry(&engine->request_list,
-				       typeof(*tmp), list);
-
-		i915_gem_request_retire(tmp);
-	} while (tmp != req);
-
-	WARN_ON(i915_verify_lists(engine->dev));
-}
-
-/**
- * Waits for a request to be signaled, and cleans up the
- * request and object lists appropriately for that event.
- */
-int
-i915_wait_request(struct drm_i915_gem_request *req)
-{
-	struct drm_i915_private *dev_priv = req->i915;
-	bool interruptible;
-	int ret;
-
-	interruptible = dev_priv->mm.interruptible;
-
-	BUG_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex));
-
-	ret = __i915_wait_request(req, interruptible, NULL, NULL);
-	if (ret)
-		return ret;
-
-	/* If the GPU hung, we want to keep the requests to find the guilty. */
-	if (req->reset_counter == i915_reset_counter(&dev_priv->gpu_error))
-		__i915_gem_request_retire__upto(req);
-
-	return 0;
-}
-
 /**
  * Ensures that all rendering to the object has completed and the object is
  * safe to unbind from the GTT or access from the CPU.
@@ -1514,7 +1159,7 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
 		i915_gem_object_retire__write(obj);
 
 	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
-		__i915_gem_request_retire__upto(req);
+		i915_gem_request_retire_upto(req);
 }
 
 /* A nonblocking variant of the above wait. This is a highly dangerous routine
@@ -2515,194 +2160,6 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
 	drm_gem_object_unreference(&obj->base);
 }
 
-static int
-i915_gem_init_seqno(struct drm_i915_private *dev_priv, u32 seqno)
-{
-	struct intel_engine_cs *engine;
-	int ret;
-
-	/* Carefully retire all requests without writing to the rings */
-	for_each_engine(engine, dev_priv) {
-		ret = intel_engine_idle(engine);
-		if (ret)
-			return ret;
-	}
-	i915_gem_retire_requests(dev_priv);
-
-	/* If the seqno wraps around, we need to clear the breadcrumb rbtree */
-	if (!i915_seqno_passed(seqno, dev_priv->next_seqno)) {
-		while (intel_kick_waiters(dev_priv) ||
-		       intel_kick_signalers(dev_priv))
-			yield();
-	}
-
-	/* Finally reset hw state */
-	for_each_engine(engine, dev_priv)
-		intel_ring_init_seqno(engine, seqno);
-
-	return 0;
-}
-
-int i915_gem_set_seqno(struct drm_device *dev, u32 seqno)
-{
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	int ret;
-
-	if (seqno == 0)
-		return -EINVAL;
-
-	/* HWS page needs to be set less than what we
-	 * will inject to ring
-	 */
-	ret = i915_gem_init_seqno(dev_priv, seqno - 1);
-	if (ret)
-		return ret;
-
-	/* Carefully set the last_seqno value so that wrap
-	 * detection still works
-	 */
-	dev_priv->next_seqno = seqno;
-	dev_priv->last_seqno = seqno - 1;
-	if (dev_priv->last_seqno == 0)
-		dev_priv->last_seqno--;
-
-	return 0;
-}
-
-int
-i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
-{
-	/* reserve 0 for non-seqno */
-	if (dev_priv->next_seqno == 0) {
-		int ret = i915_gem_init_seqno(dev_priv, 0);
-		if (ret)
-			return ret;
-
-		dev_priv->next_seqno = 1;
-	}
-
-	*seqno = dev_priv->last_seqno = dev_priv->next_seqno++;
-	return 0;
-}
-
-static void i915_gem_mark_busy(struct drm_i915_private *dev_priv,
-			       const struct intel_engine_cs *engine)
-{
-	dev_priv->gt.active_engines |= intel_engine_flag(engine);
-	if (dev_priv->gt.awake)
-		return;
-
-	intel_runtime_pm_get_noresume(dev_priv);
-	dev_priv->gt.awake = true;
-
-	intel_enable_gt_powersave(dev_priv);
-	i915_update_gfx_val(dev_priv);
-	if (INTEL_INFO(dev_priv)->gen >= 6)
-		gen6_rps_busy(dev_priv);
-
-	queue_delayed_work(dev_priv->wq,
-			   &dev_priv->gt.retire_work,
-			   round_jiffies_up_relative(HZ));
-}
-
-/*
- * NB: This function is not allowed to fail. Doing so would mean the the
- * request is not being tracked for completion but the work itself is
- * going to happen on the hardware. This would be a Bad Thing(tm).
- */
-void __i915_add_request(struct drm_i915_gem_request *request,
-			struct drm_i915_gem_object *obj,
-			bool flush_caches)
-{
-	struct intel_engine_cs *engine;
-	struct drm_i915_private *dev_priv;
-	struct intel_ringbuffer *ringbuf;
-	u32 request_start;
-	u32 reserved_tail;
-	int ret;
-
-	if (WARN_ON(request == NULL))
-		return;
-
-	engine = request->engine;
-	dev_priv = request->i915;
-	ringbuf = request->ringbuf;
-
-	/*
-	 * To ensure that this call will not fail, space for its emissions
-	 * should already have been reserved in the ring buffer. Let the ring
-	 * know that it is time to use that space up.
-	 */
-	request_start = intel_ring_get_tail(ringbuf);
-	reserved_tail = request->reserved_space;
-	request->reserved_space = 0;
-
-	/*
-	 * Emit any outstanding flushes - execbuf can fail to emit the flush
-	 * after having emitted the batchbuffer command. Hence we need to fix
-	 * things up similar to emitting the lazy request. The difference here
-	 * is that the flush _must_ happen before the next request, no matter
-	 * what.
-	 */
-	if (flush_caches) {
-		if (i915.enable_execlists)
-			ret = logical_ring_flush_all_caches(request);
-		else
-			ret = intel_ring_flush_all_caches(request);
-		/* Not allowed to fail! */
-		WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
-	}
-
-	trace_i915_gem_request_add(request);
-
-	request->head = request_start;
-
-	/* Whilst this request exists, batch_obj will be on the
-	 * active_list, and so will hold the active reference. Only when this
-	 * request is retired will the the batch_obj be moved onto the
-	 * inactive_list and lose its active reference. Hence we do not need
-	 * to explicitly hold another reference here.
-	 */
-	request->batch_obj = obj;
-
-	/* Seal the request and mark it as pending execution. Note that
-	 * we may inspect this state, without holding any locks, during
-	 * hangcheck. Hence we apply the barrier to ensure that we do not
-	 * see a more recent value in the hws than we are tracking.
-	 */
-	request->emitted_jiffies = jiffies;
-	request->previous_seqno = engine->last_submitted_seqno;
-	smp_store_mb(engine->last_submitted_seqno, request->seqno);
-	list_add_tail(&request->list, &engine->request_list);
-
-	/* Record the position of the start of the request so that
-	 * should we detect the updated seqno part-way through the
-	 * GPU processing the request, we never over-estimate the
-	 * position of the head.
-	 */
-	request->postfix = intel_ring_get_tail(ringbuf);
-
-	if (i915.enable_execlists)
-		ret = engine->emit_request(request);
-	else {
-		ret = engine->add_request(request);
-
-		request->tail = intel_ring_get_tail(ringbuf);
-	}
-	/* Not allowed to fail! */
-	WARN(ret, "emit|add_request failed: %d!\n", ret);
-	/* Sanity check that the reserved size was large enough. */
-	ret = intel_ring_get_tail(ringbuf) - request_start;
-	if (ret < 0)
-		ret += ringbuf->size;
-	WARN_ONCE(ret > reserved_tail,
-		  "Not enough space reserved (%d bytes) "
-		  "for adding the request (%d bytes)\n",
-		  reserved_tail, ret);
-
-	i915_gem_mark_busy(dev_priv, engine);
-}
-
 static bool i915_context_is_banned(const struct i915_gem_context *ctx)
 {
 	unsigned long elapsed;
@@ -2734,102 +2191,6 @@ static void i915_set_reset_status(struct i915_gem_context *ctx,
 	}
 }
 
-void i915_gem_request_free(struct kref *req_ref)
-{
-	struct drm_i915_gem_request *req = container_of(req_ref,
-						 typeof(*req), ref);
-	kmem_cache_free(req->i915->requests, req);
-}
-
-static inline int
-__i915_gem_request_alloc(struct intel_engine_cs *engine,
-			 struct i915_gem_context *ctx,
-			 struct drm_i915_gem_request **req_out)
-{
-	struct drm_i915_private *dev_priv = engine->i915;
-	unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
-	struct drm_i915_gem_request *req;
-	int ret;
-
-	if (!req_out)
-		return -EINVAL;
-
-	*req_out = NULL;
-
-	/* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
-	 * EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
-	 * and restart.
-	 */
-	ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
-	if (ret)
-		return ret;
-
-	req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
-	if (req == NULL)
-		return -ENOMEM;
-
-	ret = i915_gem_get_seqno(engine->i915, &req->seqno);
-	if (ret)
-		goto err;
-
-	kref_init(&req->ref);
-	req->i915 = dev_priv;
-	req->engine = engine;
-	req->reset_counter = reset_counter;
-	req->ctx  = ctx;
-	i915_gem_context_reference(req->ctx);
-
-	/*
-	 * Reserve space in the ring buffer for all the commands required to
-	 * eventually emit this request. This is to guarantee that the
-	 * i915_add_request() call can't fail. Note that the reserve may need
-	 * to be redone if the request is not actually submitted straight
-	 * away, e.g. because a GPU scheduler has deferred it.
-	 */
-	req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;
-
-	if (i915.enable_execlists)
-		ret = intel_logical_ring_alloc_request_extras(req);
-	else
-		ret = intel_ring_alloc_request_extras(req);
-	if (ret)
-		goto err_ctx;
-
-	*req_out = req;
-	return 0;
-
-err_ctx:
-	i915_gem_context_unreference(ctx);
-err:
-	kmem_cache_free(dev_priv->requests, req);
-	return ret;
-}
-
-/**
- * i915_gem_request_alloc - allocate a request structure
- *
- * @engine: engine that we wish to issue the request on.
- * @ctx: context that the request will be associated with.
- *       This can be NULL if the request is not directly related to
- *       any specific user context, in which case this function will
- *       choose an appropriate context to use.
- *
- * Returns a pointer to the allocated request if successful,
- * or an error code if not.
- */
-struct drm_i915_gem_request *
-i915_gem_request_alloc(struct intel_engine_cs *engine,
-		       struct i915_gem_context *ctx)
-{
-	struct drm_i915_gem_request *req;
-	int err;
-
-	if (ctx == NULL)
-		ctx = engine->i915->kernel_context;
-	err = __i915_gem_request_alloc(engine, ctx, &req);
-	return err ? ERR_PTR(err) : req;
-}
-
 struct drm_i915_gem_request *
 i915_gem_find_active_request(struct intel_engine_cs *engine)
 {
@@ -2903,14 +2264,14 @@ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
 	 * implicit references on things like e.g. ppgtt address spaces through
 	 * the request.
 	 */
-	while (!list_empty(&engine->request_list)) {
+	if (!list_empty(&engine->request_list)) {
 		struct drm_i915_gem_request *request;
 
-		request = list_first_entry(&engine->request_list,
-					   struct drm_i915_gem_request,
-					   list);
+		request = list_last_entry(&engine->request_list,
+					  struct drm_i915_gem_request,
+					  list);
 
-		i915_gem_request_retire(request);
+		i915_gem_request_retire_upto(request);
 	}
 
 	/* Having flushed all requests from all queues, we know that all
@@ -2974,7 +2335,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
 		if (!i915_gem_request_completed(request))
 			break;
 
-		i915_gem_request_retire(request);
+		i915_gem_request_retire_upto(request);
 	}
 
 	/* Move any buffers on the active list that are no longer referenced
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
new file mode 100644
index 000000000000..34b2f151cdfc
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -0,0 +1,659 @@
+/*
+ * Copyright © 2008-2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_drv.h"
+
+static int i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
+{
+	if (__i915_terminally_wedged(reset_counter))
+		return -EIO;
+
+	if (__i915_reset_in_progress(reset_counter)) {
+		/* Non-interruptible callers can't handle -EAGAIN, hence return
+		 * -EIO unconditionally for these. */
+		if (!interruptible)
+			return -EIO;
+
+		return -EAGAIN;
+	}
+
+	return 0;
+}
+
+static int i915_gem_init_seqno(struct drm_i915_private *dev_priv, u32 seqno)
+{
+	struct intel_engine_cs *engine;
+	int ret;
+
+	/* Carefully retire all requests without writing to the rings */
+	for_each_engine(engine, dev_priv) {
+		ret = intel_engine_idle(engine);
+		if (ret)
+			return ret;
+	}
+	i915_gem_retire_requests(dev_priv);
+
+	/* If the seqno wraps around, we need to clear the breadcrumb rbtree */
+	if (!i915_seqno_passed(seqno, dev_priv->next_seqno)) {
+		while (intel_kick_waiters(dev_priv) ||
+		       intel_kick_signalers(dev_priv))
+			yield();
+	}
+
+	/* Finally reset hw state */
+	for_each_engine(engine, dev_priv)
+		intel_ring_init_seqno(engine, seqno);
+
+	return 0;
+}
+
+int i915_gem_set_seqno(struct drm_device *dev, u32 seqno)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	int ret;
+
+	if (seqno == 0)
+		return -EINVAL;
+
+	/* HWS page needs to be set less than what we
+	 * will inject to ring
+	 */
+	ret = i915_gem_init_seqno(dev_priv, seqno - 1);
+	if (ret)
+		return ret;
+
+	/* Carefully set the last_seqno value so that wrap
+	 * detection still works
+	 */
+	dev_priv->next_seqno = seqno;
+	dev_priv->last_seqno = seqno - 1;
+	if (dev_priv->last_seqno == 0)
+		dev_priv->last_seqno--;
+
+	return 0;
+}
+
+static int i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
+{
+	/* reserve 0 for non-seqno */
+	if (unlikely(dev_priv->next_seqno == 0)) {
+		int ret = i915_gem_init_seqno(dev_priv, 0);
+		if (ret)
+			return ret;
+
+		dev_priv->next_seqno = 1;
+	}
+
+	*seqno = dev_priv->last_seqno = dev_priv->next_seqno++;
+	return 0;
+}
+
+static inline int
+__i915_gem_request_alloc(struct intel_engine_cs *engine,
+			 struct i915_gem_context *ctx,
+			 struct drm_i915_gem_request **req_out)
+{
+	struct drm_i915_private *dev_priv = engine->i915;
+	unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+	struct drm_i915_gem_request *req;
+	int ret;
+
+	if (!req_out)
+		return -EINVAL;
+
+	*req_out = NULL;
+
+	/* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
+	 * EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
+	 * and restart.
+	 */
+	ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
+	if (ret)
+		return ret;
+
+	req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
+	if (req == NULL)
+		return -ENOMEM;
+
+	ret = i915_gem_get_seqno(dev_priv, &req->seqno);
+	if (ret)
+		goto err;
+
+	kref_init(&req->ref);
+	req->i915 = dev_priv;
+	req->engine = engine;
+	req->reset_counter = reset_counter;
+	req->ctx = ctx;
+	i915_gem_context_reference(ctx);
+
+	/*
+	 * Reserve space in the ring buffer for all the commands required to
+	 * eventually emit this request. This is to guarantee that the
+	 * i915_add_request() call can't fail. Note that the reserve may need
+	 * to be redone if the request is not actually submitted straight
+	 * away, e.g. because a GPU scheduler has deferred it.
+	 */
+	req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;
+
+	if (i915.enable_execlists)
+		ret = intel_logical_ring_alloc_request_extras(req);
+	else
+		ret = intel_ring_alloc_request_extras(req);
+	if (ret)
+		goto err_ctx;
+
+	*req_out = req;
+	return 0;
+
+err_ctx:
+	i915_gem_context_unreference(ctx);
+err:
+	kmem_cache_free(dev_priv->requests, req);
+	return ret;
+}
+
+/**
+ * i915_gem_request_alloc - allocate a request structure
+ *
+ * @engine: engine that we wish to issue the request on.
+ * @ctx: context that the request will be associated with.
+ *       This can be NULL if the request is not directly related to
+ *       any specific user context, in which case this function will
+ *       choose an appropriate context to use.
+ *
+ * Returns a pointer to the allocated request if successful,
+ * or an error code if not.
+ */
+struct drm_i915_gem_request *
+i915_gem_request_alloc(struct intel_engine_cs *engine,
+		       struct i915_gem_context *ctx)
+{
+	struct drm_i915_gem_request *req;
+	int err;
+
+	if (ctx == NULL)
+		ctx = engine->i915->kernel_context;
+	err = __i915_gem_request_alloc(engine, ctx, &req);
+	return err ? ERR_PTR(err) : req;
+}
+
+int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
+				   struct drm_file *file)
+{
+	struct drm_i915_private *dev_private;
+	struct drm_i915_file_private *file_priv;
+
+	WARN_ON(!req || !file || req->file_priv);
+
+	if (!req || !file)
+		return -EINVAL;
+
+	if (req->file_priv)
+		return -EINVAL;
+
+	dev_private = req->i915;
+	file_priv = file->driver_priv;
+
+	spin_lock(&file_priv->mm.lock);
+	req->file_priv = file_priv;
+	list_add_tail(&req->client_list, &file_priv->mm.request_list);
+	spin_unlock(&file_priv->mm.lock);
+
+	req->pid = get_pid(task_pid(current));
+
+	return 0;
+}
+
+static inline void
+i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
+{
+	struct drm_i915_file_private *file_priv = request->file_priv;
+
+	if (!file_priv)
+		return;
+
+	spin_lock(&file_priv->mm.lock);
+	list_del(&request->client_list);
+	request->file_priv = NULL;
+	spin_unlock(&file_priv->mm.lock);
+
+	put_pid(request->pid);
+	request->pid = NULL;
+}
+
+static void i915_gem_request_retire(struct drm_i915_gem_request *request)
+{
+	trace_i915_gem_request_retire(request);
+	list_del_init(&request->list);
+
+	/* We know the GPU must have read the request to have
+	 * sent us the seqno + interrupt, so use the position
+	 * of tail of the request to update the last known position
+	 * of the GPU head.
+	 *
+	 * Note this requires that we are always called in request
+	 * completion order.
+	 */
+	request->ringbuf->last_retired_head = request->postfix;
+
+	i915_gem_request_remove_from_client(request);
+
+	if (request->previous_context) {
+		if (i915.enable_execlists)
+			intel_lr_context_unpin(request->previous_context,
+					       request->engine);
+	}
+
+	i915_gem_context_unreference(request->ctx);
+	i915_gem_request_unreference(request);
+}
+
+void i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
+{
+	struct intel_engine_cs *engine = req->engine;
+	struct drm_i915_gem_request *tmp;
+
+	lockdep_assert_held(&req->i915->dev->struct_mutex);
+
+	if (list_empty(&req->list))
+		return;
+
+	do {
+		tmp = list_first_entry(&engine->request_list,
+				       typeof(*tmp), list);
+
+		i915_gem_request_retire(tmp);
+	} while (tmp != req);
+
+	WARN_ON(i915_verify_lists(engine->dev));
+}
+
+static void i915_gem_mark_busy(struct drm_i915_private *dev_priv,
+			       const struct intel_engine_cs *engine)
+{
+	dev_priv->gt.active_engines |= intel_engine_flag(engine);
+	if (dev_priv->gt.awake)
+		return;
+
+	intel_runtime_pm_get_noresume(dev_priv);
+	dev_priv->gt.awake = true;
+
+	intel_enable_gt_powersave(dev_priv);
+	i915_update_gfx_val(dev_priv);
+	if (INTEL_INFO(dev_priv)->gen >= 6)
+		gen6_rps_busy(dev_priv);
+
+	queue_delayed_work(dev_priv->wq,
+			   &dev_priv->gt.retire_work,
+			   round_jiffies_up_relative(HZ));
+}
+
+/*
+ * NB: This function is not allowed to fail. Doing so would mean that the
+ * request is not being tracked for completion but the work itself is
+ * going to happen on the hardware. This would be a Bad Thing(tm).
+ */
+void __i915_add_request(struct drm_i915_gem_request *request,
+			struct drm_i915_gem_object *obj,
+			bool flush_caches)
+{
+	struct intel_engine_cs *engine;
+	struct drm_i915_private *dev_priv;
+	struct intel_ringbuffer *ringbuf;
+	u32 request_start;
+	u32 reserved_tail;
+	int ret;
+
+	if (WARN_ON(request == NULL))
+		return;
+
+	engine = request->engine;
+	dev_priv = request->i915;
+	ringbuf = request->ringbuf;
+
+	/*
+	 * To ensure that this call will not fail, space for its emissions
+	 * should already have been reserved in the ring buffer. Let the ring
+	 * know that it is time to use that space up.
+	 */
+	request_start = intel_ring_get_tail(ringbuf);
+	reserved_tail = request->reserved_space;
+	request->reserved_space = 0;
+
+	/*
+	 * Emit any outstanding flushes - execbuf can fail to emit the flush
+	 * after having emitted the batchbuffer command. Hence we need to fix
+	 * things up similar to emitting the lazy request. The difference here
+	 * is that the flush _must_ happen before the next request, no matter
+	 * what.
+	 */
+	if (flush_caches) {
+		if (i915.enable_execlists)
+			ret = logical_ring_flush_all_caches(request);
+		else
+			ret = intel_ring_flush_all_caches(request);
+		/* Not allowed to fail! */
+		WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
+	}
+
+	trace_i915_gem_request_add(request);
+
+	request->head = request_start;
+
+	/* Whilst this request exists, batch_obj will be on the
+	 * active_list, and so will hold the active reference. Only when this
+	 * request is retired will the batch_obj be moved onto the
+	 * inactive_list and lose its active reference. Hence we do not need
+	 * to explicitly hold another reference here.
+	 */
+	request->batch_obj = obj;
+
+	/* Seal the request and mark it as pending execution. Note that
+	 * we may inspect this state, without holding any locks, during
+	 * hangcheck. Hence we apply the barrier to ensure that we do not
+	 * see a more recent value in the hws than we are tracking.
+	 */
+	request->emitted_jiffies = jiffies;
+	request->previous_seqno = engine->last_submitted_seqno;
+	smp_store_mb(engine->last_submitted_seqno, request->seqno);
+	list_add_tail(&request->list, &engine->request_list);
+
+	/* Record the position of the start of the request so that
+	 * should we detect the updated seqno part-way through the
+	 * GPU processing the request, we never over-estimate the
+	 * position of the head.
+	 */
+	request->postfix = intel_ring_get_tail(ringbuf);
+
+	if (i915.enable_execlists)
+		ret = engine->emit_request(request);
+	else {
+		ret = engine->add_request(request);
+
+		request->tail = intel_ring_get_tail(ringbuf);
+	}
+	/* Not allowed to fail! */
+	WARN(ret, "emit|add_request failed: %d!\n", ret);
+	/* Sanity check that the reserved size was large enough. */
+	ret = intel_ring_get_tail(ringbuf) - request_start;
+	if (ret < 0)
+		ret += ringbuf->size;
+	WARN_ONCE(ret > reserved_tail,
+		  "Not enough space reserved (%d bytes) "
+		  "for adding the request (%d bytes)\n",
+		  reserved_tail, ret);
+
+	i915_gem_mark_busy(dev_priv, engine);
+}
+
+static unsigned long local_clock_us(unsigned *cpu)
+{
+	unsigned long t;
+
+	/* Cheaply and approximately convert from nanoseconds to microseconds.
+	 * The result and subsequent calculations are also defined in the same
+	 * approximate microseconds units. The principal source of timing
+	 * error here is from the simple truncation.
+	 *
+	 * Note that local_clock() is only defined with respect to the current CPU;
+	 * the comparisons are no longer valid if we switch CPUs. Instead of
+	 * blocking preemption for the entire busywait, we can detect the CPU
+	 * switch and use that as indicator of system load and a reason to
+	 * stop busywaiting, see busywait_stop().
+	 */
+	*cpu = get_cpu();
+	t = local_clock() >> 10;
+	put_cpu();
+
+	return t;
+}
+
+static bool busywait_stop(unsigned long timeout, unsigned cpu)
+{
+	unsigned this_cpu;
+
+	if (time_after(local_clock_us(&this_cpu), timeout))
+		return true;
+
+	return this_cpu != cpu;
+}
+
+bool __i915_spin_request(const struct drm_i915_gem_request *req,
+			 int state, unsigned long timeout_us)
+{
+	unsigned cpu;
+
+	/* When waiting for high frequency requests, e.g. during synchronous
+	 * rendering split between the CPU and GPU, the finite amount of time
+	 * required to set up the irq and wait upon it limits the response
+	 * rate. By busywaiting on the request completion for a short while we
+	 * can service the high frequency waits as quickly as possible. However,
+	 * if it is a slow request, we want to sleep as quickly as possible.
+	 * The tradeoff between waiting and sleeping is roughly the time it
+	 * takes to sleep on a request, on the order of a microsecond.
+	 */
+
+	timeout_us += local_clock_us(&cpu);
+	do {
+		if (i915_gem_request_completed(req))
+			return true;
+
+		if (signal_pending_state(state, current))
+			break;
+
+		if (busywait_stop(timeout_us, cpu))
+			break;
+
+		cpu_relax_lowlatency();
+	} while (!need_resched());
+
+	return false;
+}
+
+/**
+ * __i915_wait_request - wait until execution of request has finished
+ * @req: duh!
+ * @interruptible: do an interruptible wait (normally yes)
+ * @timeout: in - how long to wait (NULL forever); out - how much time remaining
+ *
+ * Note: It is of utmost importance that the passed in seqno and reset_counter
+ * values have been read by the caller in an smp safe manner. Where read-side
+ * locks are involved, it is sufficient to read the reset_counter before
+ * unlocking the lock that protects the seqno. For lockless tricks, the
+ * reset_counter _must_ be read before, and an appropriate smp_rmb must be
+ * inserted.
+ *
+ * Returns 0 if the request was found within the allotted time. Else returns the
+ * errno with remaining time filled in timeout argument.
+ */
+int __i915_wait_request(struct drm_i915_gem_request *req,
+			bool interruptible,
+			s64 *timeout,
+			struct intel_rps_client *rps)
+{
+	int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
+	DEFINE_WAIT(reset);
+	struct intel_wait wait;
+	unsigned long timeout_remain;
+	int ret = 0;
+
+	might_sleep();
+
+	if (list_empty(&req->list))
+		return 0;
+
+	if (i915_gem_request_completed(req))
+		return 0;
+
+	timeout_remain = MAX_SCHEDULE_TIMEOUT;
+	if (timeout) {
+		if (WARN_ON(*timeout < 0))
+			return -EINVAL;
+
+		if (*timeout == 0)
+			return -ETIME;
+
+		/* Record current time in case interrupted, or wedged */
+		timeout_remain = nsecs_to_jiffies_timeout(*timeout);
+		*timeout += ktime_get_raw_ns();
+	}
+
+	trace_i915_gem_request_wait_begin(req);
+
+	/* This client is about to stall waiting for the GPU. In many cases
+	 * this is undesirable and limits the throughput of the system, as
+	 * many clients cannot continue processing user input/output whilst
+	 * blocked. RPS autotuning may take tens of milliseconds to respond
+	 * to the GPU load and thus incurs additional latency for the client.
+	 * We can circumvent that by promoting the GPU frequency to maximum
+	 * before we wait. This makes the GPU throttle up much more quickly
+	 * (good for benchmarks and user experience, e.g. window animations),
+	 * but at a cost of spending more power processing the workload
+	 * (bad for battery). Not all clients even want their results
+	 * immediately and for them we should just let the GPU select its own
+	 * frequency to maximise efficiency. To prevent a single client from
+	 * forcing the clocks too high for the whole system, we only allow
+	 * each client to waitboost once in a busy period.
+	 */
+	if (INTEL_INFO(req->i915)->gen >= 6)
+		gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
+
+	/* Optimistic spin for the next ~jiffie before touching IRQs */
+	if (i915_spin_request(req, state, 5))
+		goto complete;
+
+	intel_wait_init(&wait, req->seqno);
+	set_current_state(state);
+	if (intel_engine_add_wait(req->engine, &wait))
+		/* In order to check that we haven't missed the interrupt
+		 * as we enabled it, we need to kick ourselves to do a
+		 * coherent check on the seqno before we sleep.
+		 */
+		goto wakeup;
+
+	add_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
+	for (;;) {
+		if (signal_pending_state(state, current)) {
+			ret = -ERESTARTSYS;
+			break;
+		}
+
+		/* Ensure that even if the GPU hangs, we get woken up. */
+		i915_queue_hangcheck(req->i915);
+
+		timeout_remain = io_schedule_timeout(timeout_remain);
+		if (timeout_remain == 0) {
+			ret = -ETIME;
+			break;
+		}
+
+		if (intel_wait_complete(&wait))
+			break;
+
+wakeup:
+		set_current_state(state);
+
+		/* Carefully check if the request is complete, giving time
+		 * for the seqno to be visible following the interrupt.
+		 * We also have to check in case we are kicked by the GPU
+		 * reset in order to drop the struct_mutex.
+		 */
+		if (__i915_request_irq_complete(req))
+			break;
+
+		/* Only spin if we know the GPU is processing this request */
+		if (i915_spin_request(req, state, 2))
+			break;
+	}
+	remove_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
+
+	intel_engine_remove_wait(req->engine, &wait);
+	__set_current_state(TASK_RUNNING);
+complete:
+	trace_i915_gem_request_wait_end(req);
+
+	if (timeout) {
+		*timeout -= ktime_get_raw_ns();
+		if (*timeout < 0)
+			*timeout = 0;
+
+		/*
+		 * Apparently ktime isn't accurate enough and occasionally has a
+		 * bit of mismatch in the jiffies<->nsecs<->ktime loop. So patch
+		 * things up to make the test happy. We allow up to 1 jiffy.
+		 *
+		 * This is a regression from the timespec->ktime conversion.
+		 */
+		if (ret == -ETIME && *timeout < jiffies_to_usecs(1)*1000)
+			*timeout = 0;
+	}
+
+	if (rps && req->seqno == req->engine->last_submitted_seqno) {
+		/* The GPU is now idle and this client has stalled.
+		 * Since no other client has submitted a request in the
+		 * meantime, assume that this client is the only one
+		 * supplying work to the GPU but is unable to keep that
+		 * work supplied because it is waiting. Since the GPU is
+		 * then never kept fully busy, RPS autoclocking will
+		 * keep the clocks relatively low, causing further delays.
+		 * Compensate by giving the synchronous client credit for
+		 * a waitboost next time.
+		 */
+		spin_lock(&req->i915->rps.client_lock);
+		list_del_init(&rps->link);
+		spin_unlock(&req->i915->rps.client_lock);
+	}
+
+	return ret;
+}
+
+/**
+ * Waits for a request to be signaled, and cleans up the
+ * request and object lists appropriately for that event.
+ */
+int i915_wait_request(struct drm_i915_gem_request *req)
+{
+	int ret;
+
+	BUG_ON(req == NULL);
+	BUG_ON(!mutex_is_locked(&req->i915->dev->struct_mutex));
+
+	ret = __i915_wait_request(req, req->i915->mm.interruptible,
+				  NULL, NULL);
+	if (ret)
+		return ret;
+
+	/* If the GPU hung, we want to keep the requests to find the guilty. */
+	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
+		i915_gem_request_retire_upto(req);
+
+	return 0;
+}
+
+void i915_gem_request_free(struct kref *req_ref)
+{
+	struct drm_i915_gem_request *req =
+		container_of(req_ref, typeof(*req), ref);
+	kmem_cache_free(req->i915->requests, req);
+}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
new file mode 100644
index 000000000000..166e0733d2d8
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -0,0 +1,245 @@
+/*
+ * Copyright © 2008-2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef I915_GEM_REQUEST_H
+#define I915_GEM_REQUEST_H
+
+/**
+ * Request queue structure.
+ *
+ * The request queue allows us to note sequence numbers that have been emitted
+ * and may be associated with active buffers to be retired.
+ *
+ * By keeping this list, we can avoid having to do questionable sequence
+ * number comparisons on buffer last_read|write_seqno. It also allows an
+ * emission time to be associated with the request for tracking how far ahead
+ * of the GPU the submission is.
+ *
+ * The requests are reference counted, so upon creation they should have an
+ * initial reference taken using kref_init
+ */
+struct drm_i915_gem_request {
+	struct kref ref;
+
+	/** On which ring this request was generated */
+	struct drm_i915_private *i915;
+
+	/**
+	 * Context and ring buffer related to this request
+	 * Contexts are refcounted, so when this request is associated with a
+	 * context, we must increment the context's refcount, to guarantee that
+	 * it persists while any request is linked to it. Requests themselves
+	 * are also refcounted, so the request will only be freed when the last
+	 * reference to it is dismissed, and the code in
+	 * i915_gem_request_free() will then decrement the refcount on the
+	 * context.
+	 */
+	struct i915_gem_context *ctx;
+	struct intel_engine_cs *engine;
+	struct intel_ringbuffer *ringbuf;
+	struct intel_signal_node signaling;
+
+	unsigned reset_counter;
+
+	/** GEM sequence number associated with the previous request,
+	 * when the HWS breadcrumb is equal to this the GPU is processing
+	 * this request.
+	 */
+	u32 previous_seqno;
+
+	/** GEM sequence number associated with this request,
+	 * when the HWS breadcrumb is equal or greater than this the GPU
+	 * has finished processing this request.
+	 */
+	u32 seqno;
+
+	/** Position in the ringbuffer of the start of the request */
+	u32 head;
+
+	/**
+	 * Position in the ringbuffer of the start of the postfix.
+	 * This is required to calculate the maximum available ringbuffer
+	 * space without overwriting the postfix.
+	 */
+	u32 postfix;
+
+	/** Position in the ringbuffer of the end of the whole request */
+	u32 tail;
+
+	/** Preallocated space in the ringbuffer for emitting the request */
+	u32 reserved_space;
+
+
+	/**
+	 * Context related to the previous request.
+	 * As the contexts are accessed by the hardware until the switch is
+	 * completed to a new context, the hardware may still be writing
+	 * to the context object after the breadcrumb is visible. We must
+	 * not unpin/unbind/prune that object whilst still active and so
+	 * we keep the previous context pinned until the following (this)
+	 * request is retired.
+	 */
+	struct i915_gem_context *previous_context;
+
+
+	/** Batch buffer related to this request if any (used for
+	 * error state dump only) */
+	struct drm_i915_gem_object *batch_obj;
+
+	/** Time at which this request was emitted, in jiffies. */
+	unsigned long emitted_jiffies;
+
+	/** global list entry for this request */
+	struct list_head list;
+
+	struct drm_i915_file_private *file_priv;
+	/** file_priv list entry for this request */
+	struct list_head client_list;
+
+	/** process identifier submitting this request */
+	struct pid *pid;
+
+	/**
+	 * The ELSP only accepts two elements at a time, so we queue
+	 * context/tail pairs on a given queue (ring->execlist_queue) until the
+	 * hardware is available. The queue serves a double purpose: we also use
+	 * it to keep track of the up to 2 contexts currently in the hardware
+	 * (usually one in execution and the other queued up by the GPU): We
+	 * only remove elements from the head of the queue when the hardware
+	 * informs us that an element has been completed.
+	 *
+	 * All accesses to the queue are mediated by a spinlock
+	 * (ring->execlist_lock).
+	 */
+
+	/** Execlist link in the submission queue. */
+	struct list_head execlist_link;
+
+	/** Execlists no. of times this request has been sent to the ELSP */
+	int elsp_submitted;
+
+	/** Execlists context hardware id. */
+	unsigned ctx_hw_id;
+};
+
+static inline struct drm_i915_private *
+__request_to_i915(const struct drm_i915_gem_request *request)
+{
+	return request->i915;
+}
+
+struct drm_i915_gem_request * __must_check
+i915_gem_request_alloc(struct intel_engine_cs *engine,
+		       struct i915_gem_context *ctx);
+void i915_gem_request_free(struct kref *req_ref);
+int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
+				   struct drm_file *file);
+void i915_gem_request_retire_upto(struct drm_i915_gem_request *req);
+
+static inline uint32_t
+i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
+{
+	return req ? req->seqno : 0;
+}
+
+static inline struct intel_engine_cs *
+i915_gem_request_get_engine(struct drm_i915_gem_request *req)
+{
+	return req ? req->engine : NULL;
+}
+
+static inline struct drm_i915_gem_request *
+i915_gem_request_reference(struct drm_i915_gem_request *req)
+{
+	if (req)
+		kref_get(&req->ref);
+	return req;
+}
+
+static inline void
+i915_gem_request_unreference(struct drm_i915_gem_request *req)
+{
+	kref_put(&req->ref, i915_gem_request_free);
+}
+
+static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
+					   struct drm_i915_gem_request *src)
+{
+	if (src)
+		i915_gem_request_reference(src);
+
+	if (*pdst)
+		i915_gem_request_unreference(*pdst);
+
+	*pdst = src;
+}
+
+void __i915_add_request(struct drm_i915_gem_request *req,
+			struct drm_i915_gem_object *batch_obj,
+			bool flush_caches);
+#define i915_add_request(req) \
+	__i915_add_request(req, NULL, true)
+#define i915_add_request_no_flush(req) \
+	__i915_add_request(req, NULL, false)
+
+struct intel_rps_client;
+
+int __i915_wait_request(struct drm_i915_gem_request *req,
+			bool interruptible,
+			s64 *timeout,
+			struct intel_rps_client *rps);
+int __must_check i915_wait_request(struct drm_i915_gem_request *req);
+
+static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine);
+
+/**
+ * Returns true if seq1 is later than seq2.
+ */
+static inline bool
+i915_seqno_passed(uint32_t seq1, uint32_t seq2)
+{
+	return (int32_t)(seq1 - seq2) >= 0;
+}
+static inline bool i915_gem_request_started(const struct drm_i915_gem_request *req)
+{
+	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
+				 req->previous_seqno);
+}
+
+static inline bool i915_gem_request_completed(const struct drm_i915_gem_request *req)
+{
+	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
+				 req->seqno);
+}
+
+bool __i915_spin_request(const struct drm_i915_gem_request *request,
+			 int state, unsigned long timeout_us);
+static inline bool i915_spin_request(const struct drm_i915_gem_request *request,
+				     int state, unsigned long timeout_us)
+{
+	return (i915_gem_request_started(request) &&
+		__i915_spin_request(request, state, timeout_us));
+}
+
+#endif /* I915_GEM_REQUEST_H */
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index a066dcfcdd38..3ba5302ce19f 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1400,6 +1400,9 @@ void i915_capture_error_state(struct drm_i915_private *dev_priv,
 	struct drm_i915_error_state *error;
 	unsigned long flags;
 
+	if (READ_ONCE(dev_priv->gpu_error.first_error))
+		return;
+
 	/* Account for pipe specific data like PIPE*STAT */
 	error = kzalloc(sizeof(*error), GFP_ATOMIC);
 	if (!error) {
-- 
2.8.1


* [PATCH 13/62] drm/i915: Derive GEM requests from dma-fence
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (11 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 12/62] drm/i915: Skip capturing an error state if we already have one Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-08  9:14   ` Daniel Vetter
  2016-06-03 16:36 ` [PATCH 14/62] drm/i915: Rename request reference/unreference to get/put Chris Wilson
                   ` (50 subsequent siblings)
  63 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Jesse Barnes

dma-buf provides a generic fence class for interoperation between
drivers. Internally we use the request structure as a fence, and so with
only a little bit of interfacing we can rebase those requests on top of
dma-buf fences. This will allow us, in the future, to pass those fences
back to userspace or between drivers.

v2: The fence_context needs to be globally unique, not just unique to
this device.
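
As an illustration of the interoperability this buys us, a consumer
that only understands struct fence could then wait on an i915 request
without knowing anything about i915. A minimal sketch, not part of
this patch, assuming the fence has been handed out through some
sharing mechanism:

	struct fence *f = fence_get(&req->fence);

	if (!fence_is_signaled(f))
		fence_wait(f, true); /* interruptible, routed to i915_fence_wait() */
	fence_put(f);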

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_debugfs.c        |   2 +-
 drivers/gpu/drm/i915/i915_gem_request.c    | 116 ++++++++++++++++++++++++++---
 drivers/gpu/drm/i915/i915_gem_request.h    |  33 ++++----
 drivers/gpu/drm/i915/i915_gpu_error.c      |   2 +-
 drivers/gpu/drm/i915/i915_guc_submission.c |   4 +-
 drivers/gpu/drm/i915/i915_trace.h          |  10 +--
 drivers/gpu/drm/i915/intel_breadcrumbs.c   |   7 +-
 drivers/gpu/drm/i915/intel_lrc.c           |   3 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c    |  11 +--
 drivers/gpu/drm/i915/intel_ringbuffer.h    |   1 +
 10 files changed, 143 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 8f576b443ff6..8e37315443f3 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -768,7 +768,7 @@ static int i915_gem_request_info(struct seq_file *m, void *data)
 			if (req->pid)
 				task = pid_task(req->pid, PIDTYPE_PID);
 			seq_printf(m, "    %x @ %d: %s [%d]\n",
-				   req->seqno,
+				   req->fence.seqno,
 				   (int) (jiffies - req->emitted_jiffies),
 				   task ? task->comm : "<unknown>",
 				   task ? task->pid : -1);
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 34b2f151cdfc..512b15153ac6 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -24,6 +24,98 @@
 
 #include "i915_drv.h"
 
+static inline struct drm_i915_gem_request *
+to_i915_request(struct fence *fence)
+{
+	return container_of(fence, struct drm_i915_gem_request, fence);
+}
+
+static const char *i915_fence_get_driver_name(struct fence *fence)
+{
+	return "i915";
+}
+
+static const char *i915_fence_get_timeline_name(struct fence *fence)
+{
+	/* Timelines are bound by eviction to a VM. However, since
+	 * we only have a global seqno at the moment, we only have
+	 * a single timeline. Note that each timeline will have
+	 * multiple execution contexts (fence contexts) as we allow
+	 * engines within a single timeline to execute in parallel.
+	 */
+	return "global";
+}
+
+static bool i915_fence_signaled(struct fence *fence)
+{
+	return i915_gem_request_completed(to_i915_request(fence));
+}
+
+static bool i915_fence_enable_signaling(struct fence *fence)
+{
+	if (i915_fence_signaled(fence))
+		return false;
+
+	return intel_engine_enable_signaling(to_i915_request(fence)) == 0;
+}
+
+static signed long i915_fence_wait(struct fence *fence,
+				   bool interruptible,
+				   signed long timeout_jiffies)
+{
+	s64 timeout_ns, *timeout;
+	int ret;
+
+	if (timeout_jiffies != MAX_SCHEDULE_TIMEOUT) {
+		timeout_ns = jiffies_to_nsecs(timeout_jiffies);
+		timeout = &timeout_ns;
+	} else
+		timeout = NULL;
+
+	ret = __i915_wait_request(to_i915_request(fence),
+				  interruptible, timeout,
+				  NULL);
+	if (ret == -ETIME)
+		return 0;
+
+	if (ret < 0)
+		return ret;
+
+	if (timeout_jiffies != MAX_SCHEDULE_TIMEOUT)
+		timeout_jiffies = nsecs_to_jiffies(timeout_ns);
+
+	return timeout_jiffies;
+}
+
+static void i915_fence_value_str(struct fence *fence, char *str, int size)
+{
+	snprintf(str, size, "%u", fence->seqno);
+}
+
+static void i915_fence_timeline_value_str(struct fence *fence, char *str,
+					  int size)
+{
+	snprintf(str, size, "%u",
+		 intel_engine_get_seqno(to_i915_request(fence)->engine));
+}
+
+static void i915_fence_release(struct fence *fence)
+{
+	struct drm_i915_gem_request *req = to_i915_request(fence);
+	kmem_cache_free(req->i915->requests, req);
+}
+
+static const struct fence_ops i915_fence_ops = {
+	.get_driver_name = i915_fence_get_driver_name,
+	.get_timeline_name = i915_fence_get_timeline_name,
+	.enable_signaling = i915_fence_enable_signaling,
+	.signaled = i915_fence_signaled,
+	.wait = i915_fence_wait,
+	.release = i915_fence_release,
+	.fence_value_str = i915_fence_value_str,
+	.timeline_value_str = i915_fence_timeline_value_str,
+};
+
 static int i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
 {
 	if (__i915_terminally_wedged(reset_counter))
@@ -117,6 +209,7 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
 	struct drm_i915_private *dev_priv = engine->i915;
 	unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
 	struct drm_i915_gem_request *req;
+	u32 seqno;
 	int ret;
 
 	if (!req_out)
@@ -136,11 +229,17 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
 	if (req == NULL)
 		return -ENOMEM;
 
-	ret = i915_gem_get_seqno(dev_priv, &req->seqno);
+	ret = i915_gem_get_seqno(dev_priv, &seqno);
 	if (ret)
 		goto err;
 
-	kref_init(&req->ref);
+	spin_lock_init(&req->lock);
+	fence_init(&req->fence,
+		   &i915_fence_ops,
+		   &req->lock,
+		   engine->fence_context,
+		   seqno);
+
 	req->i915 = dev_priv;
 	req->engine = engine;
 	req->reset_counter = reset_counter;
@@ -376,7 +475,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 	 */
 	request->emitted_jiffies = jiffies;
 	request->previous_seqno = engine->last_submitted_seqno;
-	smp_store_mb(engine->last_submitted_seqno, request->seqno);
+	smp_store_mb(engine->last_submitted_seqno, request->fence.seqno);
 	list_add_tail(&request->list, &engine->request_list);
 
 	/* Record the position of the start of the request so that
@@ -543,7 +642,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
 	if (i915_spin_request(req, state, 5))
 		goto complete;
 
-	intel_wait_init(&wait, req->seqno);
+	intel_wait_init(&wait, req->fence.seqno);
 	set_current_state(state);
 	if (intel_engine_add_wait(req->engine, &wait))
 		/* In order to check that we haven't missed the interrupt
@@ -609,7 +708,7 @@ complete:
 			*timeout = 0;
 	}
 
-	if (rps && req->seqno == req->engine->last_submitted_seqno) {
+	if (rps && req->fence.seqno == req->engine->last_submitted_seqno) {
 		/* The GPU is now idle and this client has stalled.
 		 * Since no other client has submitted a request in the
 		 * meantime, assume that this client is the only one
@@ -650,10 +749,3 @@ int i915_wait_request(struct drm_i915_gem_request *req)
 
 	return 0;
 }
-
-void i915_gem_request_free(struct kref *req_ref)
-{
-	struct drm_i915_gem_request *req =
-	       	container_of(req_ref, typeof(*req), ref);
-	kmem_cache_free(req->i915->requests, req);
-}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 166e0733d2d8..248aec2c09b7 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -25,6 +25,8 @@
 #ifndef I915_GEM_REQUEST_H
 #define I915_GEM_REQUEST_H
 
+#include <linux/fence.h>
+
 /**
  * Request queue structure.
  *
@@ -36,11 +38,11 @@
  * emission time to be associated with the request for tracking how far ahead
  * of the GPU the submission is.
  *
- * The requests are reference counted, so upon creation they should have an
- * initial reference taken using kref_init
+ * The requests are reference counted.
  */
 struct drm_i915_gem_request {
-	struct kref ref;
+	struct fence fence;
+	spinlock_t lock;
 
 	/** On Which ring this request was generated */
 	struct drm_i915_private *i915;
@@ -68,12 +70,6 @@ struct drm_i915_gem_request {
 	 */
 	u32 previous_seqno;
 
-	/** GEM sequence number associated with this request,
-	 * when the HWS breadcrumb is equal or greater than this the GPU
-	 * has finished processing this request.
-	 */
-	u32 seqno;
-
 	/** Position in the ringbuffer of the start of the request */
 	u32 head;
 
@@ -152,7 +148,6 @@ __request_to_i915(const struct drm_i915_gem_request *request)
 struct drm_i915_gem_request * __must_check
 i915_gem_request_alloc(struct intel_engine_cs *engine,
 		       struct i915_gem_context *ctx);
-void i915_gem_request_free(struct kref *req_ref);
 int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
 				   struct drm_file *file);
 void i915_gem_request_retire_upto(struct drm_i915_gem_request *req);
@@ -160,7 +155,7 @@ void i915_gem_request_retire_upto(struct drm_i915_gem_request *req);
 static inline uint32_t
 i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
 {
-	return req ? req->seqno : 0;
+	return req ? req->fence.seqno : 0;
 }
 
 static inline struct intel_engine_cs *
@@ -170,17 +165,23 @@ i915_gem_request_get_engine(struct drm_i915_gem_request *req)
 }
 
 static inline struct drm_i915_gem_request *
+to_request(struct fence *fence)
+{
+	/* We assume that NULL fence/request are interoperable */
+	BUILD_BUG_ON(offsetof(struct drm_i915_gem_request, fence) != 0);
+	return container_of(fence, struct drm_i915_gem_request, fence);
+}
+
+static inline struct drm_i915_gem_request *
 i915_gem_request_reference(struct drm_i915_gem_request *req)
 {
-	if (req)
-		kref_get(&req->ref);
-	return req;
+	return to_request(fence_get(&req->fence));
 }
 
 static inline void
 i915_gem_request_unreference(struct drm_i915_gem_request *req)
 {
-	kref_put(&req->ref, i915_gem_request_free);
+	fence_put(&req->fence);
 }
 
 static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
@@ -230,7 +231,7 @@ static inline bool i915_gem_request_started(const struct drm_i915_gem_request *r
 static inline bool i915_gem_request_completed(const struct drm_i915_gem_request *req)
 {
 	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
-				 req->seqno);
+				 req->fence.seqno);
 }
 
 bool __i915_spin_request(const struct drm_i915_gem_request *request,
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 3ba5302ce19f..5332bd32c555 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1181,7 +1181,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 			}
 
 			erq = &error->ring[i].requests[count++];
-			erq->seqno = request->seqno;
+			erq->seqno = request->fence.seqno;
 			erq->jiffies = request->emitted_jiffies;
 			erq->tail = request->postfix;
 		}
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index ac72451c571c..629111d42ce0 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -538,7 +538,7 @@ static void guc_add_workqueue_item(struct i915_guc_client *gc,
 							     rq->engine);
 
 	wqi->ring_tail = tail << WQ_RING_TAIL_SHIFT;
-	wqi->fence_id = rq->seqno;
+	wqi->fence_id = rq->fence.seqno;
 
 	kunmap_atomic(base);
 }
@@ -578,7 +578,7 @@ int i915_guc_submit(struct drm_i915_gem_request *rq)
 		client->b_fail += 1;
 
 	guc->submissions[engine_id] += 1;
-	guc->last_seqno[engine_id] = rq->seqno;
+	guc->last_seqno[engine_id] = rq->fence.seqno;
 
 	return b_ret;
 }
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index f59cf07184ae..0296a77b586a 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -465,7 +465,7 @@ TRACE_EVENT(i915_gem_ring_sync_to,
 			   __entry->dev = from->i915->dev->primary->index;
 			   __entry->sync_from = from->id;
 			   __entry->sync_to = to_req->engine->id;
-			   __entry->seqno = i915_gem_request_get_seqno(req);
+			   __entry->seqno = req->fence.seqno;
 			   ),
 
 	    TP_printk("dev=%u, sync-from=%u, sync-to=%u, seqno=%u",
@@ -488,9 +488,9 @@ TRACE_EVENT(i915_gem_ring_dispatch,
 	    TP_fast_assign(
 			   __entry->dev = req->i915->dev->primary->index;
 			   __entry->ring = req->engine->id;
-			   __entry->seqno = req->seqno;
+			   __entry->seqno = req->fence.seqno;
 			   __entry->flags = flags;
-			   intel_engine_enable_signaling(req);
+			   fence_enable_sw_signaling(&req->fence);
 			   ),
 
 	    TP_printk("dev=%u, ring=%u, seqno=%u, flags=%x",
@@ -533,7 +533,7 @@ DECLARE_EVENT_CLASS(i915_gem_request,
 	    TP_fast_assign(
 			   __entry->dev = req->i915->dev->primary->index;
 			   __entry->ring = req->engine->id;
-			   __entry->seqno = req->seqno;
+			   __entry->seqno = req->fence.seqno;
 			   ),
 
 	    TP_printk("dev=%u, ring=%u, seqno=%u",
@@ -595,7 +595,7 @@ TRACE_EVENT(i915_gem_request_wait_begin,
 	    TP_fast_assign(
 			   __entry->dev = req->i915->dev->primary->index;
 			   __entry->ring = req->engine->id;
-			   __entry->seqno = req->seqno;
+			   __entry->seqno = req->fence.seqno;
 			   __entry->blocking =
 				     mutex_is_locked(&req->i915->dev->struct_mutex);
 			   ),
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index dc65a007fa20..05f62f706897 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -396,6 +396,7 @@ static int intel_breadcrumbs_signaler(void *arg)
 			 */
 			intel_engine_remove_wait(engine,
 						 &request->signaling.wait);
+			fence_signal(&request->fence);
 
 			/* Find the next oldest signal. Note that as we have
 			 * not been holding the lock, another client may
@@ -444,7 +445,7 @@ int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
 	}
 
 	request->signaling.wait.task = b->signaler;
-	request->signaling.wait.seqno = request->seqno;
+	request->signaling.wait.seqno = request->fence.seqno;
 	i915_gem_request_reference(request);
 
 	/* First add ourselves into the list of waiters, but register our
@@ -466,8 +467,8 @@ int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
 	p = &b->signals.rb_node;
 	while (*p) {
 		parent = *p;
-		if (i915_seqno_passed(request->seqno,
-				      to_signal(parent)->seqno)) {
+		if (i915_seqno_passed(request->fence.seqno,
+				      to_signal(parent)->fence.seqno)) {
 			p = &parent->rb_right;
 			first = false;
 		} else
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 0742a849acce..c7a9ebdb0811 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1731,7 +1731,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 				intel_hws_seqno_address(request->engine) |
 				MI_FLUSH_DW_USE_GTT);
 	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_emit(ringbuf, request->seqno);
+	intel_logical_ring_emit(ringbuf, request->fence.seqno);
 	intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
 	intel_logical_ring_emit(ringbuf, MI_NOOP);
 	return intel_logical_ring_advance_and_submit(request);
@@ -1964,6 +1964,7 @@ logical_ring_setup(struct drm_device *dev, enum intel_engine_id id)
 	engine->exec_id = info->exec_id;
 	engine->guc_id = info->guc_id;
 	engine->mmio_base = info->mmio_base;
+	engine->fence_context = fence_context_alloc(1);
 
 	engine->i915 = dev_priv;
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 327ad7fdf118..c3d6345aa2c1 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1266,7 +1266,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
 					   PIPE_CONTROL_CS_STALL);
 		intel_ring_emit(signaller, lower_32_bits(gtt_offset));
 		intel_ring_emit(signaller, upper_32_bits(gtt_offset));
-		intel_ring_emit(signaller, signaller_req->seqno);
+		intel_ring_emit(signaller, signaller_req->fence.seqno);
 		intel_ring_emit(signaller, 0);
 		intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
 					   MI_SEMAPHORE_TARGET(waiter->hw_id));
@@ -1304,7 +1304,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 		intel_ring_emit(signaller, lower_32_bits(gtt_offset) |
 					   MI_FLUSH_DW_USE_GTT);
 		intel_ring_emit(signaller, upper_32_bits(gtt_offset));
-		intel_ring_emit(signaller, signaller_req->seqno);
+		intel_ring_emit(signaller, signaller_req->fence.seqno);
 		intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
 					   MI_SEMAPHORE_TARGET(waiter->hw_id));
 		intel_ring_emit(signaller, 0);
@@ -1337,7 +1337,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 		if (i915_mmio_reg_valid(mbox_reg)) {
 			intel_ring_emit(signaller, MI_LOAD_REGISTER_IMM(1));
 			intel_ring_emit_reg(signaller, mbox_reg);
-			intel_ring_emit(signaller, signaller_req->seqno);
+			intel_ring_emit(signaller, signaller_req->fence.seqno);
 		}
 	}
 
@@ -1373,7 +1373,7 @@ gen6_add_request(struct drm_i915_gem_request *req)
 	intel_ring_emit(engine, MI_STORE_DWORD_INDEX);
 	intel_ring_emit(engine,
 			I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
-	intel_ring_emit(engine, req->seqno);
+	intel_ring_emit(engine, req->fence.seqno);
 	intel_ring_emit(engine, MI_USER_INTERRUPT);
 	__intel_ring_advance(engine);
 
@@ -1623,7 +1623,7 @@ i9xx_add_request(struct drm_i915_gem_request *req)
 	intel_ring_emit(engine, MI_STORE_DWORD_INDEX);
 	intel_ring_emit(engine,
 		       	I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
-	intel_ring_emit(engine, req->seqno);
+	intel_ring_emit(engine, req->fence.seqno);
 	intel_ring_emit(engine, MI_USER_INTERRUPT);
 	__intel_ring_advance(engine);
 
@@ -2092,6 +2092,7 @@ static int intel_init_ring_buffer(struct drm_device *dev,
 	WARN_ON(engine->buffer);
 
 	engine->i915 = dev_priv;
+	engine->fence_context = fence_context_alloc(1);
 	INIT_LIST_HEAD(&engine->active_list);
 	INIT_LIST_HEAD(&engine->request_list);
 	INIT_LIST_HEAD(&engine->execlist_queue);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 6017367e94fb..b041fb6a6d01 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -158,6 +158,7 @@ struct intel_engine_cs {
 	unsigned int exec_id;
 	unsigned int hw_id;
 	unsigned int guc_id; /* XXX same as hw_id? */
+	unsigned fence_context;
 	u32		mmio_base;
 	struct intel_ringbuffer *buffer;
 	struct list_head buffers;
-- 
2.8.1


* [PATCH 14/62] drm/i915: Rename request reference/unreference to get/put
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (12 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 13/62] drm/i915: Derive GEM requests from dma-fence Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-08  9:15   ` Daniel Vetter
  2016-06-03 16:36 ` [PATCH 15/62] drm/i915: Rename i915_gem_context_reference/unreference() Chris Wilson
                   ` (49 subsequent siblings)
  63 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Now that we derive requests from struct fence, swap over to its
nomenclature for references. It's shorter and more idiomatic across the
kernel.

s/i915_gem_request_reference/i915_gem_request_get/
s/i915_gem_request_unreference/i915_gem_request_put/
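
Both wrappers return their argument, so a reference can be taken at
the point of assignment; the intel_pm.c hunk below uses exactly this:

	/* before: take the reference, then assign */
	i915_gem_request_reference(req);
	boost->req = req;

	/* after: one expression */
	boost->req = i915_gem_request_get(req);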

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c          | 14 +++++++-------
 drivers/gpu/drm/i915/i915_gem_request.c  |  2 +-
 drivers/gpu/drm/i915/i915_gem_request.h  |  8 ++++----
 drivers/gpu/drm/i915/i915_gem_userptr.c  |  4 ++--
 drivers/gpu/drm/i915/intel_breadcrumbs.c |  4 ++--
 drivers/gpu/drm/i915/intel_display.c     |  5 ++---
 drivers/gpu/drm/i915/intel_lrc.c         | 10 +++++-----
 drivers/gpu/drm/i915/intel_pm.c          |  5 ++---
 8 files changed, 25 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 95782cf85dcc..5f232fb1a2a4 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1188,7 +1188,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 		if (req == NULL)
 			return 0;
 
-		requests[n++] = i915_gem_request_reference(req);
+		requests[n++] = i915_gem_request_get(req);
 	} else {
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
 			struct drm_i915_gem_request *req;
@@ -1197,7 +1197,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 			if (req == NULL)
 				continue;
 
-			requests[n++] = i915_gem_request_reference(req);
+			requests[n++] = i915_gem_request_get(req);
 		}
 	}
 
@@ -1210,7 +1210,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 	for (i = 0; i < n; i++) {
 		if (ret == 0)
 			i915_gem_object_retire_request(obj, requests[i]);
-		i915_gem_request_unreference(requests[i]);
+		i915_gem_request_put(requests[i]);
 	}
 
 	return ret;
@@ -2532,7 +2532,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		if (obj->last_read_req[i] == NULL)
 			continue;
 
-		req[n++] = i915_gem_request_reference(obj->last_read_req[i]);
+		req[n++] = i915_gem_request_get(obj->last_read_req[i]);
 	}
 
 	mutex_unlock(&dev->struct_mutex);
@@ -2542,7 +2542,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 			ret = __i915_wait_request(req[i], true,
 						  args->timeout_ns > 0 ? &args->timeout_ns : NULL,
 						  to_rps_client(file));
-		i915_gem_request_unreference(req[i]);
+		i915_gem_request_put(req[i]);
 	}
 	return ret;
 
@@ -3548,14 +3548,14 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
 		target = request;
 	}
 	if (target)
-		i915_gem_request_reference(target);
+		i915_gem_request_get(target);
 	spin_unlock(&file_priv->mm.lock);
 
 	if (target == NULL)
 		return 0;
 
 	ret = __i915_wait_request(target, true, NULL, NULL);
-	i915_gem_request_unreference(target);
+	i915_gem_request_put(target);
 
 	return ret;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 512b15153ac6..2ecaf9fa936a 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -365,7 +365,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
 	}
 
 	i915_gem_context_unreference(request->ctx);
-	i915_gem_request_unreference(request);
+	i915_gem_request_put(request);
 }
 
 void i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 248aec2c09b7..b1bc96c9e31d 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -173,13 +173,13 @@ to_request(struct fence *fence)
 }
 
 static inline struct drm_i915_gem_request *
-i915_gem_request_reference(struct drm_i915_gem_request *req)
+i915_gem_request_get(struct drm_i915_gem_request *req)
 {
 	return to_request(fence_get(&req->fence));
 }
 
 static inline void
-i915_gem_request_unreference(struct drm_i915_gem_request *req)
+i915_gem_request_put(struct drm_i915_gem_request *req)
 {
 	fence_put(&req->fence);
 }
@@ -188,10 +188,10 @@ static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
 					   struct drm_i915_gem_request *src)
 {
 	if (src)
-		i915_gem_request_reference(src);
+		i915_gem_request_get(src);
 
 	if (*pdst)
-		i915_gem_request_unreference(*pdst);
+		i915_gem_request_put(*pdst);
 
 	*pdst = src;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 2314c88323e3..ba16e044fac6 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -78,7 +78,7 @@ static void wait_rendering(struct drm_i915_gem_object *obj)
 		if (req == NULL)
 			continue;
 
-		requests[n++] = i915_gem_request_reference(req);
+		requests[n++] = i915_gem_request_get(req);
 	}
 
 	mutex_unlock(&dev->struct_mutex);
@@ -89,7 +89,7 @@ static void wait_rendering(struct drm_i915_gem_object *obj)
 	mutex_lock(&dev->struct_mutex);
 
 	for (i = 0; i < n; i++)
-		i915_gem_request_unreference(requests[i]);
+		i915_gem_request_put(requests[i]);
 }
 
 static void cancel_userptr(struct work_struct *work)
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index 05f62f706897..1d60149833e6 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -413,7 +413,7 @@ static int intel_breadcrumbs_signaler(void *arg)
 			rb_erase(&request->signaling.node, &b->signals);
 			spin_unlock(&b->lock);
 
-			i915_gem_request_unreference(request);
+			i915_gem_request_put(request);
 		} else {
 			if (kthread_should_stop())
 				break;
@@ -446,7 +446,7 @@ int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
 
 	request->signaling.wait.task = b->signaler;
 	request->signaling.wait.seqno = request->fence.seqno;
-	i915_gem_request_reference(request);
+	i915_gem_request_get(request);
 
 	/* First add ourselves into the list of waiters, but register our
 	 * bottom-half as the signaller thread. As per usual, only the oldest
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 14e41fdd8112..9b257126fa22 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11005,11 +11005,10 @@ static void intel_unpin_work_fn(struct work_struct *__work)
 	mutex_lock(&dev->struct_mutex);
 	intel_unpin_fb_obj(work->old_fb, primary->state->rotation);
 	drm_gem_object_unreference(&work->pending_flip_obj->base);
-
-	if (work->flip_queued_req)
-		i915_gem_request_assign(&work->flip_queued_req, NULL);
 	mutex_unlock(&dev->struct_mutex);
 
+	i915_gem_request_put(work->flip_queued_req);
+
 	intel_frontbuffer_flip_complete(dev, to_intel_plane(primary)->frontbuffer_bit);
 	intel_fbc_post_update(crtc);
 	drm_framebuffer_unreference(work->old_fb);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index c7a9ebdb0811..a25177016fb3 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -438,7 +438,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *engine)
 			 * will update tail past first request's workload */
 			cursor->elsp_submitted = req0->elsp_submitted;
 			list_del(&req0->execlist_link);
-			i915_gem_request_unreference(req0);
+			i915_gem_request_put(req0);
 			req0 = cursor;
 		} else {
 			req1 = cursor;
@@ -489,7 +489,7 @@ execlists_check_remove_request(struct intel_engine_cs *engine, u32 ctx_id)
 		return 0;
 
 	list_del(&head_req->execlist_link);
-	i915_gem_request_unreference(head_req);
+	i915_gem_request_put(head_req);
 
 	return 1;
 }
@@ -610,11 +610,11 @@ static void execlists_context_queue(struct drm_i915_gem_request *request)
 			WARN(tail_req->elsp_submitted != 0,
 				"More than 2 already-submitted reqs queued\n");
 			list_del(&tail_req->execlist_link);
-			i915_gem_request_unreference(tail_req);
+			i915_gem_request_put(tail_req);
 		}
 	}
 
-	i915_gem_request_reference(request);
+	i915_gem_request_get(request);
 	list_add_tail(&request->execlist_link, &engine->execlist_queue);
 	request->ctx_hw_id = request->ctx->hw_id;
 	if (num_elements == 0)
@@ -888,7 +888,7 @@ void intel_execlists_cancel_requests(struct intel_engine_cs *engine)
 
 	list_for_each_entry_safe(req, tmp, &cancel_list, execlist_link) {
 		list_del(&req->execlist_link);
-		i915_gem_request_unreference(req);
+		i915_gem_request_put(req);
 	}
 }
 
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 923ec6884a5e..ee247063c1b2 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -7696,7 +7696,7 @@ static void __intel_rps_boost_work(struct work_struct *work)
 	if (!i915_gem_request_completed(req))
 		gen6_rps_boost(req->i915, NULL, req->emitted_jiffies);
 
-	i915_gem_request_unreference(req);
+	i915_gem_request_put(req);
 	kfree(boost);
 }
 
@@ -7714,8 +7714,7 @@ void intel_queue_rps_boost_for_request(struct drm_i915_gem_request *req)
 	if (boost == NULL)
 		return;
 
-	i915_gem_request_reference(req);
-	boost->req = req;
+	boost->req = i915_gem_request_get(req);
 
 	INIT_WORK(&boost->work, __intel_rps_boost_work);
 	queue_work(req->i915->wq, &boost->work);
-- 
2.8.1


* [PATCH 15/62] drm/i915: Rename i915_gem_context_reference/unreference()
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (13 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 14/62] drm/i915: Rename request reference/unreference to get/put Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-06 12:12   ` Joonas Lahtinen
  2016-06-03 16:36 ` [PATCH 16/62] drm/i915: Wrap drm_gem_object_lookup in i915_gem_object_lookup Chris Wilson
                   ` (48 subsequent siblings)
  63 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

As these are wrappers around kref_get()/kref_put(), it is preferable to
follow that naming convention and use the same get/put verbs in our
wrapper names for manipulating a reference to the context.
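
As with the request wrappers, i915_gem_context_get() returns its
argument, so the reference can be taken inline, e.g. in the
do_rcs_switch() hunk below:

	engine->last_context = i915_gem_context_get(to);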

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h            |  6 ++++--
 drivers/gpu/drm/i915/i915_gem_context.c    | 22 ++++++++++------------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  6 +++---
 drivers/gpu/drm/i915/i915_gem_request.c    |  7 +++----
 drivers/gpu/drm/i915/intel_lrc.c           |  4 ++--
 drivers/gpu/drm/i915/intel_ringbuffer.c    |  4 ++--
 6 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 939cd45043c7..48d89b181246 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3247,12 +3247,14 @@ i915_gem_context_lookup(struct drm_i915_file_private *file_priv, u32 id)
 	return ctx;
 }
 
-static inline void i915_gem_context_reference(struct i915_gem_context *ctx)
+static inline struct i915_gem_context *
+i915_gem_context_get(struct i915_gem_context *ctx)
 {
 	kref_get(&ctx->ref);
+	return ctx;
 }
 
-static inline void i915_gem_context_unreference(struct i915_gem_context *ctx)
+static inline void i915_gem_context_put(struct i915_gem_context *ctx)
 {
 	lockdep_assert_held(&ctx->i915->drm.struct_mutex);
 	kref_put(&ctx->ref, i915_gem_context_free);
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index d01b3893eac0..b62862e31642 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -301,7 +301,7 @@ __create_hw_context(struct drm_device *dev,
 	return ctx;
 
 err_out:
-	i915_gem_context_unreference(ctx);
+	i915_gem_context_put(ctx);
 	return ERR_PTR(ret);
 }
 
@@ -329,7 +329,7 @@ i915_gem_create_context(struct drm_device *dev,
 			DRM_DEBUG_DRIVER("PPGTT setup failed (%ld)\n",
 					 PTR_ERR(ppgtt));
 			idr_remove(&file_priv->context_idr, ctx->user_handle);
-			i915_gem_context_unreference(ctx);
+			i915_gem_context_put(ctx);
 			return ERR_CAST(ppgtt);
 		}
 
@@ -352,7 +352,7 @@ static void i915_gem_context_unpin(struct i915_gem_context *ctx,
 		if (ce->state)
 			i915_gem_object_ggtt_unpin(ce->state);
 
-		i915_gem_context_unreference(ctx);
+		i915_gem_context_put(ctx);
 	}
 }
 
@@ -466,7 +466,7 @@ void i915_gem_context_fini(struct drm_device *dev)
 
 	lockdep_assert_held(&dev->struct_mutex);
 
-	i915_gem_context_unreference(dctx);
+	i915_gem_context_put(dctx);
 	dev_priv->kernel_context = NULL;
 
 	ida_destroy(&dev_priv->context_hw_ida);
@@ -477,7 +477,7 @@ static int context_idr_cleanup(int id, void *p, void *data)
 	struct i915_gem_context *ctx = p;
 
 	ctx->file_priv = ERR_PTR(-EBADF);
-	i915_gem_context_unreference(ctx);
+	i915_gem_context_put(ctx);
 	return 0;
 }
 
@@ -789,10 +789,9 @@ static int do_rcs_switch(struct drm_i915_gem_request *req)
 
 		/* obj is kept alive until the next request by its active ref */
 		i915_gem_object_ggtt_unpin(from->engine[RCS].state);
-		i915_gem_context_unreference(from);
+		i915_gem_context_put(from);
 	}
-	i915_gem_context_reference(to);
-	engine->last_context = to;
+	engine->last_context = i915_gem_context_get(to);
 
 	/* GEN8 does *not* require an explicit reload if the PDPs have been
 	 * setup, and we do not wish to move them.
@@ -876,10 +875,9 @@ int i915_switch_context(struct drm_i915_gem_request *req)
 		}
 
 		if (to != engine->last_context) {
-			i915_gem_context_reference(to);
 			if (engine->last_context)
-				i915_gem_context_unreference(engine->last_context);
-			engine->last_context = to;
+				i915_gem_context_put(engine->last_context);
+			engine->last_context = i915_gem_context_get(to);
 		}
 
 		return 0;
@@ -947,7 +945,7 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
 	}
 
 	idr_remove(&file_priv->context_idr, ctx->user_handle);
-	i915_gem_context_unreference(ctx);
+	i915_gem_context_put(ctx);
 	mutex_unlock(&dev->struct_mutex);
 
 	DRM_DEBUG_DRIVER("HW context %d destroyed\n", args->ctx_id);
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index d3297dab0298..7f441e74c903 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1496,7 +1496,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 		goto pre_mutex_err;
 	}
 
-	i915_gem_context_reference(ctx);
+	i915_gem_context_get(ctx);
 
 	if (ctx->ppgtt)
 		vm = &ctx->ppgtt->base;
@@ -1507,7 +1507,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 
 	eb = eb_create(args);
 	if (eb == NULL) {
-		i915_gem_context_unreference(ctx);
+		i915_gem_context_put(ctx);
 		mutex_unlock(&dev->struct_mutex);
 		ret = -ENOMEM;
 		goto pre_mutex_err;
@@ -1651,7 +1651,7 @@ err_batch_unpin:
 
 err:
 	/* the request owns the ref now */
-	i915_gem_context_unreference(ctx);
+	i915_gem_context_put(ctx);
 	eb_destroy(eb);
 
 	mutex_unlock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 2ecaf9fa936a..987a43f1aac8 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -243,8 +243,7 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
 	req->i915 = dev_priv;
 	req->engine = engine;
 	req->reset_counter = reset_counter;
-	req->ctx = ctx;
-	i915_gem_context_reference(ctx);
+	req->ctx = i915_gem_context_get(ctx);
 
 	/*
 	 * Reserve space in the ring buffer for all the commands required to
@@ -266,7 +265,7 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
 	return 0;
 
 err_ctx:
-	i915_gem_context_unreference(ctx);
+	i915_gem_context_put(ctx);
 err:
 	kmem_cache_free(dev_priv->requests, req);
 	return ret;
@@ -364,7 +363,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
 					       request->engine);
 	}
 
-	i915_gem_context_unreference(request->ctx);
+	i915_gem_context_put(request->ctx);
 	i915_gem_request_put(request);
 }
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index a25177016fb3..d55aa9ca2877 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -961,7 +961,6 @@ static int intel_lr_context_pin(struct i915_gem_context *ctx,
 	if (ret)
 		goto unpin_map;
 
-	i915_gem_context_reference(ctx);
 	ce->lrc_vma = i915_gem_obj_to_ggtt(ce->state);
 	intel_lr_context_descriptor_update(ctx, engine);
 
@@ -973,6 +972,7 @@ static int intel_lr_context_pin(struct i915_gem_context *ctx,
 	if (i915.enable_guc_submission)
 		I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);
 
+	i915_gem_context_get(ctx);
 	return 0;
 
 unpin_map:
@@ -1004,7 +1004,7 @@ void intel_lr_context_unpin(struct i915_gem_context *ctx,
 	ce->lrc_desc = 0;
 	ce->lrc_reg_state = NULL;
 
-	i915_gem_context_unreference(ctx);
+	i915_gem_context_put(ctx);
 }
 
 static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index c3d6345aa2c1..e6a2e4973a01 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2058,7 +2058,7 @@ static int intel_ring_context_pin(struct i915_gem_context *ctx,
 	if (ctx == ctx->i915->kernel_context)
 		ce->initialised = true;
 
-	i915_gem_context_reference(ctx);
+	i915_gem_context_get(ctx);
 	return 0;
 
 error:
@@ -2079,7 +2079,7 @@ static void intel_ring_context_unpin(struct i915_gem_context *ctx,
 	if (ce->state)
 		i915_gem_object_ggtt_unpin(ce->state);
 
-	i915_gem_context_unreference(ctx);
+	i915_gem_context_put(ctx);
 }
 
 static int intel_init_ring_buffer(struct drm_device *dev,
-- 
2.8.1


* [PATCH 16/62] drm/i915: Wrap drm_gem_object_lookup in i915_gem_object_lookup
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (14 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 15/62] drm/i915: Rename i915_gem_context_reference/unreference() Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 17/62] drm/i915: Wrap drm_gem_object_reference in i915_gem_object_get Chris Wilson
                   ` (47 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

For symmetry with the forthcoming i915_gem_object_get() and
i915_gem_object_put().
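
The wrapper folds in the to_intel_bo() conversion, so callers test the
result directly against NULL (relying on the BUILD_BUG_ON() below that
keeps the GEM base object at offset 0, making to_intel_bo(NULL) == NULL):

	obj = i915_gem_object_lookup(file, args->handle);
	if (!obj)
		return -ENOENT;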

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h        | 18 ++++++++++-
 drivers/gpu/drm/i915/i915_gem.c        | 56 +++++++++++++++++-----------------
 drivers/gpu/drm/i915/i915_gem_tiling.c |  8 ++---
 drivers/gpu/drm/i915/intel_display.c   |  4 +--
 drivers/gpu/drm/i915/intel_overlay.c   |  5 ++-
 5 files changed, 53 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 48d89b181246..27096004db7c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2268,7 +2268,23 @@ struct drm_i915_gem_object {
 		} userptr;
 	};
 };
-#define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)
+
+static inline struct drm_i915_gem_object *
+to_intel_bo(struct drm_gem_object *gem)
+{
+	/* Assert that to_intel_bo(NULL) == NULL */
+	BUILD_BUG_ON(offsetof(struct drm_i915_gem_object, base));
+
+	return container_of(gem, struct drm_i915_gem_object, base);
+}
+
+static inline struct drm_i915_gem_object *
+i915_gem_object_lookup(struct drm_file *file, u32 handle)
+{
+	return to_intel_bo(drm_gem_object_lookup(file, handle));
+}
+__deprecated extern struct drm_gem_object *
+drm_gem_object_lookup(struct drm_file *file, u32 handle);
 
 /*
  * Optimised SGL iterator for GEM objects
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 5f232fb1a2a4..837b1402c798 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -695,8 +695,8 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->handle));
-	if (&obj->base == NULL) {
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj) {
 		ret = -ENOENT;
 		goto unlock;
 	}
@@ -1049,8 +1049,8 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		goto put_rpm;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->handle));
-	if (&obj->base == NULL) {
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj) {
 		ret = -ENOENT;
 		goto unlock;
 	}
@@ -1253,8 +1253,8 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->handle));
-	if (&obj->base == NULL) {
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj) {
 		ret = -ENOENT;
 		goto unlock;
 	}
@@ -1301,8 +1301,8 @@ i915_gem_sw_finish_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->handle));
-	if (&obj->base == NULL) {
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj) {
 		ret = -ENOENT;
 		goto unlock;
 	}
@@ -1339,7 +1339,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 		    struct drm_file *file)
 {
 	struct drm_i915_gem_mmap *args = data;
-	struct drm_gem_object *obj;
+	struct drm_i915_gem_object *obj;
 	unsigned long addr;
 
 	if (args->flags & ~(I915_MMAP_WC))
@@ -1348,19 +1348,19 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 	if (args->flags & I915_MMAP_WC && !boot_cpu_has(X86_FEATURE_PAT))
 		return -ENODEV;
 
-	obj = drm_gem_object_lookup(file, args->handle);
-	if (obj == NULL)
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj)
 		return -ENOENT;
 
 	/* prime objects have no backing filp to GEM mmap
 	 * pages from.
 	 */
-	if (!obj->filp) {
-		drm_gem_object_unreference_unlocked(obj);
+	if (!obj->base.filp) {
+		drm_gem_object_unreference_unlocked(&obj->base);
 		return -EINVAL;
 	}
 
-	addr = vm_mmap(obj->filp, 0, args->size,
+	addr = vm_mmap(obj->base.filp, 0, args->size,
 		       PROT_READ | PROT_WRITE, MAP_SHARED,
 		       args->offset);
 	if (args->flags & I915_MMAP_WC) {
@@ -1368,7 +1368,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 		struct vm_area_struct *vma;
 
 		if (down_write_killable(&mm->mmap_sem)) {
-			drm_gem_object_unreference_unlocked(obj);
+			drm_gem_object_unreference_unlocked(&obj->base);
 			return -EINTR;
 		}
 		vma = find_vma(mm, addr);
@@ -1379,7 +1379,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 			addr = -ENOMEM;
 		up_write(&mm->mmap_sem);
 	}
-	drm_gem_object_unreference_unlocked(obj);
+	drm_gem_object_unreference_unlocked(&obj->base);
 	if (IS_ERR((void *)addr))
 		return addr;
 
@@ -1714,8 +1714,8 @@ i915_gem_mmap_gtt(struct drm_file *file,
 	if (ret)
 		return ret;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, handle));
-	if (&obj->base == NULL) {
+	obj = i915_gem_object_lookup(file, handle);
+	if (!obj) {
 		ret = -ENOENT;
 		goto unlock;
 	}
@@ -2504,8 +2504,8 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	if (ret)
 		return ret;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->bo_handle));
-	if (&obj->base == NULL) {
+	obj = i915_gem_object_lookup(file, args->bo_handle);
+	if (!obj) {
 		mutex_unlock(&dev->struct_mutex);
 		return -ENOENT;
 	}
@@ -3301,8 +3301,8 @@ int i915_gem_get_caching_ioctl(struct drm_device *dev, void *data,
 	struct drm_i915_gem_caching *args = data;
 	struct drm_i915_gem_object *obj;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->handle));
-	if (&obj->base == NULL)
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj)
 		return -ENOENT;
 
 	switch (obj->cache_level) {
@@ -3362,8 +3362,8 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		goto rpm_put;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->handle));
-	if (&obj->base == NULL) {
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj) {
 		ret = -ENOENT;
 		goto unlock;
 	}
@@ -3729,8 +3729,8 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->handle));
-	if (&obj->base == NULL) {
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj) {
 		ret = -ENOENT;
 		goto unlock;
 	}
@@ -3794,8 +3794,8 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file_priv, args->handle));
-	if (&obj->base == NULL) {
+	obj = i915_gem_object_lookup(file_priv, args->handle);
+	if (!obj) {
 		ret = -ENOENT;
 		goto unlock;
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index a6eb5c47a49c..de2ba6bf95f1 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -166,8 +166,8 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
 	struct drm_i915_gem_object *obj;
 	int ret = 0;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->handle));
-	if (&obj->base == NULL)
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj)
 		return -ENOENT;
 
 	if (!i915_tiling_ok(dev,
@@ -297,8 +297,8 @@ i915_gem_get_tiling(struct drm_device *dev, void *data,
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj;
 
-	obj = to_intel_bo(drm_gem_object_lookup(file, args->handle));
-	if (&obj->base == NULL)
+	obj = i915_gem_object_lookup(file, args->handle);
+	if (!obj)
 		return -ENOENT;
 
 	mutex_lock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 9b257126fa22..ae35d5bfe1a9 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -15007,8 +15007,8 @@ intel_user_framebuffer_create(struct drm_device *dev,
 	struct drm_i915_gem_object *obj;
 	struct drm_mode_fb_cmd2 mode_cmd = *user_mode_cmd;
 
-	obj = to_intel_bo(drm_gem_object_lookup(filp, mode_cmd.handles[0]));
-	if (&obj->base == NULL)
+	obj = i915_gem_object_lookup(filp, mode_cmd.handles[0]);
+	if (!obj)
 		return ERR_PTR(-ENOENT);
 
 	fb = intel_framebuffer_create(dev, &mode_cmd, obj);
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index eb93f90bb74d..2dc9bde714f3 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -1121,9 +1121,8 @@ int intel_overlay_put_image_ioctl(struct drm_device *dev, void *data,
 	}
 	crtc = to_intel_crtc(drmmode_crtc);
 
-	new_bo = to_intel_bo(drm_gem_object_lookup(file_priv,
-						   put_image_rec->bo_handle));
-	if (&new_bo->base == NULL) {
+	new_bo = i915_gem_object_lookup(file_priv, put_image_rec->bo_handle);
+	if (!new_bo) {
 		ret = -ENOENT;
 		goto out_free;
 	}
-- 
2.8.1


* [PATCH 17/62] drm/i915: Wrap drm_gem_object_reference in i915_gem_object_get
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (15 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 16/62] drm/i915: Wrap drm_gem_object_lookup in i915_gem_object_lookup Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 18/62] drm/i915: Rename drm_gem_object_unreference in preparation for lockless free Chris Wilson
                   ` (46 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h            | 10 +++++++++-
 drivers/gpu/drm/i915/i915_gem.c            |  4 ++--
 drivers/gpu/drm/i915/i915_gem_dmabuf.c     |  3 +--
 drivers/gpu/drm/i915/i915_gem_evict.c      |  2 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  4 ++--
 drivers/gpu/drm/i915/i915_gem_shrinker.c   |  2 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c    |  3 +--
 drivers/gpu/drm/i915/intel_display.c       |  3 +--
 8 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 27096004db7c..1ff7a9df4209 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2284,7 +2284,15 @@ i915_gem_object_lookup(struct drm_file *file, u32 handle)
 	return to_intel_bo(drm_gem_object_lookup(file, handle));
 }
 __deprecated extern struct drm_gem_object *
-drm_gem_object_lookup(struct drm_file *file, u32 handle);
+drm_gem_object_lookup(struct drm_file *, u32);
+
+__attribute__((nonnull)) static inline struct drm_i915_gem_object *
+i915_gem_object_get(struct drm_i915_gem_object *obj)
+{
+	drm_gem_object_reference(&obj->base);
+	return obj;
+}
+__deprecated extern void drm_gem_object_reference(struct drm_gem_object *);
 
 /*
  * Optimised SGL iterator for GEM objects
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 837b1402c798..4aecdd4434d8 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -266,7 +266,7 @@ drop_pages(struct drm_i915_gem_object *obj)
 	struct i915_vma *vma, *next;
 	int ret;
 
-	drm_gem_object_reference(&obj->base);
+	i915_gem_object_get(obj);
 	list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link)
 		if (i915_vma_unbind(vma))
 			break;
@@ -2107,7 +2107,7 @@ void i915_vma_move_to_active(struct i915_vma *vma,
 
 	/* Add a reference if we're newly entering the active list. */
 	if (obj->active == 0)
-		drm_gem_object_reference(&obj->base);
+		i915_gem_object_get(obj);
 	obj->active |= intel_engine_flag(engine);
 
 	list_move_tail(&obj->engine_list[engine->id], &engine->active_list);
diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index 80bbe43a2e92..7accb99f3da3 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -278,8 +278,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 			 * Importing dmabuf exported from our own gem increases
 			 * refcount on gem itself instead of f_count of dmabuf.
 			 */
-			drm_gem_object_reference(&obj->base);
-			return &obj->base;
+			return &i915_gem_object_get(obj)->base;
 		}
 	}
 
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 3c1280ec7ff6..d5777a0750f0 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -214,7 +214,7 @@ found:
 				       exec_list);
 		if (drm_mm_scan_remove_block(&vma->node)) {
 			list_move(&vma->exec_list, &eviction_list);
-			drm_gem_object_reference(&vma->obj->base);
+			i915_gem_object_get(vma->obj);
 			continue;
 		}
 		list_del_init(&vma->exec_list);
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 7f441e74c903..590c4d3ac2e4 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -122,7 +122,7 @@ eb_lookup_vmas(struct eb_vmas *eb,
 			goto err;
 		}
 
-		drm_gem_object_reference(&obj->base);
+		i915_gem_object_get(obj);
 		list_add_tail(&obj->obj_exec_link, &objects);
 	}
 	spin_unlock(&file->table_lock);
@@ -1203,7 +1203,7 @@ i915_gem_execbuffer_parse(struct intel_engine_cs *engine,
 	vma = i915_gem_obj_to_ggtt(shadow_batch_obj);
 	vma->exec_entry = shadow_exec_entry;
 	vma->exec_entry->flags = __EXEC_OBJECT_HAS_PIN;
-	drm_gem_object_reference(&shadow_batch_obj->base);
+	i915_gem_object_get(shadow_batch_obj);
 	list_add_tail(&vma->exec_list, &eb->vmas);
 
 	shadow_batch_obj->base.pending_read_domains = I915_GEM_DOMAIN_COMMAND;
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index 1bf14544d8ad..416eaaece776 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -190,7 +190,7 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
 			if (!can_release_pages(obj))
 				continue;
 
-			drm_gem_object_reference(&obj->base);
+			i915_gem_object_get(obj);
 
 			/* For the unbound phase, this should be a no-op! */
 			list_for_each_entry_safe(vma, v,
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index ba16e044fac6..c41bf74f926e 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -622,8 +622,7 @@ __i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj,
 	obj->userptr.work = &work->work;
 	obj->userptr.workers++;
 
-	work->obj = obj;
-	drm_gem_object_reference(&obj->base);
+	work->obj = i915_gem_object_get(obj);
 
 	work->task = current;
 	get_task_struct(work->task);
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index ae35d5bfe1a9..ab168f54c046 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11697,13 +11697,12 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 
 	/* Reference the objects for the scheduled work. */
 	drm_framebuffer_reference(work->old_fb);
-	drm_gem_object_reference(&obj->base);
 
 	crtc->primary->fb = fb;
 	update_state_fb(crtc->primary);
 	intel_fbc_pre_update(intel_crtc);
 
-	work->pending_flip_obj = obj;
+	work->pending_flip_obj = i915_gem_object_get(obj);
 
 	ret = i915_mutex_lock_interruptible(dev);
 	if (ret)
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 18/62] drm/i915: Rename drm_gem_object_unreference in preparation for lockless free
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (16 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 17/62] drm/i915: Wrap drm_gem_object_reference in i915_gem_object_get Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 19/62] drm/i915: Rename drm_gem_object_unreference_unlocked " Chris Wilson
                   ` (45 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h              |  7 +++++++
 drivers/gpu/drm/i915/i915_gem.c              | 26 +++++++++++++-------------
 drivers/gpu/drm/i915/i915_gem_batch_pool.c   |  4 ++--
 drivers/gpu/drm/i915/i915_gem_context.c      |  4 ++--
 drivers/gpu/drm/i915/i915_gem_evict.c        |  7 ++++---
 drivers/gpu/drm/i915/i915_gem_execbuffer.c   |  6 +++---
 drivers/gpu/drm/i915/i915_gem_render_state.c |  4 ++--
 drivers/gpu/drm/i915/i915_gem_shrinker.c     |  2 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c       |  2 +-
 drivers/gpu/drm/i915/i915_gem_tiling.c       |  4 ++--
 drivers/gpu/drm/i915/i915_gem_userptr.c      |  4 ++--
 drivers/gpu/drm/i915/i915_guc_submission.c   |  6 +++---
 drivers/gpu/drm/i915/intel_display.c         |  6 +++---
 drivers/gpu/drm/i915/intel_fbdev.c           |  2 +-
 drivers/gpu/drm/i915/intel_guc_loader.c      |  8 +++++---
 drivers/gpu/drm/i915/intel_lrc.c             |  6 +++---
 drivers/gpu/drm/i915/intel_overlay.c         |  8 ++++----
 drivers/gpu/drm/i915/intel_pm.c              |  2 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c      | 14 +++++++-------
 19 files changed, 66 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1ff7a9df4209..2d8cc5f3a77b 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2294,6 +2294,13 @@ i915_gem_object_get(struct drm_i915_gem_object *obj)
 }
 __deprecated extern void drm_gem_object_reference(struct drm_gem_object *);
 
+__attribute__((nonnull)) static inline void
+i915_gem_object_put(struct drm_i915_gem_object *obj)
+{
+	drm_gem_object_unreference(&obj->base);
+}
+__deprecated extern void drm_gem_object_unreference(struct drm_gem_object *);
+
 /*
  * Optimised SGL iterator for GEM objects
  */
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4aecdd4434d8..e887d07dea4c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -272,7 +272,7 @@ drop_pages(struct drm_i915_gem_object *obj)
 			break;
 
 	ret = i915_gem_object_put_pages(obj);
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 
 	return ret;
 }
@@ -721,7 +721,7 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
 	ret = i915_gem_shmem_pread(dev, obj, args, file);
 
 out:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
 	return ret;
@@ -1096,7 +1096,7 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 	}
 
 out:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
 put_rpm:
@@ -1280,7 +1280,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 					ORIGIN_GTT : ORIGIN_CPU);
 
 unref:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
 	return ret;
@@ -1311,7 +1311,7 @@ i915_gem_sw_finish_ioctl(struct drm_device *dev, void *data,
 	if (obj->pin_display)
 		i915_gem_object_flush_cpu_write_domain(obj);
 
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
 	return ret;
@@ -1733,7 +1733,7 @@ i915_gem_mmap_gtt(struct drm_file *file,
 	*offset = drm_vma_node_offset_addr(&obj->base.vma_node);
 
 out:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
 	return ret;
@@ -2157,7 +2157,7 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
 	}
 
 	i915_gem_request_assign(&obj->last_fenced_req, NULL);
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 }
 
 static bool i915_context_is_banned(const struct i915_gem_context *ctx)
@@ -2526,7 +2526,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		goto out;
 	}
 
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		if (obj->last_read_req[i] == NULL)
@@ -2547,7 +2547,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	return ret;
 
 out:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	mutex_unlock(&dev->struct_mutex);
 	return ret;
 }
@@ -3370,7 +3370,7 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
 
 	ret = i915_gem_object_set_cache_level(obj, level);
 
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
 rpm_put:
@@ -3760,7 +3760,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 	}
 
 unref:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
 	return ret;
@@ -3824,7 +3824,7 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	args->retained = obj->madv != __I915_MADV_PURGED;
 
 out:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
 	return ret;
@@ -4670,6 +4670,6 @@ i915_gem_object_create_from_data(struct drm_device *dev,
 	return obj;
 
 fail:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	return ERR_PTR(ret);
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_batch_pool.c b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
index 3752d5daa4b2..3507b2753fd3 100644
--- a/drivers/gpu/drm/i915/i915_gem_batch_pool.c
+++ b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
@@ -75,7 +75,7 @@ void i915_gem_batch_pool_fini(struct i915_gem_batch_pool *pool)
 						 batch_pool_link);
 
 			list_del(&obj->batch_pool_link);
-			drm_gem_object_unreference(&obj->base);
+			i915_gem_object_put(obj);
 		}
 	}
 }
@@ -121,7 +121,7 @@ i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
 		/* While we're looping, do some clean up */
 		if (tmp->madv == __I915_MADV_PURGED) {
 			list_del(&tmp->batch_pool_link);
-			drm_gem_object_unreference(&tmp->base);
+			i915_gem_object_put(tmp);
 			continue;
 		}
 
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index b62862e31642..d8ef41138c95 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -176,7 +176,7 @@ void i915_gem_context_free(struct kref *ctx_ref)
 		if (ce->ringbuf)
 			intel_ringbuffer_free(ce->ringbuf);
 
-		drm_gem_object_unreference(&ce->state->base);
+		i915_gem_object_put(ce->state);
 	}
 
 	list_del(&ctx->link);
@@ -216,7 +216,7 @@ i915_gem_alloc_context_obj(struct drm_device *dev, size_t size)
 		ret = i915_gem_object_set_cache_level(obj, I915_CACHE_L3_LLC);
 		/* Failure shouldn't ever happen this early */
 		if (WARN_ON(ret)) {
-			drm_gem_object_unreference(&obj->base);
+			i915_gem_object_put(obj);
 			return ERR_PTR(ret);
 		}
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index d5777a0750f0..5a02c32e9ae6 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -222,17 +222,18 @@ found:
 
 	/* Unbinding will emit any required flushes */
 	while (!list_empty(&eviction_list)) {
-		struct drm_gem_object *obj;
+		struct drm_i915_gem_object *obj;
+
 		vma = list_first_entry(&eviction_list,
 				       struct i915_vma,
 				       exec_list);
 
-		obj =  &vma->obj->base;
+		obj =  vma->obj;
 		list_del_init(&vma->exec_list);
 		if (ret == 0)
 			ret = i915_vma_unbind(vma);
 
-		drm_gem_object_unreference(obj);
+		i915_gem_object_put(obj);
 	}
 
 	return ret;
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 590c4d3ac2e4..147279fc1b67 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -175,7 +175,7 @@ err:
 				       struct drm_i915_gem_object,
 				       obj_exec_link);
 		list_del_init(&obj->obj_exec_link);
-		drm_gem_object_unreference(&obj->base);
+		i915_gem_object_put(obj);
 	}
 	/*
 	 * Objects already transfered to the vmas list will be unreferenced by
@@ -234,7 +234,7 @@ static void eb_destroy(struct eb_vmas *eb)
 				       exec_list);
 		list_del_init(&vma->exec_list);
 		i915_gem_execbuffer_unreserve_vma(vma);
-		drm_gem_object_unreference(&vma->obj->base);
+		i915_gem_object_put(vma->obj);
 	}
 	kfree(eb);
 }
@@ -843,7 +843,7 @@ i915_gem_execbuffer_relocate_slow(struct drm_device *dev,
 		vma = list_first_entry(&eb->vmas, struct i915_vma, exec_list);
 		list_del_init(&vma->exec_list);
 		i915_gem_execbuffer_unreserve_vma(vma);
-		drm_gem_object_unreference(&vma->obj->base);
+		i915_gem_object_put(vma->obj);
 	}
 
 	mutex_unlock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index 7c93327b70fe..99eff898b4cb 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -70,7 +70,7 @@ static int render_state_init(struct render_state *so,
 	return 0;
 
 free_gem:
-	drm_gem_object_unreference(&so->obj->base);
+	i915_gem_object_put(so->obj);
 	return ret;
 }
 
@@ -167,7 +167,7 @@ err_out:
 void i915_gem_render_state_fini(struct render_state *so)
 {
 	i915_gem_object_ggtt_unpin(so->obj);
-	drm_gem_object_unreference(&so->obj->base);
+	i915_gem_object_put(so->obj);
 }
 
 int i915_gem_render_state_prepare(struct intel_engine_cs *engine,
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index 416eaaece776..c4858c12f69e 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -201,7 +201,7 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
 			if (i915_gem_object_put_pages(obj) == 0)
 				count += obj->base.size >> PAGE_SHIFT;
 
-			drm_gem_object_unreference(&obj->base);
+			i915_gem_object_put(obj);
 		}
 		list_splice(&still_in_list, phase->list);
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index f9253f2b7ba0..ecf920b1f986 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -714,6 +714,6 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	return obj;
 
 err:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	return NULL;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index de2ba6bf95f1..9b096d1e8164 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -268,7 +268,7 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
 	}
 
 err:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	mutex_unlock(&dev->struct_mutex);
 
 	intel_runtime_pm_put(dev_priv);
@@ -328,7 +328,7 @@ i915_gem_get_tiling(struct drm_device *dev, void *data,
 	if (args->swizzle_mode == I915_BIT_6_SWIZZLE_9_10_17)
 		args->swizzle_mode = I915_BIT_6_SWIZZLE_9_10;
 
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	mutex_unlock(&dev->struct_mutex);
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index c41bf74f926e..cd4af22b8c59 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -119,7 +119,7 @@ static void cancel_userptr(struct work_struct *work)
 		dev_priv->mm.interruptible = was_interruptible;
 	}
 
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	mutex_unlock(&dev->struct_mutex);
 }
 
@@ -577,7 +577,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 	}
 
 	obj->userptr.workers--;
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	mutex_unlock(&dev->struct_mutex);
 
 	release_pages(pvec, pinned, 0);
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 629111d42ce0..4cec580784ea 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -611,13 +611,13 @@ static struct drm_i915_gem_object *gem_allocate_guc_obj(struct drm_device *dev,
 		return NULL;
 
 	if (i915_gem_object_get_pages(obj)) {
-		drm_gem_object_unreference(&obj->base);
+		i915_gem_object_put(obj);
 		return NULL;
 	}
 
 	if (i915_gem_obj_ggtt_pin(obj, PAGE_SIZE,
 			PIN_OFFSET_BIAS | GUC_WOPCM_TOP)) {
-		drm_gem_object_unreference(&obj->base);
+		i915_gem_object_put(obj);
 		return NULL;
 	}
 
@@ -639,7 +639,7 @@ static void gem_release_guc_obj(struct drm_i915_gem_object *obj)
 	if (i915_gem_obj_is_pinned(obj))
 		i915_gem_object_ggtt_unpin(obj);
 
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 }
 
 static void guc_client_free(struct drm_device *dev,
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index ab168f54c046..7675f1080a0f 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2523,7 +2523,7 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
 	return true;
 
 out_unref_obj:
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	mutex_unlock(&dev->struct_mutex);
 	return false;
 }
@@ -11004,7 +11004,7 @@ static void intel_unpin_work_fn(struct work_struct *__work)
 
 	mutex_lock(&dev->struct_mutex);
 	intel_unpin_fb_obj(work->old_fb, primary->state->rotation);
-	drm_gem_object_unreference(&work->pending_flip_obj->base);
+	i915_gem_object_put(work->pending_flip_obj);
 	mutex_unlock(&dev->struct_mutex);
 
 	i915_gem_request_put(work->flip_queued_req);
@@ -14769,7 +14769,7 @@ static void intel_user_framebuffer_destroy(struct drm_framebuffer *fb)
 	drm_framebuffer_cleanup(fb);
 	mutex_lock(&dev->struct_mutex);
 	WARN_ON(!intel_fb->obj->framebuffer_references--);
-	drm_gem_object_unreference(&intel_fb->obj->base);
+	i915_gem_object_put(intel_fb->obj);
 	mutex_unlock(&dev->struct_mutex);
 	kfree(intel_fb);
 }
diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
index f39d525169f4..10600975fe8d 100644
--- a/drivers/gpu/drm/i915/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/intel_fbdev.c
@@ -159,7 +159,7 @@ static int intelfb_alloc(struct drm_fb_helper *helper,
 
 	fb = __intel_framebuffer_create(dev, &mode_cmd, obj);
 	if (IS_ERR(fb)) {
-		drm_gem_object_unreference(&obj->base);
+		i915_gem_object_put(obj);
 		ret = PTR_ERR(fb);
 		goto out;
 	}
diff --git a/drivers/gpu/drm/i915/intel_guc_loader.c b/drivers/gpu/drm/i915/intel_guc_loader.c
index f2b88c7209cb..74a5f11a5689 100644
--- a/drivers/gpu/drm/i915/intel_guc_loader.c
+++ b/drivers/gpu/drm/i915/intel_guc_loader.c
@@ -657,7 +657,7 @@ fail:
 	mutex_lock(&dev->struct_mutex);
 	obj = guc_fw->guc_fw_obj;
 	if (obj)
-		drm_gem_object_unreference(&obj->base);
+		i915_gem_object_put(obj);
 	guc_fw->guc_fw_obj = NULL;
 	mutex_unlock(&dev->struct_mutex);
 
@@ -728,13 +728,15 @@ void intel_guc_fini(struct drm_device *dev)
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_guc_fw *guc_fw = &dev_priv->guc.guc_fw;
 
+	if (!guc_fw->guc_fw_obj)
+		return;
+
 	mutex_lock(&dev->struct_mutex);
 	direct_interrupts_to_host(dev_priv);
 	i915_guc_submission_disable(dev);
 	i915_guc_submission_fini(dev);
 
-	if (guc_fw->guc_fw_obj)
-		drm_gem_object_unreference(&guc_fw->guc_fw_obj->base);
+	i915_gem_object_put(guc_fw->guc_fw_obj);
 	guc_fw->guc_fw_obj = NULL;
 	mutex_unlock(&dev->struct_mutex);
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index d55aa9ca2877..5ef81347055c 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1327,7 +1327,7 @@ static int lrc_setup_wa_ctx_obj(struct intel_engine_cs *engine, u32 size)
 	if (ret) {
 		DRM_DEBUG_DRIVER("pin LRC WA ctx backing obj failed: %d\n",
 				 ret);
-		drm_gem_object_unreference(&engine->wa_ctx.obj->base);
+		i915_gem_object_put(engine->wa_ctx.obj);
 		return ret;
 	}
 
@@ -1338,7 +1338,7 @@ static void lrc_destroy_wa_ctx_obj(struct intel_engine_cs *engine)
 {
 	if (engine->wa_ctx.obj) {
 		i915_gem_object_ggtt_unpin(engine->wa_ctx.obj);
-		drm_gem_object_unreference(&engine->wa_ctx.obj->base);
+		i915_gem_object_put(engine->wa_ctx.obj);
 		engine->wa_ctx.obj = NULL;
 	}
 }
@@ -2465,7 +2465,7 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
 error_ringbuf:
 	intel_ringbuffer_free(ringbuf);
 error_deref_obj:
-	drm_gem_object_unreference(&ctx_obj->base);
+	i915_gem_object_put(ctx_obj);
 	ce->ringbuf = NULL;
 	ce->state = NULL;
 	return ret;
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index 2dc9bde714f3..57e919db1ae1 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -308,7 +308,7 @@ static void intel_overlay_release_old_vid_tail(struct intel_overlay *overlay)
 	struct drm_i915_gem_object *obj = overlay->old_vid_bo;
 
 	i915_gem_object_ggtt_unpin(obj);
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 
 	overlay->old_vid_bo = NULL;
 }
@@ -322,7 +322,7 @@ static void intel_overlay_off_tail(struct intel_overlay *overlay)
 		return;
 
 	i915_gem_object_ggtt_unpin(obj);
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	overlay->vid_bo = NULL;
 
 	overlay->crtc->overlay = NULL;
@@ -1218,7 +1218,7 @@ int intel_overlay_put_image_ioctl(struct drm_device *dev, void *data,
 out_unlock:
 	mutex_unlock(&dev->struct_mutex);
 	drm_modeset_unlock_all(dev);
-	drm_gem_object_unreference_unlocked(&new_bo->base);
+	i915_gem_object_put(new_bo);
 out_free:
 	kfree(params);
 
@@ -1441,7 +1441,7 @@ out_unpin_bo:
 	if (!OVERLAY_NEEDS_PHYSICAL(dev_priv))
 		i915_gem_object_ggtt_unpin(reg_bo);
 out_free_bo:
-	drm_gem_object_unreference(&reg_bo->base);
+	i915_gem_object_put(reg_bo);
 out_free:
 	mutex_unlock(&dev_priv->dev->struct_mutex);
 	kfree(overlay);
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index ee247063c1b2..337f46c50934 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -5680,7 +5680,7 @@ static void valleyview_cleanup_pctx(struct drm_i915_private *dev_priv)
 	if (WARN_ON(!dev_priv->vlv_pctx))
 		return;
 
-	drm_gem_object_unreference_unlocked(&dev_priv->vlv_pctx->base);
+	i915_gem_object_put(dev_priv->vlv_pctx);
 	dev_priv->vlv_pctx = NULL;
 }
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index e6a2e4973a01..71ddf1dfea76 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -641,7 +641,7 @@ void intel_fini_pipe_control(struct intel_engine_cs *engine)
 		return;
 
 	i915_gem_object_ggtt_unpin(engine->scratch.obj);
-	drm_gem_object_unreference(&engine->scratch.obj->base);
+	i915_gem_object_put(engine->scratch.obj);
 	engine->scratch.obj = NULL;
 }
 
@@ -672,7 +672,7 @@ int intel_init_pipe_control(struct intel_engine_cs *engine, int size)
 	return 0;
 
 err_unref:
-	drm_gem_object_unreference(&engine->scratch.obj->base);
+	i915_gem_object_put(engine->scratch.obj);
 err:
 	return ret;
 }
@@ -1230,7 +1230,7 @@ static void render_ring_cleanup(struct intel_engine_cs *engine)
 
 	if (dev_priv->semaphore_obj) {
 		i915_gem_object_ggtt_unpin(dev_priv->semaphore_obj);
-		drm_gem_object_unreference(&dev_priv->semaphore_obj->base);
+		i915_gem_object_put(dev_priv->semaphore_obj);
 		dev_priv->semaphore_obj = NULL;
 	}
 
@@ -1817,7 +1817,7 @@ static void cleanup_status_page(struct intel_engine_cs *engine)
 
 	kunmap(sg_page(obj->pages->sgl));
 	i915_gem_object_ggtt_unpin(obj);
-	drm_gem_object_unreference(&obj->base);
+	i915_gem_object_put(obj);
 	engine->status_page.obj = NULL;
 }
 
@@ -1855,7 +1855,7 @@ static int init_status_page(struct intel_engine_cs *engine)
 		ret = i915_gem_obj_ggtt_pin(obj, 4096, flags);
 		if (ret) {
 err_unref:
-			drm_gem_object_unreference(&obj->base);
+			i915_gem_object_put(obj);
 			return ret;
 		}
 
@@ -1958,7 +1958,7 @@ err_unpin:
 
 static void intel_destroy_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
 {
-	drm_gem_object_unreference(&ringbuf->obj->base);
+	i915_gem_object_put(ringbuf->obj);
 	ringbuf->obj = NULL;
 }
 
@@ -2613,7 +2613,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 				i915_gem_object_set_cache_level(obj, I915_CACHE_LLC);
 				ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_NONBLOCK);
 				if (ret != 0) {
-					drm_gem_object_unreference(&obj->base);
+					i915_gem_object_put(obj);
 					DRM_ERROR("Failed to pin semaphore bo. Disabling semaphores\n");
 					i915.semaphores = 0;
 				} else
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 19/62] drm/i915: Rename drm_gem_object_unreference_unlocked in preparation for lockless free
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (17 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 18/62] drm/i915: Rename drm_gem_object_unreference in preparation for lockless free Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 20/62] drm/i915: Disable waitboosting for fence_wait() Chris Wilson
                   ` (44 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h         |  7 +++++++
 drivers/gpu/drm/i915/i915_gem.c         | 10 +++++-----
 drivers/gpu/drm/i915/i915_gem_tiling.c  |  2 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c |  2 +-
 drivers/gpu/drm/i915/intel_display.c    |  6 +++---
 drivers/gpu/drm/i915/intel_overlay.c    |  2 +-
 6 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2d8cc5f3a77b..316192077142 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2301,6 +2301,13 @@ i915_gem_object_put(struct drm_i915_gem_object *obj)
 }
 __deprecated extern void drm_gem_object_unreference(struct drm_gem_object *);
 
+static inline void __attribute__((nonnull))
+i915_gem_object_put_unlocked(struct drm_i915_gem_object *obj)
+{
+	drm_gem_object_unreference_unlocked(&obj->base);
+}
+__deprecated extern void drm_gem_object_unreference_unlocked(struct drm_gem_object *);
+
 /*
  * Optimised SGL iterator for GEM objects
  */
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e887d07dea4c..50df7a11d6b1 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -387,7 +387,7 @@ i915_gem_create(struct drm_file *file,
 
 	ret = drm_gem_handle_create(file, &obj->base, &handle);
 	/* drop reference from allocate - handle holds it now */
-	drm_gem_object_unreference_unlocked(&obj->base);
+	i915_gem_object_put_unlocked(obj);
 	if (ret)
 		return ret;
 
@@ -1356,7 +1356,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 	 * pages from.
 	 */
 	if (!obj->base.filp) {
-		drm_gem_object_unreference_unlocked(&obj->base);
+		i915_gem_object_put_unlocked(obj);
 		return -EINVAL;
 	}
 
@@ -1368,7 +1368,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 		struct vm_area_struct *vma;
 
 		if (down_write_killable(&mm->mmap_sem)) {
-			drm_gem_object_unreference_unlocked(&obj->base);
+			i915_gem_object_put_unlocked(obj);
 			return -EINTR;
 		}
 		vma = find_vma(mm, addr);
@@ -1379,7 +1379,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 			addr = -ENOMEM;
 		up_write(&mm->mmap_sem);
 	}
-	drm_gem_object_unreference_unlocked(&obj->base);
+	i915_gem_object_put_unlocked(obj);
 	if (IS_ERR((void *)addr))
 		return addr;
 
@@ -3320,7 +3320,7 @@ int i915_gem_get_caching_ioctl(struct drm_device *dev, void *data,
 		break;
 	}
 
-	drm_gem_object_unreference_unlocked(&obj->base);
+	i915_gem_object_put_unlocked(obj);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index 9b096d1e8164..adeb0621e1f1 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -172,7 +172,7 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
 
 	if (!i915_tiling_ok(dev,
 			    args->stride, obj->base.size, args->tiling_mode)) {
-		drm_gem_object_unreference_unlocked(&obj->base);
+		i915_gem_object_put_unlocked(obj);
 		return -EINVAL;
 	}
 
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index cd4af22b8c59..ca8b82ab93d6 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -845,7 +845,7 @@ i915_gem_userptr_ioctl(struct drm_device *dev, void *data, struct drm_file *file
 		ret = drm_gem_handle_create(file, &obj->base, &handle);
 
 	/* drop reference from allocate - handle holds it now */
-	drm_gem_object_unreference_unlocked(&obj->base);
+	i915_gem_object_put_unlocked(obj);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 7675f1080a0f..30f1854b3ab9 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -10493,7 +10493,7 @@ intel_framebuffer_create_for_mode(struct drm_device *dev,
 
 	fb = intel_framebuffer_create(dev, &mode_cmd, obj);
 	if (IS_ERR(fb))
-		drm_gem_object_unreference_unlocked(&obj->base);
+		i915_gem_object_put_unlocked(obj);
 
 	return fb;
 }
@@ -11802,7 +11802,7 @@ cleanup:
 	crtc->primary->fb = old_fb;
 	update_state_fb(crtc->primary);
 
-	drm_gem_object_unreference_unlocked(&obj->base);
+	i915_gem_object_put_unlocked(obj);
 	drm_framebuffer_unreference(work->old_fb);
 
 	spin_lock_irq(&dev->event_lock);
@@ -15012,7 +15012,7 @@ intel_user_framebuffer_create(struct drm_device *dev,
 
 	fb = intel_framebuffer_create(dev, &mode_cmd, obj);
 	if (IS_ERR(fb))
-		drm_gem_object_unreference_unlocked(&obj->base);
+		i915_gem_object_put_unlocked(obj);
 
 	return fb;
 }
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index 57e919db1ae1..7f91f18ad29d 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -1458,7 +1458,7 @@ void intel_cleanup_overlay(struct drm_i915_private *dev_priv)
 	 * hardware should be off already */
 	WARN_ON(dev_priv->overlay->active);
 
-	drm_gem_object_unreference_unlocked(&dev_priv->overlay->reg_bo->base);
+	i915_gem_object_put_unlocked(dev_priv->overlay->reg_bo);
 	kfree(dev_priv->overlay);
 }
 
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 20/62] drm/i915: Disable waitboosting for fence_wait()
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (18 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 19/62] drm/i915: Rename drm_gem_object_unreference_unlocked " Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 21/62] drm/i915: Disable waitboosting for mmioflips/semaphores Chris Wilson
                   ` (43 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

We want to restrict waitboosting to known process contexts, where we can
track which clients are receiving waitboosts and so prevent excessive
power waste. For fence_wait() we have no client tracking at all, which
leaves the interface open to abuse.
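
A minimal sketch of the sentinel idiom (maybe_boost() is a made-up
helper, not driver code): encoding "never boost" as an ERR_PTR lets the
existing rps client pointer distinguish three states (a tracked client,
an anonymous NULL waiter, and a kernel-internal wait) without widening
the function signature.

	#include <linux/err.h>

	#define NO_WAITBOOST ERR_PTR(-1)

	struct intel_rps_client;

	static void maybe_boost(struct intel_rps_client *rps)
	{
		if (IS_ERR(rps))
			return;	/* NO_WAITBOOST: kernel-internal wait */

		/* rps is NULL (anonymous) or a tracked client: boost ok */
	}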

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_request.c | 7 ++++---
 drivers/gpu/drm/i915/i915_gem_request.h | 1 +
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 987a43f1aac8..ba745f0740d0 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -74,7 +74,7 @@ static signed long i915_fence_wait(struct fence *fence,
 
 	ret = __i915_wait_request(to_i915_request(fence),
 				  interruptible, timeout,
-				  NULL);
+				  NO_WAITBOOST);
 	if (ret == -ETIME)
 		return 0;
 
@@ -634,7 +634,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
 	 * forcing the clocks too high for the whole system, we only allow
 	 * each client to waitboost once in a busy period.
 	 */
-	if (INTEL_INFO(req->i915)->gen >= 6)
+	if (!IS_ERR(rps) && INTEL_INFO(req->i915)->gen >= 6)
 		gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
 
 	/* Optimistic spin for the next ~jiffie before touching IRQs */
@@ -707,7 +707,8 @@ complete:
 			*timeout = 0;
 	}
 
-	if (rps && req->fence.seqno == req->engine->last_submitted_seqno) {
+	if (!IS_ERR_OR_NULL(rps) &&
+	    req->fence.seqno == req->engine->last_submitted_seqno) {
 		/* The GPU is now idle and this client has stalled.
 		 * Since no other client has submitted a request in the
 		 * meantime, assume that this client is the only one
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index b1bc96c9e31d..a3cac13ab9af 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -205,6 +205,7 @@ void __i915_add_request(struct drm_i915_gem_request *req,
 	__i915_add_request(req, NULL, false)
 
 struct intel_rps_client;
+#define NO_WAITBOOST ERR_PTR(-1)
 
 int __i915_wait_request(struct drm_i915_gem_request *req,
 			bool interruptible,
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 21/62] drm/i915: Disable waitboosting for mmioflips/semaphores
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (19 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 20/62] drm/i915: Disable waitboosting for fence_wait() Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 22/62] drm/i915: Treat ringbuffer writes as write to normal memory Chris Wilson
                   ` (42 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Since

commit a6f766f3975185af66a31a2cea2cd38721645999
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Mon Apr 27 13:41:20 2015 +0100

    drm/i915: Limit ring synchronisation (sw sempahores) RPS boosts

and

commit bcafc4e38b6ad03f48989b7ecaff03845b5b7acf
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Mon Apr 27 13:41:21 2015 +0100

    drm/i915: Limit mmio flip RPS boosts

we have limited the waitboosting for semaphores and flips. Ideally we do
not want to boost in either of these instances as no consumer is waiting
upon the results. With the introduction of NO_WAITBOOST in the previous
patch, we can finally disable these needless boosts.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c  | 8 +-------
 drivers/gpu/drm/i915/i915_drv.h      | 2 --
 drivers/gpu/drm/i915/i915_gem.c      | 2 +-
 drivers/gpu/drm/i915/intel_display.c | 2 +-
 drivers/gpu/drm/i915/intel_pm.c      | 2 --
 5 files changed, 3 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 8e37315443f3..daabbc6b65e9 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2462,13 +2462,7 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
 			   list_empty(&file_priv->rps.link) ? "" : ", active");
 		rcu_read_unlock();
 	}
-	seq_printf(m, "Semaphore boosts: %d%s\n",
-		   dev_priv->rps.semaphores.boosts,
-		   list_empty(&dev_priv->rps.semaphores.link) ? "" : ", active");
-	seq_printf(m, "MMIO flip boosts: %d%s\n",
-		   dev_priv->rps.mmioflips.boosts,
-		   list_empty(&dev_priv->rps.mmioflips.link) ? "" : ", active");
-	seq_printf(m, "Kernel boosts: %d\n", dev_priv->rps.boosts);
+	seq_printf(m, "Kernel (anonymous) boosts: %d\n", dev_priv->rps.boosts);
 	spin_unlock(&dev_priv->rps.client_lock);
 	mutex_unlock(&dev->filelist_mutex);
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 316192077142..548fd3b9d858 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1176,8 +1176,6 @@ struct intel_gen6_power_mgmt {
 	struct delayed_work delayed_resume_work;
 	unsigned boosts;
 
-	struct intel_rps_client semaphores, mmioflips;
-
 	/* manual wa residency calculations */
 	struct intel_rps_ei up_ei, down_ei;
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 50df7a11d6b1..703e98e1a2e5 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2573,7 +2573,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
 		ret = __i915_wait_request(from_req,
 					  i915->mm.interruptible,
 					  NULL,
-					  &i915->rps.semaphores);
+					  NO_WAITBOOST);
 		if (ret)
 			return ret;
 
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 30f1854b3ab9..849abb565d3d 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11520,7 +11520,7 @@ static void intel_mmio_flip_work_func(struct work_struct *w)
 	if (work->flip_queued_req)
 		WARN_ON(__i915_wait_request(work->flip_queued_req,
 					    false, NULL,
-					    &dev_priv->rps.mmioflips));
+					    NO_WAITBOOST));
 
 	/* For framebuffer backed by dmabuf, wait for fence */
 	if (obj->base.dma_buf)
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 337f46c50934..c141d3e15eed 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -7730,8 +7730,6 @@ void intel_pm_setup(struct drm_device *dev)
 	INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work,
 			  __intel_autoenable_gt_powersave);
 	INIT_LIST_HEAD(&dev_priv->rps.clients);
-	INIT_LIST_HEAD(&dev_priv->rps.semaphores.link);
-	INIT_LIST_HEAD(&dev_priv->rps.mmioflips.link);
 
 	dev_priv->pm.suspended = false;
 	atomic_set(&dev_priv->pm.wakeref_count, 0);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 22/62] drm/i915: Treat ringbuffer writes as write to normal memory
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (20 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 21/62] drm/i915: Disable waitboosting for mmioflips/semaphores Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 23/62] drm/i915: Rename ring->virtual_start as ring->vaddr Chris Wilson
                   ` (41 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Ringbuffers are now written either through the LLC or via WC paths, so
treating them as simply iomem is no longer adequate. However, for the
older !llc hardware, the TAIL register update is documented as
serialising, so we can relax the barriers when filling the rings (and
even if it were not, it is still an uncached register write and so
serialising anyway).

For simplicity, let's ignore the iomem annotation.
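
The ordering argument, as a sketch (ring_emit() and ring_submit() are
illustrative names, not the driver's): stores into the ring go through
normal memory and may be combined or reordered, but the uncached MMIO
write that publishes the new TAIL is serialising, so the GPU cannot
observe TAIL advance ahead of the ring contents.

	#include <linux/io.h>

	static inline void ring_emit(u32 *ring, u32 *tail, u32 data)
	{
		ring[(*tail)++] = data;	/* plain store, no barrier */
	}

	static inline void ring_submit(void __iomem *tail_reg, u32 tail)
	{
		/* UC register write: serialises the preceding stores */
		writel(tail * sizeof(u32), tail_reg);
	}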

v2: Remove iomem from ringbuffer->virtual_address

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.h        |  6 +++---
 drivers/gpu/drm/i915/intel_ringbuffer.h | 22 ++++++++++++++--------
 2 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index a8db42a9c50f..e99848067fb8 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -73,8 +73,9 @@ int logical_ring_flush_all_caches(struct drm_i915_gem_request *req);
  */
 static inline void intel_logical_ring_advance(struct intel_ringbuffer *ringbuf)
 {
-	ringbuf->tail &= ringbuf->size - 1;
+	__intel_ringbuffer_advance(ringbuf);
 }
+
 /**
  * intel_logical_ring_emit() - write a DWORD to the ringbuffer.
  * @ringbuf: Ringbuffer to write to.
@@ -83,8 +84,7 @@ static inline void intel_logical_ring_advance(struct intel_ringbuffer *ringbuf)
 static inline void intel_logical_ring_emit(struct intel_ringbuffer *ringbuf,
 					   u32 data)
 {
-	iowrite32(data, ringbuf->virtual_start + ringbuf->tail);
-	ringbuf->tail += 4;
+	__intel_ringbuffer_emit(ringbuf, data);
 }
 static inline void intel_logical_ring_emit_reg(struct intel_ringbuffer *ringbuf,
 					       i915_reg_t reg)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index b041fb6a6d01..5db7db069566 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -96,7 +96,7 @@ struct intel_ring_hangcheck {
 
 struct intel_ringbuffer {
 	struct drm_i915_gem_object *obj;
-	void __iomem *virtual_start;
+	void *virtual_start;
 	struct i915_vma *vma;
 
 	struct intel_engine_cs *engine;
@@ -462,12 +462,19 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request);
 
 int __must_check intel_ring_begin(struct drm_i915_gem_request *req, int n);
 int __must_check intel_ring_cacheline_align(struct drm_i915_gem_request *req);
-static inline void intel_ring_emit(struct intel_engine_cs *engine,
-				   u32 data)
+static inline void __intel_ringbuffer_emit(struct intel_ringbuffer *rb,
+					   u32 data)
 {
-	struct intel_ringbuffer *ringbuf = engine->buffer;
-	iowrite32(data, ringbuf->virtual_start + ringbuf->tail);
-	ringbuf->tail += 4;
+	*(uint32_t *)(rb->virtual_start + rb->tail) = data;
+	rb->tail += 4;
+}
+static inline void __intel_ringbuffer_advance(struct intel_ringbuffer *rb)
+{
+	rb->tail &= rb->size - 1;
+}
+static inline void intel_ring_emit(struct intel_engine_cs *engine, u32 data)
+{
+	__intel_ringbuffer_emit(engine->buffer, data);
 }
 static inline void intel_ring_emit_reg(struct intel_engine_cs *engine,
 				       i915_reg_t reg)
@@ -476,8 +483,7 @@ static inline void intel_ring_emit_reg(struct intel_engine_cs *engine,
 }
 static inline void intel_ring_advance(struct intel_engine_cs *engine)
 {
-	struct intel_ringbuffer *ringbuf = engine->buffer;
-	ringbuf->tail &= ringbuf->size - 1;
+	__intel_ringbuffer_advance(engine->buffer);
 }
 int __intel_ring_space(int head, int tail, int size);
 void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 23/62] drm/i915: Rename ring->virtual_start as ring->vaddr
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (21 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 22/62] drm/i915: Treat ringbuffer writes as write to normal memory Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 24/62] drm/i915: Convert i915_semaphores_is_enabled over to early sanitize Chris Wilson
                   ` (40 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Just a different colour to better match virtual addresses elsewhere.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_irq.c         | 8 ++++----
 drivers/gpu/drm/i915/intel_ringbuffer.c | 9 ++++-----
 drivers/gpu/drm/i915/intel_ringbuffer.h | 4 ++--
 3 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 860235d1e0bf..42149153510e 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2906,7 +2906,7 @@ semaphore_waits_for(struct intel_engine_cs *engine, u32 *seqno)
 		head &= engine->buffer->size - 1;
 
 		/* This here seems to blow up */
-		cmd = ioread32(engine->buffer->virtual_start + head);
+		cmd = ioread32(engine->buffer->vaddr + head);
 		if (cmd == ipehr)
 			break;
 
@@ -2916,11 +2916,11 @@ semaphore_waits_for(struct intel_engine_cs *engine, u32 *seqno)
 	if (!i)
 		return NULL;
 
-	*seqno = ioread32(engine->buffer->virtual_start + head + 4) + 1;
+	*seqno = ioread32(engine->buffer->vaddr + head + 4) + 1;
 	if (INTEL_GEN(dev_priv) >= 8) {
-		offset = ioread32(engine->buffer->virtual_start + head + 12);
+		offset = ioread32(engine->buffer->vaddr + head + 12);
 		offset <<= 32;
-		offset = ioread32(engine->buffer->virtual_start + head + 8);
+		offset = ioread32(engine->buffer->vaddr + head + 8);
 	}
 	return semaphore_wait_to_signaller_ring(engine, ipehr, offset);
 }
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 71ddf1dfea76..75b6d6eee0ac 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1892,13 +1892,13 @@ static int init_phys_status_page(struct intel_engine_cs *engine)
 void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
 {
 	GEM_BUG_ON(ringbuf->vma == NULL);
-	GEM_BUG_ON(ringbuf->virtual_start == NULL);
+	GEM_BUG_ON(ringbuf->vaddr == NULL);
 
 	if (HAS_LLC(ringbuf->obj->base.dev) && !ringbuf->obj->stolen)
 		i915_gem_object_unpin_map(ringbuf->obj);
 	else
 		i915_vma_unpin_iomap(ringbuf->vma);
-	ringbuf->virtual_start = NULL;
+	ringbuf->vaddr = NULL;
 
 	i915_gem_object_ggtt_unpin(ringbuf->obj);
 	ringbuf->vma = NULL;
@@ -1947,7 +1947,7 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_i915_private *dev_priv,
 		}
 	}
 
-	ringbuf->virtual_start = addr;
+	ringbuf->vaddr = addr;
 	ringbuf->vma = i915_gem_obj_to_ggtt(obj);
 	return 0;
 
@@ -2317,8 +2317,7 @@ int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
 		GEM_BUG_ON(ringbuf->tail + remain_actual > ringbuf->size);
 
 		/* Fill the tail with MI_NOOP */
-		memset(ringbuf->virtual_start + ringbuf->tail,
-		       0, remain_actual);
+		memset(ringbuf->vaddr + ringbuf->tail, 0, remain_actual);
 		ringbuf->tail = 0;
 		ringbuf->space -= remain_actual;
 	}
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 5db7db069566..3cbcdd5751ad 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -96,7 +96,7 @@ struct intel_ring_hangcheck {
 
 struct intel_ringbuffer {
 	struct drm_i915_gem_object *obj;
-	void *virtual_start;
+	void *vaddr;
 	struct i915_vma *vma;
 
 	struct intel_engine_cs *engine;
@@ -465,7 +465,7 @@ int __must_check intel_ring_cacheline_align(struct drm_i915_gem_request *req);
 static inline void __intel_ringbuffer_emit(struct intel_ringbuffer *rb,
 					   u32 data)
 {
-	*(uint32_t *)(rb->virtual_start + rb->tail) = data;
+	*(uint32_t *)(rb->vaddr + rb->tail) = data;
 	rb->tail += 4;
 }
 static inline void __intel_ringbuffer_advance(struct intel_ringbuffer *rb)
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 24/62] drm/i915: Convert i915_semaphores_is_enabled over to early sanitize
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (22 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 23/62] drm/i915: Rename ring->virtual_start as ring->vaddr Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 25/62] drm/i915: Unify intel_logical_ring_emit and intel_ring_emit Chris Wilson
                   ` (39 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Rather than recomputing whether semaphores are enabled, we can do that
computation once during early initialisation as the i915.semaphores
module parameter is now read-only.
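
The shape of the change, sketched with an invented helper
(sanitize_tristate() is not driver code): a tristate module parameter
(-1 auto, 0 off, 1 on) is resolved once at init time into a plain
boolean, so the rest of the driver only ever tests the sanitized value.

	static bool sanitize_tristate(int value, bool supported,
				      bool default_on)
	{
		if (!supported)
			return false;	/* hardware cannot do it at all */
		if (value >= 0)
			return value;	/* explicit user choice wins */
		return default_on;	/* auto: pick the safe default */
	}

	/* once, at init: param = sanitize_tristate(param, ..., ...); */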

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c     |  2 +-
 drivers/gpu/drm/i915/i915_drv.c         |  4 +++-
 drivers/gpu/drm/i915/i915_drv.h         |  3 ++-
 drivers/gpu/drm/i915/i915_gem.c         | 27 ++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem_context.c |  2 +-
 drivers/gpu/drm/i915/i915_gpu_error.c   |  2 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c | 20 ++++++++++----------
 7 files changed, 44 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index daabbc6b65e9..c1f8b5126d16 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -3201,7 +3201,7 @@ static int i915_semaphore_status(struct seq_file *m, void *unused)
 	enum intel_engine_id id;
 	int j, ret;
 
-	if (!i915_semaphore_is_enabled(dev_priv)) {
+	if (!i915.semaphores) {
 		seq_puts(m, "Semaphores are disabled\n");
 		return 0;
 	}
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index f2ac0cae929b..babeee1a6127 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -318,7 +318,7 @@ static int i915_getparam(struct drm_device *dev, void *data,
 		value = 1;
 		break;
 	case I915_PARAM_HAS_SEMAPHORES:
-		value = i915_semaphore_is_enabled(dev_priv);
+		value = i915.semaphores;
 		break;
 	case I915_PARAM_HAS_PRIME_VMAP_FLUSH:
 		value = 1;
@@ -1102,6 +1102,8 @@ static void intel_device_info_runtime_init(struct drm_device *dev)
 	i915.enable_ppgtt =
 		intel_sanitize_enable_ppgtt(dev_priv, i915.enable_ppgtt);
 	DRM_DEBUG_DRIVER("ppgtt mode: %i\n", i915.enable_ppgtt);
+
+	i915.semaphores = intel_sanitize_semaphores(dev_priv, i915.semaphores);
 }
 
 static void intel_init_dpio(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 548fd3b9d858..fcac90104ba9 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2748,6 +2748,8 @@ extern int i915_resume_switcheroo(struct drm_device *dev);
 int intel_sanitize_enable_ppgtt(struct drm_i915_private *dev_priv,
 			       	int enable_ppgtt);
 
+bool intel_sanitize_semaphores(struct drm_i915_private *dev_priv, int value);
+
 /* i915_drv.c */
 void __printf(3, 4)
 __i915_printk(struct drm_i915_private *dev_priv, const char *level,
@@ -3528,7 +3530,6 @@ extern void intel_set_rps(struct drm_i915_private *dev_priv, u8 val);
 extern void intel_set_memory_cxsr(struct drm_i915_private *dev_priv,
 				  bool enable);
 
-extern bool i915_semaphore_is_enabled(struct drm_i915_private *dev_priv);
 int i915_reg_read_ioctl(struct drm_device *dev, void *data,
 			struct drm_file *file);
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 703e98e1a2e5..22c8361748d6 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2568,7 +2568,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
 	if (i915_gem_request_completed(from_req))
 		return 0;
 
-	if (!i915_semaphore_is_enabled(to_i915(obj->base.dev))) {
+	if (!i915.semaphores) {
 		struct drm_i915_private *i915 = to_i915(obj->base.dev);
 		ret = __i915_wait_request(from_req,
 					  i915->mm.interruptible,
@@ -4253,6 +4253,31 @@ out:
 	return ret;
 }
 
+bool intel_sanitize_semaphores(struct drm_i915_private *dev_priv, int value)
+{
+	if (INTEL_INFO(dev_priv)->gen < 6)
+		return false;
+
+	if (value >= 0)
+		return value;
+
+	/* TODO: make semaphores and Execlists play nicely together */
+	if (i915.enable_execlists)
+		return false;
+
+	/* Until we get further testing... */
+	if (IS_GEN8(dev_priv))
+		return false;
+
+#ifdef CONFIG_INTEL_IOMMU
+	/* Enable semaphores on SNB when IO remapping is off */
+	if (INTEL_INFO(dev_priv)->gen == 6 && intel_iommu_gfx_mapped)
+		return false;
+#endif
+
+	return true;
+}
+
 int i915_gem_init(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index d8ef41138c95..7c114f90f61a 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -518,7 +518,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 	u32 flags = hw_flags | MI_MM_SPACE_GTT;
 	const int num_rings =
 		/* Use an extended w/a on ivb+ if signalling from other rings */
-		i915_semaphore_is_enabled(dev_priv) ?
+		i915.semaphores ?
 		hweight32(INTEL_INFO(dev_priv)->ring_mask) - 1 :
 		0;
 	int len, ret;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 5332bd32c555..a8082b8a9797 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -863,7 +863,7 @@ static void gen8_record_semaphore_state(struct drm_i915_private *dev_priv,
 	struct intel_engine_cs *to;
 	enum intel_engine_id id;
 
-	if (!i915_semaphore_is_enabled(dev_priv))
+	if (!i915.semaphores)
 		return;
 
 	if (!error->semaphore_obj)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 75b6d6eee0ac..c0a132a742cb 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2603,7 +2603,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 		engine->irq_keep_mask = GT_RENDER_L3_PARITY_ERROR_INTERRUPT;
 
 	if (INTEL_GEN(dev_priv) >= 8) {
-		if (i915_semaphore_is_enabled(dev_priv)) {
+		if (i915.semaphores) {
 			obj = i915_gem_object_create(dev, 4096);
 			if (IS_ERR(obj)) {
 				DRM_ERROR("Failed to allocate semaphore bo. Disabling semaphores\n");
@@ -2626,7 +2626,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 		engine->irq_enable = gen8_ring_enable_irq;
 		engine->irq_disable = gen8_ring_disable_irq;
 		engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
-		if (i915_semaphore_is_enabled(dev_priv)) {
+		if (i915.semaphores) {
 			WARN_ON(!dev_priv->semaphore_obj);
 			engine->semaphore.sync_to = gen8_ring_sync;
 			engine->semaphore.signal = gen8_rcs_signal;
@@ -2642,7 +2642,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 		engine->irq_disable = gen6_ring_disable_irq;
 		engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
 		engine->irq_seqno_barrier = gen6_seqno_barrier;
-		if (i915_semaphore_is_enabled(dev_priv)) {
+		if (i915.semaphores) {
 			engine->semaphore.sync_to = gen6_ring_sync;
 			engine->semaphore.signal = gen6_signal;
 			/*
@@ -2745,7 +2745,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 			engine->irq_disable = gen8_ring_disable_irq;
 			engine->dispatch_execbuffer =
 				gen8_ring_dispatch_execbuffer;
-			if (i915_semaphore_is_enabled(dev_priv)) {
+			if (i915.semaphores) {
 				engine->semaphore.sync_to = gen8_ring_sync;
 				engine->semaphore.signal = gen8_xcs_signal;
 				GEN8_RING_SEMAPHORE_INIT(engine);
@@ -2756,7 +2756,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 			engine->irq_disable = gen6_ring_disable_irq;
 			engine->dispatch_execbuffer =
 				gen6_ring_dispatch_execbuffer;
-			if (i915_semaphore_is_enabled(dev_priv)) {
+			if (i915.semaphores) {
 				engine->semaphore.sync_to = gen6_ring_sync;
 				engine->semaphore.signal = gen6_signal;
 				engine->semaphore.mbox.wait[RCS] = MI_SEMAPHORE_SYNC_VR;
@@ -2816,7 +2816,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
 	engine->irq_disable = gen8_ring_disable_irq;
 	engine->dispatch_execbuffer =
 			gen8_ring_dispatch_execbuffer;
-	if (i915_semaphore_is_enabled(dev_priv)) {
+	if (i915.semaphores) {
 		engine->semaphore.sync_to = gen8_ring_sync;
 		engine->semaphore.signal = gen8_xcs_signal;
 		GEN8_RING_SEMAPHORE_INIT(engine);
@@ -2847,7 +2847,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
 		engine->irq_enable = gen8_ring_enable_irq;
 		engine->irq_disable = gen8_ring_disable_irq;
 		engine->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
-		if (i915_semaphore_is_enabled(dev_priv)) {
+		if (i915.semaphores) {
 			engine->semaphore.sync_to = gen8_ring_sync;
 			engine->semaphore.signal = gen8_xcs_signal;
 			GEN8_RING_SEMAPHORE_INIT(engine);
@@ -2857,7 +2857,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
 		engine->irq_enable = gen6_ring_enable_irq;
 		engine->irq_disable = gen6_ring_disable_irq;
 		engine->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
-		if (i915_semaphore_is_enabled(dev_priv)) {
+		if (i915.semaphores) {
 			engine->semaphore.signal = gen6_signal;
 			engine->semaphore.sync_to = gen6_ring_sync;
 			/*
@@ -2906,7 +2906,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 		engine->irq_enable = gen8_ring_enable_irq;
 		engine->irq_disable = gen8_ring_disable_irq;
 		engine->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
-		if (i915_semaphore_is_enabled(dev_priv)) {
+		if (i915.semaphores) {
 			engine->semaphore.sync_to = gen8_ring_sync;
 			engine->semaphore.signal = gen8_xcs_signal;
 			GEN8_RING_SEMAPHORE_INIT(engine);
@@ -2916,7 +2916,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 		engine->irq_enable = hsw_vebox_enable_irq;
 		engine->irq_disable = hsw_vebox_disable_irq;
 		engine->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
-		if (i915_semaphore_is_enabled(dev_priv)) {
+		if (i915.semaphores) {
 			engine->semaphore.sync_to = gen6_ring_sync;
 			engine->semaphore.signal = gen6_signal;
 			engine->semaphore.mbox.wait[RCS] = MI_SEMAPHORE_SYNC_VER;
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 25/62] drm/i915: Unify intel_logical_ring_emit and intel_ring_emit
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (23 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 24/62] drm/i915: Convert i915_semaphores_is_enabled over to early sanitize Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 26/62] drm/i915: Rename request->ring to request->engine Chris Wilson
                   ` (38 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Both perform the same actions with more or less indirection, so just
unify the code.
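
For reference, both paths bottom out in the same two helpers, which do
no more than write a dword at the ring's tail and wrap the tail back
into the buffer. A minimal sketch (modelled on the
__intel_ringbuffer_emit()/__intel_ringbuffer_advance() pair this series
builds on; the vaddr/size field names here are illustrative):

	static inline void
	__intel_ringbuffer_emit(struct intel_ringbuffer *rb, u32 data)
	{
		/* Write one dword at the current tail and bump it. */
		*(u32 *)(rb->vaddr + rb->tail) = data;
		rb->tail += sizeof(u32);
	}

	static inline void
	__intel_ringbuffer_advance(struct intel_ringbuffer *rb)
	{
		/* Keep the tail inside the power-of-two sized ring. */
		rb->tail &= rb->size - 1;
	}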

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_context.c    |  54 ++---
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  53 ++---
 drivers/gpu/drm/i915/i915_gem_gtt.c        |  62 ++---
 drivers/gpu/drm/i915/intel_display.c       |  80 +++----
 drivers/gpu/drm/i915/intel_lrc.c           | 160 ++++++-------
 drivers/gpu/drm/i915/intel_lrc.h           |  26 --
 drivers/gpu/drm/i915/intel_mocs.c          |  38 ++-
 drivers/gpu/drm/i915/intel_overlay.c       |  50 ++--
 drivers/gpu/drm/i915/intel_ringbuffer.c    | 365 +++++++++++++++--------------
 drivers/gpu/drm/i915/intel_ringbuffer.h    |  23 +-
 10 files changed, 439 insertions(+), 472 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 7c114f90f61a..41e32426d174 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -514,7 +514,7 @@ static inline int
 mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 {
 	struct drm_i915_private *dev_priv = req->i915;
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	u32 flags = hw_flags | MI_MM_SPACE_GTT;
 	const int num_rings =
 		/* Use an extended w/a on ivb+ if signalling from other rings */
@@ -529,7 +529,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 	 * itlb_before_ctx_switch.
 	 */
 	if (IS_GEN6(dev_priv)) {
-		ret = engine->flush(req, I915_GEM_GPU_DOMAINS, 0);
+		ret = req->engine->flush(req, I915_GEM_GPU_DOMAINS, 0);
 		if (ret)
 			return ret;
 	}
@@ -551,64 +551,64 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 
 	/* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw,bdw,chv */
 	if (INTEL_GEN(dev_priv) >= 7) {
-		intel_ring_emit(engine, MI_ARB_ON_OFF | MI_ARB_DISABLE);
+		intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_DISABLE);
 		if (num_rings) {
 			struct intel_engine_cs *signaller;
 
-			intel_ring_emit(engine,
+			intel_ring_emit(ring,
 					MI_LOAD_REGISTER_IMM(num_rings));
 			for_each_engine(signaller, dev_priv) {
-				if (signaller == engine)
+				if (signaller == req->engine)
 					continue;
 
-				intel_ring_emit_reg(engine,
+				intel_ring_emit_reg(ring,
 						    RING_PSMI_CTL(signaller->mmio_base));
-				intel_ring_emit(engine,
+				intel_ring_emit(ring,
 						_MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
 			}
 		}
 	}
 
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_emit(engine, MI_SET_CONTEXT);
-	intel_ring_emit(engine,
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_emit(ring, MI_SET_CONTEXT);
+	intel_ring_emit(ring,
 			i915_gem_obj_ggtt_offset(req->ctx->engine[RCS].state) |
 			flags);
 	/*
 	 * w/a: MI_SET_CONTEXT must always be followed by MI_NOOP
 	 * WaMiSetContext_Hang:snb,ivb,vlv
 	 */
-	intel_ring_emit(engine, MI_NOOP);
+	intel_ring_emit(ring, MI_NOOP);
 
 	if (INTEL_GEN(dev_priv) >= 7) {
 		if (num_rings) {
 			struct intel_engine_cs *signaller;
 			i915_reg_t last_reg = {}; /* keep gcc quiet */
 
-			intel_ring_emit(engine,
+			intel_ring_emit(ring,
 					MI_LOAD_REGISTER_IMM(num_rings));
 			for_each_engine(signaller, dev_priv) {
-				if (signaller == engine)
+				if (signaller == req->engine)
 					continue;
 
 				last_reg = RING_PSMI_CTL(signaller->mmio_base);
-				intel_ring_emit_reg(engine, last_reg);
-				intel_ring_emit(engine,
+				intel_ring_emit_reg(ring, last_reg);
+				intel_ring_emit(ring,
 						_MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
 			}
 
 			/* Insert a delay before the next switch! */
-			intel_ring_emit(engine,
+			intel_ring_emit(ring,
 					MI_STORE_REGISTER_MEM |
 					MI_SRM_LRM_GLOBAL_GTT);
-			intel_ring_emit_reg(engine, last_reg);
-			intel_ring_emit(engine, engine->scratch.gtt_offset);
-			intel_ring_emit(engine, MI_NOOP);
+			intel_ring_emit_reg(ring, last_reg);
+			intel_ring_emit(ring, req->engine->scratch.gtt_offset);
+			intel_ring_emit(ring, MI_NOOP);
 		}
-		intel_ring_emit(engine, MI_ARB_ON_OFF | MI_ARB_ENABLE);
+		intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_ENABLE);
 	}
 
-	intel_ring_advance(engine);
+	intel_ring_advance(ring);
 
 	return ret;
 }
@@ -616,7 +616,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 static int remap_l3(struct drm_i915_gem_request *req, int slice)
 {
 	u32 *remap_info = req->i915->l3_parity.remap_info[slice];
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int i, ret;
 
 	if (!remap_info)
@@ -631,13 +631,13 @@ static int remap_l3(struct drm_i915_gem_request *req, int slice)
 	 * here because no other code should access these registers other than
 	 * at initialization time.
 	 */
-	intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(GEN7_L3LOG_SIZE/4));
+	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(GEN7_L3LOG_SIZE/4));
 	for (i = 0; i < GEN7_L3LOG_SIZE/4; i++) {
-		intel_ring_emit_reg(engine, GEN7_L3LOG(slice, i));
-		intel_ring_emit(engine, remap_info[i]);
+		intel_ring_emit_reg(ring, GEN7_L3LOG(slice, i));
+		intel_ring_emit(ring, remap_info[i]);
 	}
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 147279fc1b67..99663e8429b3 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1138,14 +1138,12 @@ i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
 }
 
 static int
-i915_reset_gen7_sol_offsets(struct drm_device *dev,
-			    struct drm_i915_gem_request *req)
+i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = req->engine;
-	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret, i;
 
-	if (!IS_GEN7(dev) || engine != &dev_priv->engine[RCS]) {
+	if (!IS_GEN7(req->i915) || req->engine->id != RCS) {
 		DRM_DEBUG("sol reset is gen7/rcs only\n");
 		return -EINVAL;
 	}
@@ -1155,12 +1153,12 @@ i915_reset_gen7_sol_offsets(struct drm_device *dev,
 		return ret;
 
 	for (i = 0; i < 4; i++) {
-		intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(1));
-		intel_ring_emit_reg(engine, GEN7_SO_WRITE_OFFSET(i));
-		intel_ring_emit(engine, 0);
+		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
+		intel_ring_emit_reg(ring, GEN7_SO_WRITE_OFFSET(i));
+		intel_ring_emit(ring, 0);
 	}
 
-	intel_ring_advance(engine);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -1223,9 +1221,7 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
 			       struct drm_i915_gem_execbuffer2 *args,
 			       struct list_head *vmas)
 {
-	struct drm_device *dev = params->dev;
-	struct intel_engine_cs *engine = params->engine;
-	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_private *dev_priv = params->request->i915;
 	u64 exec_start, exec_len;
 	int instp_mode;
 	u32 instp_mask;
@@ -1239,34 +1235,31 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
 	if (ret)
 		return ret;
 
-	WARN(params->ctx->ppgtt && params->ctx->ppgtt->pd_dirty_rings & (1<<engine->id),
-	     "%s didn't clear reload\n", engine->name);
-
 	instp_mode = args->flags & I915_EXEC_CONSTANTS_MASK;
 	instp_mask = I915_EXEC_CONSTANTS_MASK;
 	switch (instp_mode) {
 	case I915_EXEC_CONSTANTS_REL_GENERAL:
 	case I915_EXEC_CONSTANTS_ABSOLUTE:
 	case I915_EXEC_CONSTANTS_REL_SURFACE:
-		if (instp_mode != 0 && engine != &dev_priv->engine[RCS]) {
+		if (instp_mode != 0 && params->engine->id != RCS) {
 			DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
 			return -EINVAL;
 		}
 
 		if (instp_mode != dev_priv->relative_constants_mode) {
-			if (INTEL_INFO(dev)->gen < 4) {
+			if (INTEL_INFO(dev_priv)->gen < 4) {
 				DRM_DEBUG("no rel constants on pre-gen4\n");
 				return -EINVAL;
 			}
 
-			if (INTEL_INFO(dev)->gen > 5 &&
+			if (INTEL_INFO(dev_priv)->gen > 5 &&
 			    instp_mode == I915_EXEC_CONSTANTS_REL_SURFACE) {
 				DRM_DEBUG("rel surface constants mode invalid on gen5+\n");
 				return -EINVAL;
 			}
 
 			/* The HW changed the meaning on this bit on gen6 */
-			if (INTEL_INFO(dev)->gen >= 6)
+			if (INTEL_INFO(dev_priv)->gen >= 6)
 				instp_mask &= ~I915_EXEC_CONSTANTS_REL_SURFACE;
 		}
 		break;
@@ -1275,23 +1268,25 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
 		return -EINVAL;
 	}
 
-	if (engine == &dev_priv->engine[RCS] &&
+	if (params->engine->id == RCS &&
 	    instp_mode != dev_priv->relative_constants_mode) {
+		struct intel_ringbuffer *ring = params->request->ringbuf;
+
 		ret = intel_ring_begin(params->request, 4);
 		if (ret)
 			return ret;
 
-		intel_ring_emit(engine, MI_NOOP);
-		intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(1));
-		intel_ring_emit_reg(engine, INSTPM);
-		intel_ring_emit(engine, instp_mask << 16 | instp_mode);
-		intel_ring_advance(engine);
+		intel_ring_emit(ring, MI_NOOP);
+		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
+		intel_ring_emit_reg(ring, INSTPM);
+		intel_ring_emit(ring, instp_mask << 16 | instp_mode);
+		intel_ring_advance(ring);
 
 		dev_priv->relative_constants_mode = instp_mode;
 	}
 
 	if (args->flags & I915_EXEC_GEN7_SOL_RESET) {
-		ret = i915_reset_gen7_sol_offsets(dev, params->request);
+		ret = i915_reset_gen7_sol_offsets(params->request);
 		if (ret)
 			return ret;
 	}
@@ -1303,9 +1298,9 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
 	if (exec_len == 0)
 		exec_len = params->batch_obj->base.size;
 
-	ret = engine->dispatch_execbuffer(params->request,
-					exec_start, exec_len,
-					params->dispatch_flags);
+	ret = params->engine->dispatch_execbuffer(params->request,
+						  exec_start, exec_len,
+						  params->dispatch_flags);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 5860fb73c0e3..f735d1ec189a 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -669,7 +669,7 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
 			  unsigned entry,
 			  dma_addr_t addr)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	BUG_ON(entry >= 4);
@@ -678,13 +678,13 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(1));
-	intel_ring_emit_reg(engine, GEN8_RING_PDP_UDW(engine, entry));
-	intel_ring_emit(engine, upper_32_bits(addr));
-	intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(1));
-	intel_ring_emit_reg(engine, GEN8_RING_PDP_LDW(engine, entry));
-	intel_ring_emit(engine, lower_32_bits(addr));
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
+	intel_ring_emit_reg(ring, GEN8_RING_PDP_UDW(req->engine, entry));
+	intel_ring_emit(ring, upper_32_bits(addr));
+	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
+	intel_ring_emit_reg(ring, GEN8_RING_PDP_LDW(req->engine, entry));
+	intel_ring_emit(ring, lower_32_bits(addr));
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -1660,11 +1660,13 @@ static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
 static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
 			 struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	/* NB: TLBs must be flushed and invalidated before a switch */
-	ret = engine->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+	ret = req->engine->flush(req,
+				 I915_GEM_GPU_DOMAINS,
+				 I915_GEM_GPU_DOMAINS);
 	if (ret)
 		return ret;
 
@@ -1672,13 +1674,13 @@ static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(2));
-	intel_ring_emit_reg(engine, RING_PP_DIR_DCLV(engine));
-	intel_ring_emit(engine, PP_DIR_DCLV_2G);
-	intel_ring_emit_reg(engine, RING_PP_DIR_BASE(engine));
-	intel_ring_emit(engine, get_pd_offset(ppgtt));
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(2));
+	intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(req->engine));
+	intel_ring_emit(ring, PP_DIR_DCLV_2G);
+	intel_ring_emit_reg(ring, RING_PP_DIR_BASE(req->engine));
+	intel_ring_emit(ring, get_pd_offset(ppgtt));
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -1697,11 +1699,13 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
 static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
 			  struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	/* NB: TLBs must be flushed and invalidated before a switch */
-	ret = engine->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+	ret = req->engine->flush(req,
+				 I915_GEM_GPU_DOMAINS,
+				 I915_GEM_GPU_DOMAINS);
 	if (ret)
 		return ret;
 
@@ -1709,17 +1713,19 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(2));
-	intel_ring_emit_reg(engine, RING_PP_DIR_DCLV(engine));
-	intel_ring_emit(engine, PP_DIR_DCLV_2G);
-	intel_ring_emit_reg(engine, RING_PP_DIR_BASE(engine));
-	intel_ring_emit(engine, get_pd_offset(ppgtt));
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(2));
+	intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(req->engine));
+	intel_ring_emit(ring, PP_DIR_DCLV_2G);
+	intel_ring_emit_reg(ring, RING_PP_DIR_BASE(req->engine));
+	intel_ring_emit(ring, get_pd_offset(ppgtt));
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	/* XXX: RCS is the only one to auto invalidate the TLBs? */
-	if (engine->id != RCS) {
-		ret = engine->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+	if (req->engine->id != RCS) {
+		ret = req->engine->flush(req,
+					 I915_GEM_GPU_DOMAINS,
+					 I915_GEM_GPU_DOMAINS);
 		if (ret)
 			return ret;
 	}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 849abb565d3d..2cba91207d7e 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11174,7 +11174,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	u32 flip_mask;
 	int ret;
@@ -11190,13 +11190,13 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
 		flip_mask = MI_WAIT_FOR_PLANE_B_FLIP;
 	else
 		flip_mask = MI_WAIT_FOR_PLANE_A_FLIP;
-	intel_ring_emit(engine, MI_WAIT_FOR_EVENT | flip_mask);
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_emit(engine, MI_DISPLAY_FLIP |
+	intel_ring_emit(ring, MI_WAIT_FOR_EVENT | flip_mask);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_emit(ring, MI_DISPLAY_FLIP |
 			MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
-	intel_ring_emit(engine, fb->pitches[0]);
-	intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset);
-	intel_ring_emit(engine, 0); /* aux display base address, unused */
+	intel_ring_emit(ring, fb->pitches[0]);
+	intel_ring_emit(ring, intel_crtc->flip_work->gtt_offset);
+	intel_ring_emit(ring, 0); /* aux display base address, unused */
 
 	return 0;
 }
@@ -11208,7 +11208,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	u32 flip_mask;
 	int ret;
@@ -11221,13 +11221,13 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
 		flip_mask = MI_WAIT_FOR_PLANE_B_FLIP;
 	else
 		flip_mask = MI_WAIT_FOR_PLANE_A_FLIP;
-	intel_ring_emit(engine, MI_WAIT_FOR_EVENT | flip_mask);
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_emit(engine, MI_DISPLAY_FLIP_I915 |
+	intel_ring_emit(ring, MI_WAIT_FOR_EVENT | flip_mask);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 |
 			MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
-	intel_ring_emit(engine, fb->pitches[0]);
-	intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset);
-	intel_ring_emit(engine, MI_NOOP);
+	intel_ring_emit(ring, fb->pitches[0]);
+	intel_ring_emit(ring, intel_crtc->flip_work->gtt_offset);
+	intel_ring_emit(ring, MI_NOOP);
 
 	return 0;
 }
@@ -11239,7 +11239,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	uint32_t pf, pipesrc;
@@ -11253,10 +11253,10 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
 	 * Display Registers (which do not change across a page-flip)
 	 * so we need only reprogram the base address.
 	 */
-	intel_ring_emit(engine, MI_DISPLAY_FLIP |
+	intel_ring_emit(ring, MI_DISPLAY_FLIP |
 			MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
-	intel_ring_emit(engine, fb->pitches[0]);
-	intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset |
+	intel_ring_emit(ring, fb->pitches[0]);
+	intel_ring_emit(ring, intel_crtc->flip_work->gtt_offset |
 			obj->tiling_mode);
 
 	/* XXX Enabling the panel-fitter across page-flip is so far
@@ -11265,7 +11265,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
 	 */
 	pf = 0;
 	pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
-	intel_ring_emit(engine, pf | pipesrc);
+	intel_ring_emit(ring, pf | pipesrc);
 
 	return 0;
 }
@@ -11277,7 +11277,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	uint32_t pf, pipesrc;
@@ -11287,10 +11287,10 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_DISPLAY_FLIP |
+	intel_ring_emit(ring, MI_DISPLAY_FLIP |
 			MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
-	intel_ring_emit(engine, fb->pitches[0] | obj->tiling_mode);
-	intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset);
+	intel_ring_emit(ring, fb->pitches[0] | obj->tiling_mode);
+	intel_ring_emit(ring, intel_crtc->flip_work->gtt_offset);
 
 	/* Contrary to the suggestions in the documentation,
 	 * "Enable Panel Fitter" does not seem to be required when page
@@ -11300,7 +11300,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
 	 */
 	pf = 0;
 	pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff;
-	intel_ring_emit(engine, pf | pipesrc);
+	intel_ring_emit(ring, pf | pipesrc);
 
 	return 0;
 }
@@ -11312,7 +11312,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	uint32_t plane_bit = 0;
 	int len, ret;
@@ -11333,7 +11333,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
 	}
 
 	len = 4;
-	if (engine->id == RCS) {
+	if (req->engine->id == RCS) {
 		len += 6;
 		/*
 		 * On Gen 8, SRM is now taking an extra dword to accommodate
@@ -11371,30 +11371,30 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
 	 * for the RCS also doesn't appear to drop events. Setting the DERRMR
 	 * to zero does lead to lockups within MI_DISPLAY_FLIP.
 	 */
-	if (engine->id == RCS) {
-		intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(1));
-		intel_ring_emit_reg(engine, DERRMR);
-		intel_ring_emit(engine, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
+	if (req->engine->id == RCS) {
+		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
+		intel_ring_emit_reg(ring, DERRMR);
+		intel_ring_emit(ring, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
 					  DERRMR_PIPEB_PRI_FLIP_DONE |
 					  DERRMR_PIPEC_PRI_FLIP_DONE));
 		if (IS_GEN8(dev))
-			intel_ring_emit(engine, MI_STORE_REGISTER_MEM_GEN8 |
+			intel_ring_emit(ring, MI_STORE_REGISTER_MEM_GEN8 |
 					      MI_SRM_LRM_GLOBAL_GTT);
 		else
-			intel_ring_emit(engine, MI_STORE_REGISTER_MEM |
+			intel_ring_emit(ring, MI_STORE_REGISTER_MEM |
 					      MI_SRM_LRM_GLOBAL_GTT);
-		intel_ring_emit_reg(engine, DERRMR);
-		intel_ring_emit(engine, engine->scratch.gtt_offset + 256);
+		intel_ring_emit_reg(ring, DERRMR);
+		intel_ring_emit(ring, req->engine->scratch.gtt_offset + 256);
 		if (IS_GEN8(dev)) {
-			intel_ring_emit(engine, 0);
-			intel_ring_emit(engine, MI_NOOP);
+			intel_ring_emit(ring, 0);
+			intel_ring_emit(ring, MI_NOOP);
 		}
 	}
 
-	intel_ring_emit(engine, MI_DISPLAY_FLIP_I915 | plane_bit);
-	intel_ring_emit(engine, (fb->pitches[0] | obj->tiling_mode));
-	intel_ring_emit(engine, intel_crtc->flip_work->gtt_offset);
-	intel_ring_emit(engine, (MI_NOOP));
+	intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 | plane_bit);
+	intel_ring_emit(ring, (fb->pitches[0] | obj->tiling_mode));
+	intel_ring_emit(ring, intel_crtc->flip_work->gtt_offset);
+	intel_ring_emit(ring, (MI_NOOP));
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 5ef81347055c..3076b63f2298 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -751,7 +751,7 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	struct intel_ringbuffer *ringbuf = request->ringbuf;
 	struct intel_engine_cs *engine = request->engine;
 
-	intel_logical_ring_advance(ringbuf);
+	intel_ring_advance(ringbuf);
 	request->tail = ringbuf->tail;
 
 	/*
@@ -760,9 +760,9 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	 *
 	 * Caller must reserve WA_TAIL_DWORDS for us!
 	 */
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
-	intel_logical_ring_advance(ringbuf);
+	intel_ring_emit(ringbuf, MI_NOOP);
+	intel_ring_emit(ringbuf, MI_NOOP);
+	intel_ring_advance(ringbuf);
 
 	/* We keep the previous context alive until we retire the following
 	 * request. This ensures that the context object is still pinned
@@ -852,11 +852,11 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
 		if (ret)
 			return ret;
 
-		intel_logical_ring_emit(ringbuf, MI_NOOP);
-		intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
-		intel_logical_ring_emit_reg(ringbuf, INSTPM);
-		intel_logical_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
-		intel_logical_ring_advance(ringbuf);
+		intel_ring_emit(ringbuf, MI_NOOP);
+		intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
+		intel_ring_emit_reg(ringbuf, INSTPM);
+		intel_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
+		intel_ring_advance(ringbuf);
 
 		dev_priv->relative_constants_mode = instp_mode;
 	}
@@ -1026,14 +1026,14 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
 	if (ret)
 		return ret;
 
-	intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(w->count));
+	intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(w->count));
 	for (i = 0; i < w->count; i++) {
-		intel_logical_ring_emit_reg(ringbuf, w->reg[i].addr);
-		intel_logical_ring_emit(ringbuf, w->reg[i].value);
+		intel_ring_emit_reg(ringbuf, w->reg[i].addr);
+		intel_ring_emit(ringbuf, w->reg[i].value);
 	}
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
+	intel_ring_emit(ringbuf, MI_NOOP);
 
-	intel_logical_ring_advance(ringbuf);
+	intel_ring_advance(ringbuf);
 
 	engine->gpu_caches_dirty = true;
 	ret = logical_ring_flush_all_caches(req);
@@ -1506,8 +1506,7 @@ static int gen9_init_render_ring(struct intel_engine_cs *engine)
 static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
 {
 	struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
-	struct intel_engine_cs *engine = req->engine;
-	struct intel_ringbuffer *ringbuf = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
 	int i, ret;
 
@@ -1515,20 +1514,18 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
 	if (ret)
 		return ret;
 
-	intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(num_lri_cmds));
+	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_lri_cmds));
 	for (i = GEN8_LEGACY_PDPES - 1; i >= 0; i--) {
 		const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);
 
-		intel_logical_ring_emit_reg(ringbuf,
-					    GEN8_RING_PDP_UDW(engine, i));
-		intel_logical_ring_emit(ringbuf, upper_32_bits(pd_daddr));
-		intel_logical_ring_emit_reg(ringbuf,
-					    GEN8_RING_PDP_LDW(engine, i));
-		intel_logical_ring_emit(ringbuf, lower_32_bits(pd_daddr));
+		intel_ring_emit_reg(ring, GEN8_RING_PDP_UDW(req->engine, i));
+		intel_ring_emit(ring, upper_32_bits(pd_daddr));
+		intel_ring_emit_reg(ring, GEN8_RING_PDP_LDW(req->engine, i));
+		intel_ring_emit(ring, lower_32_bits(pd_daddr));
 	}
 
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
-	intel_logical_ring_advance(ringbuf);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -1536,7 +1533,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
 static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
 			      u64 offset, unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ringbuf = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
 	int ret;
 
@@ -1563,14 +1560,14 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
 		return ret;
 
 	/* FIXME(BDW): Address space and security selectors. */
-	intel_logical_ring_emit(ringbuf, MI_BATCH_BUFFER_START_GEN8 |
-				(ppgtt<<8) |
-				(dispatch_flags & I915_DISPATCH_RS ?
-				 MI_BATCH_RESOURCE_STREAMER : 0));
-	intel_logical_ring_emit(ringbuf, lower_32_bits(offset));
-	intel_logical_ring_emit(ringbuf, upper_32_bits(offset));
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
-	intel_logical_ring_advance(ringbuf);
+	intel_ring_emit(ring, MI_BATCH_BUFFER_START_GEN8 |
+			(ppgtt<<8) |
+			(dispatch_flags & I915_DISPATCH_RS ?
+			 MI_BATCH_RESOURCE_STREAMER : 0));
+	intel_ring_emit(ring, lower_32_bits(offset));
+	intel_ring_emit(ring, upper_32_bits(offset));
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -1593,9 +1590,8 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
 			   u32 invalidate_domains,
 			   u32 unused)
 {
-	struct intel_ringbuffer *ringbuf = request->ringbuf;
-	struct intel_engine_cs *engine = ringbuf->engine;
-	struct drm_i915_private *dev_priv = request->i915;
+	struct intel_ringbuffer *ring = request->ringbuf;
+	struct intel_engine_cs *engine = ring->engine;
 	uint32_t cmd;
 	int ret;
 
@@ -1614,17 +1610,17 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
 
 	if (invalidate_domains & I915_GEM_GPU_DOMAINS) {
 		cmd |= MI_INVALIDATE_TLB;
-		if (engine == &dev_priv->engine[VCS])
+		if (engine->id == VCS)
 			cmd |= MI_INVALIDATE_BSD;
 	}
 
-	intel_logical_ring_emit(ringbuf, cmd);
-	intel_logical_ring_emit(ringbuf,
-				I915_GEM_HWS_SCRATCH_ADDR |
-				MI_FLUSH_DW_USE_GTT);
-	intel_logical_ring_emit(ringbuf, 0); /* upper addr */
-	intel_logical_ring_emit(ringbuf, 0); /* value */
-	intel_logical_ring_advance(ringbuf);
+	intel_ring_emit(ring, cmd);
+	intel_ring_emit(ring,
+			I915_GEM_HWS_SCRATCH_ADDR |
+			MI_FLUSH_DW_USE_GTT);
+	intel_ring_emit(ring, 0); /* upper addr */
+	intel_ring_emit(ring, 0); /* value */
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -1633,8 +1629,8 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
 				  u32 invalidate_domains,
 				  u32 flush_domains)
 {
-	struct intel_ringbuffer *ringbuf = request->ringbuf;
-	struct intel_engine_cs *engine = ringbuf->engine;
+	struct intel_ringbuffer *ring = request->ringbuf;
+	struct intel_engine_cs *engine = request->engine;
 	u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	bool vf_flush_wa = false;
 	u32 flags = 0;
@@ -1672,21 +1668,21 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
 		return ret;
 
 	if (vf_flush_wa) {
-		intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
-		intel_logical_ring_emit(ringbuf, 0);
-		intel_logical_ring_emit(ringbuf, 0);
-		intel_logical_ring_emit(ringbuf, 0);
-		intel_logical_ring_emit(ringbuf, 0);
-		intel_logical_ring_emit(ringbuf, 0);
+		intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
+		intel_ring_emit(ring, 0);
+		intel_ring_emit(ring, 0);
+		intel_ring_emit(ring, 0);
+		intel_ring_emit(ring, 0);
+		intel_ring_emit(ring, 0);
 	}
 
-	intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
-	intel_logical_ring_emit(ringbuf, flags);
-	intel_logical_ring_emit(ringbuf, scratch_addr);
-	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_advance(ringbuf);
+	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
+	intel_ring_emit(ring, flags);
+	intel_ring_emit(ring, scratch_addr);
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, 0);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -1715,7 +1711,7 @@ static void bxt_a_seqno_barrier(struct intel_engine_cs *engine)
 
 static int gen8_emit_request(struct drm_i915_gem_request *request)
 {
-	struct intel_ringbuffer *ringbuf = request->ringbuf;
+	struct intel_ringbuffer *ring = request->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
@@ -1725,21 +1721,20 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 	/* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
 	BUILD_BUG_ON(I915_GEM_HWS_INDEX_ADDR & (1 << 5));
 
-	intel_logical_ring_emit(ringbuf,
-				(MI_FLUSH_DW + 1) | MI_FLUSH_DW_OP_STOREDW);
-	intel_logical_ring_emit(ringbuf,
-				intel_hws_seqno_address(request->engine) |
-				MI_FLUSH_DW_USE_GTT);
-	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_emit(ringbuf, request->fence.seqno);
-	intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
+	intel_ring_emit(ring, (MI_FLUSH_DW + 1) | MI_FLUSH_DW_OP_STOREDW);
+	intel_ring_emit(ring,
+			intel_hws_seqno_address(request->engine) |
+			MI_FLUSH_DW_USE_GTT);
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, request->fence.seqno);
+	intel_ring_emit(ring, MI_USER_INTERRUPT);
+	intel_ring_emit(ring, MI_NOOP);
 	return intel_logical_ring_advance_and_submit(request);
 }
 
 static int gen8_emit_request_render(struct drm_i915_gem_request *request)
 {
-	struct intel_ringbuffer *ringbuf = request->ringbuf;
+	struct intel_ringbuffer *ring = request->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
@@ -1753,19 +1748,18 @@ static int gen8_emit_request_render(struct drm_i915_gem_request *request)
 	 * need a prior CS_STALL, which is emitted by the flush
 	 * following the batch.
 	 */
-	intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
-	intel_logical_ring_emit(ringbuf,
-				(PIPE_CONTROL_GLOBAL_GTT_IVB |
-				 PIPE_CONTROL_CS_STALL |
-				 PIPE_CONTROL_QW_WRITE));
-	intel_logical_ring_emit(ringbuf,
-				intel_hws_seqno_address(request->engine));
-	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_emit(ringbuf, i915_gem_request_get_seqno(request));
+	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
+	intel_ring_emit(ring,
+			(PIPE_CONTROL_GLOBAL_GTT_IVB |
+			 PIPE_CONTROL_CS_STALL |
+			 PIPE_CONTROL_QW_WRITE));
+	intel_ring_emit(ring, intel_hws_seqno_address(request->engine));
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, i915_gem_request_get_seqno(request));
 	/* We're thrashing one dword of HWS. */
-	intel_logical_ring_emit(ringbuf, 0);
-	intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, MI_USER_INTERRUPT);
+	intel_ring_emit(ring, MI_NOOP);
 	return intel_logical_ring_advance_and_submit(request);
 }
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index e99848067fb8..baf90543857a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -65,32 +65,6 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *engine);
 int intel_logical_rings_init(struct drm_device *dev);
 
 int logical_ring_flush_all_caches(struct drm_i915_gem_request *req);
-/**
- * intel_logical_ring_advance() - advance the ringbuffer tail
- * @ringbuf: Ringbuffer to advance.
- *
- * The tail is only updated in our logical ringbuffer struct.
- */
-static inline void intel_logical_ring_advance(struct intel_ringbuffer *ringbuf)
-{
-	__intel_ringbuffer_advance(ringbuf);
-}
-
-/**
- * intel_logical_ring_emit() - write a DWORD to the ringbuffer.
- * @ringbuf: Ringbuffer to write to.
- * @data: DWORD to write.
- */
-static inline void intel_logical_ring_emit(struct intel_ringbuffer *ringbuf,
-					   u32 data)
-{
-	__intel_ringbuffer_emit(ringbuf, data);
-}
-static inline void intel_logical_ring_emit_reg(struct intel_ringbuffer *ringbuf,
-					       i915_reg_t reg)
-{
-	intel_logical_ring_emit(ringbuf, i915_mmio_reg_offset(reg));
-}
 
 /* Logical Ring Contexts */
 
diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index b765c75f3fcd..8513bf06d4df 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -243,14 +243,11 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
 	if (ret)
 		return ret;
 
-	intel_logical_ring_emit(ringbuf,
-				MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
+	intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
 
 	for (index = 0; index < table->size; index++) {
-		intel_logical_ring_emit_reg(ringbuf,
-					    mocs_register(engine, index));
-		intel_logical_ring_emit(ringbuf,
-					table->table[index].control_value);
+		intel_ring_emit_reg(ringbuf, mocs_register(engine, index));
+		intel_ring_emit(ringbuf, table->table[index].control_value);
 	}
 
 	/*
@@ -262,14 +259,12 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
 	 * that value to all the used entries.
 	 */
 	for (; index < GEN9_NUM_MOCS_ENTRIES; index++) {
-		intel_logical_ring_emit_reg(ringbuf,
-					    mocs_register(engine, index));
-		intel_logical_ring_emit(ringbuf,
-					table->table[0].control_value);
+		intel_ring_emit_reg(ringbuf, mocs_register(engine, index));
+		intel_ring_emit(ringbuf, table->table[0].control_value);
 	}
 
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
-	intel_logical_ring_advance(ringbuf);
+	intel_ring_emit(ringbuf, MI_NOOP);
+	intel_ring_advance(ringbuf);
 
 	return 0;
 }
@@ -307,19 +302,18 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
 	if (ret)
 		return ret;
 
-	intel_logical_ring_emit(ringbuf,
+	intel_ring_emit(ringbuf,
 			MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES / 2));
 
 	for (i = 0; i < table->size/2; i++) {
-		intel_logical_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
-		intel_logical_ring_emit(ringbuf,
-					l3cc_combine(table, 2*i, 2*i+1));
+		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
+		intel_ring_emit(ringbuf, l3cc_combine(table, 2*i, 2*i+1));
 	}
 
 	if (table->size & 0x01) {
 		/* Odd table size - 1 left over */
-		intel_logical_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
-		intel_logical_ring_emit(ringbuf, l3cc_combine(table, 2*i, 0));
+		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
+		intel_ring_emit(ringbuf, l3cc_combine(table, 2*i, 0));
 		i++;
 	}
 
@@ -329,12 +323,12 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
 	 * they are reserved by the hardware.
 	 */
 	for (; i < GEN9_NUM_MOCS_ENTRIES / 2; i++) {
-		intel_logical_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
-		intel_logical_ring_emit(ringbuf, l3cc_combine(table, 0, 0));
+		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
+		intel_ring_emit(ringbuf, l3cc_combine(table, 0, 0));
 	}
 
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
-	intel_logical_ring_advance(ringbuf);
+	intel_ring_emit(ringbuf, MI_NOOP);
+	intel_ring_advance(ringbuf);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index 7f91f18ad29d..be79c4497af5 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -235,6 +235,7 @@ static int intel_overlay_on(struct intel_overlay *overlay)
 	struct drm_i915_private *dev_priv = overlay->i915;
 	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	struct drm_i915_gem_request *req;
+	struct intel_ringbuffer *ring;
 	int ret;
 
 	WARN_ON(overlay->active);
@@ -252,11 +253,12 @@ static int intel_overlay_on(struct intel_overlay *overlay)
 
 	overlay->active = true;
 
-	intel_ring_emit(engine, MI_OVERLAY_FLIP | MI_OVERLAY_ON);
-	intel_ring_emit(engine, overlay->flip_addr | OFC_UPDATE);
-	intel_ring_emit(engine, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	ring = req->ringbuf;
+	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_ON);
+	intel_ring_emit(ring, overlay->flip_addr | OFC_UPDATE);
+	intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return intel_overlay_do_wait_request(overlay, req, NULL);
 }
@@ -268,6 +270,7 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
 	struct drm_i915_private *dev_priv = overlay->i915;
 	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	struct drm_i915_gem_request *req;
+	struct intel_ringbuffer *ring;
 	u32 flip_addr = overlay->flip_addr;
 	u32 tmp;
 	int ret;
@@ -292,9 +295,10 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
 		return ret;
 	}
 
-	intel_ring_emit(engine, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
-	intel_ring_emit(engine, flip_addr);
-	intel_ring_advance(engine);
+	ring = req->ringbuf;
+	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
+	intel_ring_emit(ring, flip_addr);
+	intel_ring_advance(ring);
 
 	WARN_ON(overlay->last_flip_req);
 	i915_gem_request_assign(&overlay->last_flip_req, req);
@@ -336,6 +340,7 @@ static int intel_overlay_off(struct intel_overlay *overlay)
 	struct drm_i915_private *dev_priv = overlay->i915;
 	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	struct drm_i915_gem_request *req;
+	struct intel_ringbuffer *ring;
 	u32 flip_addr = overlay->flip_addr;
 	int ret;
 
@@ -357,24 +362,25 @@ static int intel_overlay_off(struct intel_overlay *overlay)
 		return ret;
 	}
 
+	ring = req->ringbuf;
 	/* wait for overlay to go idle */
-	intel_ring_emit(engine, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
-	intel_ring_emit(engine, flip_addr);
-	intel_ring_emit(engine, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
+	intel_ring_emit(ring, flip_addr);
+	intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
 	/* turn overlay off */
 	if (IS_I830(dev_priv)) {
 		/* Workaround: Don't disable the overlay fully, since otherwise
 		 * it dies on the next OVERLAY_ON cmd. */
-		intel_ring_emit(engine, MI_NOOP);
-		intel_ring_emit(engine, MI_NOOP);
-		intel_ring_emit(engine, MI_NOOP);
+		intel_ring_emit(ring, MI_NOOP);
+		intel_ring_emit(ring, MI_NOOP);
+		intel_ring_emit(ring, MI_NOOP);
 	} else {
-		intel_ring_emit(engine, MI_OVERLAY_FLIP | MI_OVERLAY_OFF);
-		intel_ring_emit(engine, flip_addr);
-		intel_ring_emit(engine,
+		intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_OFF);
+		intel_ring_emit(ring, flip_addr);
+		intel_ring_emit(ring,
 				MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
 	}
-	intel_ring_advance(engine);
+	intel_ring_advance(ring);
 
 	return intel_overlay_do_wait_request(overlay, req, intel_overlay_off_tail);
 }
@@ -420,6 +426,7 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 	if (I915_READ(ISR) & I915_OVERLAY_PLANE_FLIP_PENDING_INTERRUPT) {
 		/* synchronous slowpath */
 		struct drm_i915_gem_request *req;
+		struct intel_ringbuffer *ring;
 
 		req = i915_gem_request_alloc(engine, NULL);
 		if (IS_ERR(req))
@@ -431,10 +438,11 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 			return ret;
 		}
 
-		intel_ring_emit(engine,
+		ring = req->ringbuf;
+		intel_ring_emit(ring,
 				MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
-		intel_ring_emit(engine, MI_NOOP);
-		intel_ring_advance(engine);
+		intel_ring_emit(ring, MI_NOOP);
+		intel_ring_advance(ring);
 
 		ret = intel_overlay_do_wait_request(overlay, req,
 						    intel_overlay_release_old_vid_tail);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index c0a132a742cb..ace455b2b2d6 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -58,7 +58,7 @@ void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
 					    ringbuf->tail, ringbuf->size);
 }
 
-static void __intel_ring_advance(struct intel_engine_cs *engine)
+static void __intel_engine_submit(struct intel_engine_cs *engine)
 {
 	struct intel_ringbuffer *ringbuf = engine->buffer;
 	ringbuf->tail &= ringbuf->size - 1;
@@ -70,7 +70,7 @@ gen2_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32	invalidate_domains,
 		       u32	flush_domains)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	u32 cmd;
 	int ret;
 
@@ -85,9 +85,9 @@ gen2_render_ring_flush(struct drm_i915_gem_request *req,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, cmd);
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, cmd);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -97,7 +97,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32	invalidate_domains,
 		       u32	flush_domains)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	u32 cmd;
 	int ret;
 
@@ -129,23 +129,20 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
 	 * are flushed at any MI_FLUSH.
 	 */
 
-	cmd = MI_FLUSH | MI_NO_WRITE_FLUSH;
-	if ((invalidate_domains|flush_domains) & I915_GEM_DOMAIN_RENDER)
-		cmd &= ~MI_NO_WRITE_FLUSH;
-	if (invalidate_domains & I915_GEM_DOMAIN_INSTRUCTION)
+	cmd = MI_FLUSH;
+	if (invalidate_domains) {
 		cmd |= MI_EXE_FLUSH;
-
-	if (invalidate_domains & I915_GEM_DOMAIN_COMMAND &&
-	    (IS_G4X(req->i915) || IS_GEN5(req->i915)))
-		cmd |= MI_INVALIDATE_ISP;
+		if (IS_G4X(req->i915) || IS_GEN5(req->i915))
+			cmd |= MI_INVALIDATE_ISP;
+	}
 
 	ret = intel_ring_begin(req, 2);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, cmd);
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, cmd);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -190,34 +187,35 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
 static int
 intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = req->engine;
-	u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+	struct intel_ringbuffer *ring = req->ringbuf;
+	u32 scratch_addr =
+		req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	int ret;
 
 	ret = intel_ring_begin(req, 6);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, GFX_OP_PIPE_CONTROL(5));
-	intel_ring_emit(engine, PIPE_CONTROL_CS_STALL |
+	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(5));
+	intel_ring_emit(ring, PIPE_CONTROL_CS_STALL |
 			PIPE_CONTROL_STALL_AT_SCOREBOARD);
-	intel_ring_emit(engine, scratch_addr | PIPE_CONTROL_GLOBAL_GTT); /* address */
-	intel_ring_emit(engine, 0); /* low dword */
-	intel_ring_emit(engine, 0); /* high dword */
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, scratch_addr | PIPE_CONTROL_GLOBAL_GTT); /* address */
+	intel_ring_emit(ring, 0); /* low dword */
+	intel_ring_emit(ring, 0); /* high dword */
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	ret = intel_ring_begin(req, 6);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, GFX_OP_PIPE_CONTROL(5));
-	intel_ring_emit(engine, PIPE_CONTROL_QW_WRITE);
-	intel_ring_emit(engine, scratch_addr | PIPE_CONTROL_GLOBAL_GTT); /* address */
-	intel_ring_emit(engine, 0);
-	intel_ring_emit(engine, 0);
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(5));
+	intel_ring_emit(ring, PIPE_CONTROL_QW_WRITE);
+	intel_ring_emit(ring, scratch_addr | PIPE_CONTROL_GLOBAL_GTT); /* address */
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -226,9 +224,10 @@ static int
 gen6_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32 invalidate_domains, u32 flush_domains)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
+	u32 scratch_addr =
+		req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	u32 flags = 0;
-	u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	int ret;
 
 	/* Force SNB workarounds for PIPE_CONTROL flushes */
@@ -266,11 +265,11 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, GFX_OP_PIPE_CONTROL(4));
-	intel_ring_emit(engine, flags);
-	intel_ring_emit(engine, scratch_addr | PIPE_CONTROL_GLOBAL_GTT);
-	intel_ring_emit(engine, 0);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4));
+	intel_ring_emit(ring, flags);
+	intel_ring_emit(ring, scratch_addr | PIPE_CONTROL_GLOBAL_GTT);
+	intel_ring_emit(ring, 0);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -278,19 +277,20 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
 static int
 gen7_render_ring_cs_stall_wa(struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(req, 4);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, GFX_OP_PIPE_CONTROL(4));
-	intel_ring_emit(engine, PIPE_CONTROL_CS_STALL |
-			      PIPE_CONTROL_STALL_AT_SCOREBOARD);
-	intel_ring_emit(engine, 0);
-	intel_ring_emit(engine, 0);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4));
+	intel_ring_emit(ring,
+			PIPE_CONTROL_CS_STALL |
+			PIPE_CONTROL_STALL_AT_SCOREBOARD);
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, 0);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -299,9 +299,10 @@ static int
 gen7_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32 invalidate_domains, u32 flush_domains)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
+	u32 scratch_addr =
+		req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	u32 flags = 0;
-	u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	int ret;
 
 	/*
@@ -350,11 +351,11 @@ gen7_render_ring_flush(struct drm_i915_gem_request *req,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, GFX_OP_PIPE_CONTROL(4));
-	intel_ring_emit(engine, flags);
-	intel_ring_emit(engine, scratch_addr);
-	intel_ring_emit(engine, 0);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4));
+	intel_ring_emit(ring, flags);
+	intel_ring_emit(ring, scratch_addr);
+	intel_ring_emit(ring, 0);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -363,20 +364,20 @@ static int
 gen8_emit_pipe_control(struct drm_i915_gem_request *req,
 		       u32 flags, u32 scratch_addr)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(req, 6);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, GFX_OP_PIPE_CONTROL(6));
-	intel_ring_emit(engine, flags);
-	intel_ring_emit(engine, scratch_addr);
-	intel_ring_emit(engine, 0);
-	intel_ring_emit(engine, 0);
-	intel_ring_emit(engine, 0);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
+	intel_ring_emit(ring, flags);
+	intel_ring_emit(ring, scratch_addr);
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, 0);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -385,8 +386,8 @@ static int
 gen8_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32 invalidate_domains, u32 flush_domains)
 {
-	u32 flags = 0;
 	u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+	u32 flags = 0;
 	int ret;
 
 	flags |= PIPE_CONTROL_CS_STALL;
@@ -679,14 +680,14 @@ err:
 
 static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	struct i915_workarounds *w = &req->i915->workarounds;
 	int ret, i;
 
 	if (w->count == 0)
 		return 0;
 
-	engine->gpu_caches_dirty = true;
+	req->engine->gpu_caches_dirty = true;
 	ret = intel_ring_flush_all_caches(req);
 	if (ret)
 		return ret;
@@ -695,16 +696,16 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_LOAD_REGISTER_IMM(w->count));
+	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(w->count));
 	for (i = 0; i < w->count; i++) {
-		intel_ring_emit_reg(engine, w->reg[i].addr);
-		intel_ring_emit(engine, w->reg[i].value);
+		intel_ring_emit_reg(ring, w->reg[i].addr);
+		intel_ring_emit(ring, w->reg[i].value);
 	}
-	intel_ring_emit(engine, MI_NOOP);
+	intel_ring_emit(ring, MI_NOOP);
 
-	intel_ring_advance(engine);
+	intel_ring_advance(ring);
 
-	engine->gpu_caches_dirty = true;
+	req->engine->gpu_caches_dirty = true;
 	ret = intel_ring_flush_all_caches(req);
 	if (ret)
 		return ret;
@@ -1241,7 +1242,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
 			   unsigned int num_dwords)
 {
 #define MBOX_UPDATE_DWORDS 8
-	struct intel_engine_cs *signaller = signaller_req->engine;
+	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *waiter;
 	enum intel_engine_id id;
@@ -1256,7 +1257,8 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
 		return ret;
 
 	for_each_engine_id(waiter, dev_priv, id) {
-		u64 gtt_offset = signaller->semaphore.signal_ggtt[id];
+		u64 gtt_offset =
+			signaller_req->engine->semaphore.signal_ggtt[id];
 		if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
 			continue;
 
@@ -1280,7 +1282,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 			   unsigned int num_dwords)
 {
 #define MBOX_UPDATE_DWORDS 6
-	struct intel_engine_cs *signaller = signaller_req->engine;
+	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *waiter;
 	enum intel_engine_id id;
@@ -1295,7 +1297,8 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 		return ret;
 
 	for_each_engine_id(waiter, dev_priv, id) {
-		u64 gtt_offset = signaller->semaphore.signal_ggtt[id];
+		u64 gtt_offset =
+			signaller_req->engine->semaphore.signal_ggtt[id];
 		if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
 			continue;
 
@@ -1316,7 +1319,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 		       unsigned int num_dwords)
 {
-	struct intel_engine_cs *signaller = signaller_req->engine;
+	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *useless;
 	enum intel_engine_id id;
@@ -1332,7 +1335,8 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 		return ret;
 
 	for_each_engine_id(useless, dev_priv, id) {
-		i915_reg_t mbox_reg = signaller->semaphore.mbox.signal[id];
+		i915_reg_t mbox_reg =
+			signaller_req->engine->semaphore.mbox.signal[id];
 
 		if (i915_mmio_reg_valid(mbox_reg)) {
 			intel_ring_emit(signaller, MI_LOAD_REGISTER_IMM(1));
@@ -1359,23 +1363,22 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 static int
 gen6_add_request(struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
-	if (engine->semaphore.signal)
-		ret = engine->semaphore.signal(req, 4);
+	if (req->engine->semaphore.signal)
+		ret = req->engine->semaphore.signal(req, 4);
 	else
 		ret = intel_ring_begin(req, 4);
 
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_STORE_DWORD_INDEX);
-	intel_ring_emit(engine,
-			I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
-	intel_ring_emit(engine, req->fence.seqno);
-	intel_ring_emit(engine, MI_USER_INTERRUPT);
-	__intel_ring_advance(engine);
+	intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
+	intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
+	intel_ring_emit(ring, req->fence.seqno);
+	intel_ring_emit(ring, MI_USER_INTERRUPT);
+	__intel_engine_submit(req->engine);
 
 	return 0;
 }
@@ -1384,6 +1387,7 @@ static int
 gen8_render_add_request(struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	if (engine->semaphore.signal)
@@ -1393,18 +1397,18 @@ gen8_render_add_request(struct drm_i915_gem_request *req)
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, GFX_OP_PIPE_CONTROL(6));
-	intel_ring_emit(engine, (PIPE_CONTROL_GLOBAL_GTT_IVB |
-				 PIPE_CONTROL_CS_STALL |
-				 PIPE_CONTROL_QW_WRITE));
-	intel_ring_emit(engine, intel_hws_seqno_address(req->engine));
-	intel_ring_emit(engine, 0);
-	intel_ring_emit(engine, i915_gem_request_get_seqno(req));
+	intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
+	intel_ring_emit(ring, (PIPE_CONTROL_GLOBAL_GTT_IVB |
+			       PIPE_CONTROL_CS_STALL |
+			       PIPE_CONTROL_QW_WRITE));
+	intel_ring_emit(ring, intel_hws_seqno_address(engine));
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, i915_gem_request_get_seqno(req));
 	/* We're thrashing one dword of HWS. */
-	intel_ring_emit(engine, 0);
-	intel_ring_emit(engine, MI_USER_INTERRUPT);
-	intel_ring_emit(engine, MI_NOOP);
-	__intel_ring_advance(engine);
+	intel_ring_emit(ring, 0);
+	intel_ring_emit(ring, MI_USER_INTERRUPT);
+	intel_ring_emit(ring, MI_NOOP);
+	__intel_engine_submit(engine);
 
 	return 0;
 }
@@ -1428,7 +1432,7 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
 	       struct intel_engine_cs *signaller,
 	       u32 seqno)
 {
-	struct intel_engine_cs *waiter = waiter_req->engine;
+	struct intel_ringbuffer *waiter = waiter_req->ringbuf;
 	struct drm_i915_private *dev_priv = waiter_req->i915;
 	struct i915_hw_ppgtt *ppgtt;
 	int ret;
@@ -1442,9 +1446,11 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
 				MI_SEMAPHORE_SAD_GTE_SDD);
 	intel_ring_emit(waiter, seqno);
 	intel_ring_emit(waiter,
-			lower_32_bits(GEN8_WAIT_OFFSET(waiter, signaller->id)));
+			lower_32_bits(GEN8_WAIT_OFFSET(waiter_req->engine,
+						       signaller->id)));
 	intel_ring_emit(waiter,
-			upper_32_bits(GEN8_WAIT_OFFSET(waiter, signaller->id)));
+			upper_32_bits(GEN8_WAIT_OFFSET(waiter_req->engine,
+						       signaller->id)));
 	intel_ring_advance(waiter);
 
 	/* When the !RCS engines idle waiting upon a semaphore, they lose their
@@ -1463,11 +1469,11 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
 	       struct intel_engine_cs *signaller,
 	       u32 seqno)
 {
-	struct intel_engine_cs *waiter = waiter_req->engine;
+	struct intel_ringbuffer *waiter = waiter_req->ringbuf;
 	u32 dw1 = MI_SEMAPHORE_MBOX |
 		  MI_SEMAPHORE_COMPARE |
 		  MI_SEMAPHORE_REGISTER;
-	u32 wait_mbox = signaller->semaphore.mbox.wait[waiter->id];
+	u32 wait_mbox = signaller->semaphore.mbox.wait[waiter_req->engine->id];
 	int ret;
 
 	/* Throughout all of the GEM code, seqno passed implies our current
@@ -1597,35 +1603,34 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
 	       u32     invalidate_domains,
 	       u32     flush_domains)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_FLUSH);
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, MI_FLUSH);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 	return 0;
 }
 
 static int
 i9xx_add_request(struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(req, 4);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_STORE_DWORD_INDEX);
-	intel_ring_emit(engine,
-		       	I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
-	intel_ring_emit(engine, req->fence.seqno);
-	intel_ring_emit(engine, MI_USER_INTERRUPT);
-	__intel_ring_advance(engine);
+	intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
+	intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
+	intel_ring_emit(ring, req->fence.seqno);
+	intel_ring_emit(ring, MI_USER_INTERRUPT);
+	__intel_engine_submit(req->engine);
 
 	return 0;
 }
@@ -1692,20 +1697,20 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			 u64 offset, u32 length,
 			 unsigned dispatch_flags)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine,
+	intel_ring_emit(ring,
 			MI_BATCH_BUFFER_START |
 			MI_BATCH_GTT |
 			(dispatch_flags & I915_DISPATCH_SECURE ?
 			 0 : MI_BATCH_NON_SECURE_I965));
-	intel_ring_emit(engine, offset);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, offset);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -1719,8 +1724,8 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			 u64 offset, u32 len,
 			 unsigned dispatch_flags)
 {
-	struct intel_engine_cs *engine = req->engine;
-	u32 cs_offset = engine->scratch.gtt_offset;
+	struct intel_ringbuffer *ring = req->ringbuf;
+	u32 cs_offset = req->engine->scratch.gtt_offset;
 	int ret;
 
 	ret = intel_ring_begin(req, 6);
@@ -1728,13 +1733,13 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
 		return ret;
 
 	/* Evict the invalid PTE TLBs */
-	intel_ring_emit(engine, COLOR_BLT_CMD | BLT_WRITE_RGBA);
-	intel_ring_emit(engine, BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | 4096);
-	intel_ring_emit(engine, I830_TLB_ENTRIES << 16 | 4); /* load each page */
-	intel_ring_emit(engine, cs_offset);
-	intel_ring_emit(engine, 0xdeadbeef);
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, COLOR_BLT_CMD | BLT_WRITE_RGBA);
+	intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | 4096);
+	intel_ring_emit(ring, I830_TLB_ENTRIES << 16 | 4); /* load each page */
+	intel_ring_emit(ring, cs_offset);
+	intel_ring_emit(ring, 0xdeadbeef);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	if ((dispatch_flags & I915_DISPATCH_PINNED) == 0) {
 		if (len > I830_BATCH_LIMIT)
@@ -1748,17 +1753,17 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
 		 * stable batch scratch bo area (so that the CS never
 		 * stumbles over its tlb invalidation bug) ...
 		 */
-		intel_ring_emit(engine, SRC_COPY_BLT_CMD | BLT_WRITE_RGBA);
-		intel_ring_emit(engine,
+		intel_ring_emit(ring, SRC_COPY_BLT_CMD | BLT_WRITE_RGBA);
+		intel_ring_emit(ring,
 				BLT_DEPTH_32 | BLT_ROP_SRC_COPY | 4096);
-		intel_ring_emit(engine, DIV_ROUND_UP(len, 4096) << 16 | 4096);
-		intel_ring_emit(engine, cs_offset);
-		intel_ring_emit(engine, 4096);
-		intel_ring_emit(engine, offset);
+		intel_ring_emit(ring, DIV_ROUND_UP(len, 4096) << 16 | 4096);
+		intel_ring_emit(ring, cs_offset);
+		intel_ring_emit(ring, 4096);
+		intel_ring_emit(ring, offset);
 
-		intel_ring_emit(engine, MI_FLUSH);
-		intel_ring_emit(engine, MI_NOOP);
-		intel_ring_advance(engine);
+		intel_ring_emit(ring, MI_FLUSH);
+		intel_ring_emit(ring, MI_NOOP);
+		intel_ring_advance(ring);
 
 		/* ... and execute it. */
 		offset = cs_offset;
@@ -1768,10 +1773,10 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_BATCH_BUFFER_START | MI_BATCH_GTT);
-	intel_ring_emit(engine, offset | (dispatch_flags & I915_DISPATCH_SECURE ?
-					  0 : MI_BATCH_NON_SECURE));
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, MI_BATCH_BUFFER_START | MI_BATCH_GTT);
+	intel_ring_emit(ring, offset | (dispatch_flags & I915_DISPATCH_SECURE ?
+					0 : MI_BATCH_NON_SECURE));
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -1781,17 +1786,17 @@ i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			 u64 offset, u32 len,
 			 unsigned dispatch_flags)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine, MI_BATCH_BUFFER_START | MI_BATCH_GTT);
-	intel_ring_emit(engine, offset | (dispatch_flags & I915_DISPATCH_SECURE ?
-					  0 : MI_BATCH_NON_SECURE));
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, MI_BATCH_BUFFER_START | MI_BATCH_GTT);
+	intel_ring_emit(ring, offset | (dispatch_flags & I915_DISPATCH_SECURE ?
+					0 : MI_BATCH_NON_SECURE));
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -2330,8 +2335,9 @@ int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
 /* Align the ring tail to a cacheline boundary */
 int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = req->engine;
-	int num_dwords = (engine->buffer->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
+	struct intel_ringbuffer *ring = req->ringbuf;
+	int num_dwords =
+	       	(ring->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
 	int ret;
 
 	if (num_dwords == 0)
@@ -2343,9 +2349,9 @@ int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
 		return ret;
 
 	while (num_dwords--)
-		intel_ring_emit(engine, MI_NOOP);
+		intel_ring_emit(ring, MI_NOOP);
 
-	intel_ring_advance(engine);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -2423,7 +2429,7 @@ static void gen6_bsd_ring_write_tail(struct intel_engine_cs *engine,
 static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
 			       u32 invalidate, u32 flush)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	uint32_t cmd;
 	int ret;
 
@@ -2451,17 +2457,16 @@ static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
 	if (invalidate & I915_GEM_GPU_DOMAINS)
 		cmd |= MI_INVALIDATE_TLB | MI_INVALIDATE_BSD;
 
-	intel_ring_emit(engine, cmd);
-	intel_ring_emit(engine,
-			I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
+	intel_ring_emit(ring, cmd);
+	intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
 	if (INTEL_GEN(req->i915) >= 8) {
-		intel_ring_emit(engine, 0); /* upper addr */
-		intel_ring_emit(engine, 0); /* value */
+		intel_ring_emit(ring, 0); /* upper addr */
+		intel_ring_emit(ring, 0); /* value */
 	} else  {
-		intel_ring_emit(engine, 0);
-		intel_ring_emit(engine, MI_NOOP);
+		intel_ring_emit(ring, 0);
+		intel_ring_emit(ring, MI_NOOP);
 	}
-	intel_ring_advance(engine);
+	intel_ring_advance(ring);
 	return 0;
 }
 
@@ -2470,8 +2475,8 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			      u64 offset, u32 len,
 			      unsigned dispatch_flags)
 {
-	struct intel_engine_cs *engine = req->engine;
-	bool ppgtt = USES_PPGTT(engine->dev) &&
+	struct intel_ringbuffer *ring = req->ringbuf;
+	bool ppgtt = USES_PPGTT(req->i915) &&
 			!(dispatch_flags & I915_DISPATCH_SECURE);
 	int ret;
 
@@ -2480,13 +2485,13 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 		return ret;
 
 	/* FIXME(BDW): Address space and security selectors. */
-	intel_ring_emit(engine, MI_BATCH_BUFFER_START_GEN8 | (ppgtt<<8) |
+	intel_ring_emit(ring, MI_BATCH_BUFFER_START_GEN8 | (ppgtt<<8) |
 			(dispatch_flags & I915_DISPATCH_RS ?
 			 MI_BATCH_RESOURCE_STREAMER : 0));
-	intel_ring_emit(engine, lower_32_bits(offset));
-	intel_ring_emit(engine, upper_32_bits(offset));
-	intel_ring_emit(engine, MI_NOOP);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, lower_32_bits(offset));
+	intel_ring_emit(ring, upper_32_bits(offset));
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -2496,22 +2501,22 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			     u64 offset, u32 len,
 			     unsigned dispatch_flags)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine,
+	intel_ring_emit(ring,
 			MI_BATCH_BUFFER_START |
 			(dispatch_flags & I915_DISPATCH_SECURE ?
 			 0 : MI_BATCH_PPGTT_HSW | MI_BATCH_NON_SECURE_HSW) |
 			(dispatch_flags & I915_DISPATCH_RS ?
 			 MI_BATCH_RESOURCE_STREAMER : 0));
 	/* bit0-7 is the length on GEN6+ */
-	intel_ring_emit(engine, offset);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, offset);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -2521,20 +2526,20 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			      u64 offset, u32 len,
 			      unsigned dispatch_flags)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(engine,
+	intel_ring_emit(ring,
 			MI_BATCH_BUFFER_START |
 			(dispatch_flags & I915_DISPATCH_SECURE ?
 			 0 : MI_BATCH_NON_SECURE_I965));
 	/* bit0-7 is the length on GEN6+ */
-	intel_ring_emit(engine, offset);
-	intel_ring_advance(engine);
+	intel_ring_emit(ring, offset);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -2544,7 +2549,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 static int gen6_ring_flush(struct drm_i915_gem_request *req,
 			   u32 invalidate, u32 flush)
 {
-	struct intel_engine_cs *engine = req->engine;
+	struct intel_ringbuffer *ring = req->ringbuf;
 	uint32_t cmd;
 	int ret;
 
@@ -2571,17 +2576,17 @@ static int gen6_ring_flush(struct drm_i915_gem_request *req,
 	 */
 	if (invalidate & I915_GEM_DOMAIN_RENDER)
 		cmd |= MI_INVALIDATE_TLB;
-	intel_ring_emit(engine, cmd);
-	intel_ring_emit(engine,
+	intel_ring_emit(ring, cmd);
+	intel_ring_emit(ring,
 			I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
 	if (INTEL_GEN(req->i915) >= 8) {
-		intel_ring_emit(engine, 0); /* upper addr */
-		intel_ring_emit(engine, 0); /* value */
+		intel_ring_emit(ring, 0); /* upper addr */
+		intel_ring_emit(ring, 0); /* value */
 	} else  {
-		intel_ring_emit(engine, 0);
-		intel_ring_emit(engine, MI_NOOP);
+		intel_ring_emit(ring, 0);
+		intel_ring_emit(ring, MI_NOOP);
 	}
-	intel_ring_advance(engine);
+	intel_ring_advance(ring);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 3cbcdd5751ad..3a4ed97b563f 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -462,28 +462,19 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request);
 
 int __must_check intel_ring_begin(struct drm_i915_gem_request *req, int n);
 int __must_check intel_ring_cacheline_align(struct drm_i915_gem_request *req);
-static inline void __intel_ringbuffer_emit(struct intel_ringbuffer *rb,
-					   u32 data)
+static inline void intel_ring_emit(struct intel_ringbuffer *ring, u32 data)
 {
-	*(uint32_t *)(rb->vaddr + rb->tail) = data;
-	rb->tail += 4;
+	*(uint32_t *)(ring->vaddr + ring->tail) = data;
+	ring->tail += 4;
 }
-static inline void __intel_ringbuffer_advance(struct intel_ringbuffer *rb)
-{
-	rb->tail &= rb->size - 1;
-}
-static inline void intel_ring_emit(struct intel_engine_cs *engine, u32 data)
-{
-	__intel_ringbuffer_emit(engine->buffer, data);
-}
-static inline void intel_ring_emit_reg(struct intel_engine_cs *engine,
+static inline void intel_ring_emit_reg(struct intel_ringbuffer *ring,
 				       i915_reg_t reg)
 {
-	intel_ring_emit(engine, i915_mmio_reg_offset(reg));
+	intel_ring_emit(ring, i915_mmio_reg_offset(reg));
 }
-static inline void intel_ring_advance(struct intel_engine_cs *engine)
+static inline void intel_ring_advance(struct intel_ringbuffer *ring)
 {
-	__intel_ringbuffer_advance(engine->buffer);
+	ring->tail &= ring->size - 1;
 }
 int __intel_ring_space(int head, int tail, int size);
 void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
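
As an aside (editor's sketch, not part of the patch): with the helpers
above, callers now pass the ringbuffer rather than the engine. A minimal
usage sequence, adapted from the bsd_ring_flush() hunk earlier in this
mail:

	ret = intel_ring_begin(req, 2);
	if (ret)
		return ret;

	intel_ring_emit(req->ringbuf, MI_FLUSH);  /* write one dword at tail */
	intel_ring_emit(req->ringbuf, MI_NOOP);
	intel_ring_advance(req->ringbuf);         /* wrap tail via the mask */

Note that intel_ring_advance() can wrap with a plain mask only because
the ringbuffer size is a power of two.
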
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 26/62] drm/i915: Rename request->ring to request->engine
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (24 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 25/62] drm/i915: Unify intel_logical_ring_emit and intel_ring_emit Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-06 13:42   ` Tvrtko Ursulin
  2016-06-03 16:36 ` [PATCH 27/62] drm/i915: Rename request->ringbuf to request->ring Chris Wilson
                   ` (37 subsequent siblings)
  63 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

To disambiguate between the pointer to the intel_engine_cs (called
ring) and the intel_ringbuffer (called ringbuf), rename s/ring/engine/
throughout.
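
For illustration, an editor's sketch (not part of the patch) of the
disambiguated request, with neighbouring members elided:

	struct drm_i915_gem_request {
		...
		struct intel_engine_cs *engine;    /* the hardware engine, was "ring" */
		struct intel_ringbuffer *ringbuf;  /* its command buffer */
		...
	};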

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c          |  3 +--
 drivers/gpu/drm/i915/i915_gem.c              |  6 ++----
 drivers/gpu/drm/i915/i915_gem_context.c      |  6 ++----
 drivers/gpu/drm/i915/i915_gem_gtt.c          |  5 ++---
 drivers/gpu/drm/i915/i915_gem_render_state.c | 12 ++++++------
 drivers/gpu/drm/i915/i915_gem_request.c      |  6 +-----
 drivers/gpu/drm/i915/i915_gpu_error.c        |  3 +--
 drivers/gpu/drm/i915/i915_guc_submission.c   |  4 ++--
 drivers/gpu/drm/i915/intel_lrc.c             |  6 +++---
 9 files changed, 20 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index c1f8b5126d16..34e41ae2943e 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -193,8 +193,7 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		seq_printf(m, " (%s mappable)", s);
 	}
 	if (obj->last_write_req != NULL)
-		seq_printf(m, " (%s)",
-			   i915_gem_request_get_engine(obj->last_write_req)->name);
+		seq_printf(m, " (%s)", obj->last_write_req->engine->name);
 	if (obj->frontbuffer_bits)
 		seq_printf(m, " (frontbuffer: 0x%03x)", obj->frontbuffer_bits);
 }
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 22c8361748d6..8edd79ad08b4 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2101,9 +2101,7 @@ void i915_vma_move_to_active(struct i915_vma *vma,
 			     struct drm_i915_gem_request *req)
 {
 	struct drm_i915_gem_object *obj = vma->obj;
-	struct intel_engine_cs *engine;
-
-	engine = i915_gem_request_get_engine(req);
+	struct intel_engine_cs *engine = req->engine;
 
 	/* Add a reference if we're newly entering the active list. */
 	if (obj->active == 0)
@@ -2561,7 +2559,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
 	struct intel_engine_cs *from;
 	int ret;
 
-	from = i915_gem_request_get_engine(from_req);
+	from = from_req->engine;
 	if (to == from)
 		return 0;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 41e32426d174..899731f9a2c4 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -555,8 +555,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 		if (num_rings) {
 			struct intel_engine_cs *signaller;
 
-			intel_ring_emit(ring,
-					MI_LOAD_REGISTER_IMM(num_rings));
+			intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
 			for_each_engine(signaller, dev_priv) {
 				if (signaller == req->engine)
 					continue;
@@ -585,8 +584,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 			struct intel_engine_cs *signaller;
 			i915_reg_t last_reg = {}; /* keep gcc quiet */
 
-			intel_ring_emit(ring,
-					MI_LOAD_REGISTER_IMM(num_rings));
+			intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
 			for_each_engine(signaller, dev_priv) {
 				if (signaller == req->engine)
 					continue;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index f735d1ec189a..4b4e3de58ad9 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1689,7 +1689,7 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
 			  struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
-	struct drm_i915_private *dev_priv = to_i915(ppgtt->base.dev);
+	struct drm_i915_private *dev_priv = req->i915;
 
 	I915_WRITE(RING_PP_DIR_DCLV(engine), PP_DIR_DCLV_2G);
 	I915_WRITE(RING_PP_DIR_BASE(engine), get_pd_offset(ppgtt));
@@ -1737,8 +1737,7 @@ static int gen6_mm_switch(struct i915_hw_ppgtt *ppgtt,
 			  struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
-	struct drm_device *dev = ppgtt->base.dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_private *dev_priv = req->i915;
 
 
 	I915_WRITE(RING_PP_DIR_DCLV(engine), PP_DIR_DCLV_2G);
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index 99eff898b4cb..41eb9a91bfee 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -207,17 +207,17 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
 		return 0;
 
 	ret = req->engine->dispatch_execbuffer(req, so.ggtt_offset,
-					     so.rodata->batch_items * 4,
-					     I915_DISPATCH_SECURE);
+					       so.rodata->batch_items * 4,
+					       I915_DISPATCH_SECURE);
 	if (ret)
 		goto out;
 
 	if (so.aux_batch_size > 8) {
 		ret = req->engine->dispatch_execbuffer(req,
-						     (so.ggtt_offset +
-						      so.aux_batch_offset),
-						     so.aux_batch_size,
-						     I915_DISPATCH_SECURE);
+						       (so.ggtt_offset +
+							so.aux_batch_offset),
+						       so.aux_batch_size,
+						       I915_DISPATCH_SECURE);
 		if (ret)
 			goto out;
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index ba745f0740d0..059ba88e182e 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -299,7 +299,6 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
 int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
 				   struct drm_file *file)
 {
-	struct drm_i915_private *dev_private;
 	struct drm_i915_file_private *file_priv;
 
 	WARN_ON(!req || !file || req->file_priv);
@@ -310,7 +309,6 @@ int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
 	if (req->file_priv)
 		return -EINVAL;
 
-	dev_private = req->i915;
 	file_priv = file->driver_priv;
 
 	spin_lock(&file_priv->mm.lock);
@@ -417,7 +415,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 			bool flush_caches)
 {
 	struct intel_engine_cs *engine;
-	struct drm_i915_private *dev_priv;
 	struct intel_ringbuffer *ringbuf;
 	u32 request_start;
 	u32 reserved_tail;
@@ -427,7 +424,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 		return;
 
 	engine = request->engine;
-	dev_priv = request->i915;
 	ringbuf = request->ringbuf;
 
 	/*
@@ -502,7 +498,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 		  "for adding the request (%d bytes)\n",
 		  reserved_tail, ret);
 
-	i915_gem_mark_busy(dev_priv, engine);
+	i915_gem_mark_busy(request->i915, engine);
 }
 
 static unsigned long local_clock_us(unsigned *cpu)
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index a8082b8a9797..d1667aa640ef 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -762,8 +762,7 @@ static void capture_bo(struct drm_i915_error_buffer *err,
 	err->dirty = obj->dirty;
 	err->purgeable = obj->madv != I915_MADV_WILLNEED;
 	err->userptr = obj->userptr.mm != NULL;
-	err->ring = obj->last_write_req ?
-			i915_gem_request_get_engine(obj->last_write_req)->id : -1;
+	err->ring = obj->last_write_req ? obj->last_write_req->engine->id : -1;
 	err->cache_level = obj->cache_level;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 4cec580784ea..337b8f60989c 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -534,8 +534,8 @@ static void guc_add_workqueue_item(struct i915_guc_client *gc,
 			WQ_NO_WCFLUSH_WAIT;
 
 	/* The GuC wants only the low-order word of the context descriptor */
-	wqi->context_desc = (u32)intel_lr_context_descriptor(rq->ctx,
-							     rq->engine);
+	wqi->context_desc =
+		(u32)intel_lr_context_descriptor(rq->ctx, rq->engine);
 
 	wqi->ring_tail = tail << WQ_RING_TAIL_SHIFT;
 	wqi->fence_id = rq->fence.seqno;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3076b63f2298..a1820d531e49 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1776,13 +1776,13 @@ static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)
 		return 0;
 
 	ret = req->engine->emit_bb_start(req, so.ggtt_offset,
-				       I915_DISPATCH_SECURE);
+					 I915_DISPATCH_SECURE);
 	if (ret)
 		goto out;
 
 	ret = req->engine->emit_bb_start(req,
-				       (so.ggtt_offset + so.aux_batch_offset),
-				       I915_DISPATCH_SECURE);
+					 (so.ggtt_offset + so.aux_batch_offset),
+					 I915_DISPATCH_SECURE);
 	if (ret)
 		goto out;
 
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 27/62] drm/i915: Rename request->ringbuf to request->ring
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (25 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 26/62] drm/i915: Rename request->ring to request->engine Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-06 13:44   ` Tvrtko Ursulin
  2016-06-03 16:36 ` [PATCH 28/62] drm/i915: Rename backpointer from intel_ringbuffer to intel_engine_cs Chris Wilson
                   ` (36 subsequent siblings)
  63 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Now that we have disambiguated ring and engine, we can use the clearer
and more consistent name for the intel_ringbuffer pointer in the
request.
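
An editor's illustration (not part of the patch) of how the two request
backpointers read after this rename:

	struct intel_engine_cs *engine = req->engine;  /* the hardware engine */
	struct intel_ringbuffer *ring = req->ring;     /* its command buffer, was req->ringbuf */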

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_context.c    |  4 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  4 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c        |  6 +-
 drivers/gpu/drm/i915/i915_gem_request.c    | 16 +++---
 drivers/gpu/drm/i915/i915_gem_request.h    |  3 +-
 drivers/gpu/drm/i915/i915_gpu_error.c      | 20 +++----
 drivers/gpu/drm/i915/intel_display.c       | 10 ++--
 drivers/gpu/drm/i915/intel_lrc.c           | 57 +++++++++---------
 drivers/gpu/drm/i915/intel_mocs.c          | 36 ++++++------
 drivers/gpu/drm/i915/intel_overlay.c       |  8 +--
 drivers/gpu/drm/i915/intel_ringbuffer.c    | 92 +++++++++++++++---------------
 11 files changed, 126 insertions(+), 130 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 899731f9a2c4..a7911f39f416 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -514,7 +514,7 @@ static inline int
 mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 {
 	struct drm_i915_private *dev_priv = req->i915;
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	u32 flags = hw_flags | MI_MM_SPACE_GTT;
 	const int num_rings =
 		/* Use an extended w/a on ivb+ if signalling from other rings */
@@ -614,7 +614,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 static int remap_l3(struct drm_i915_gem_request *req, int slice)
 {
 	u32 *remap_info = req->i915->l3_parity.remap_info[slice];
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int i, ret;
 
 	if (!remap_info)
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 99663e8429b3..246bd70c0c9f 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1140,7 +1140,7 @@ i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
 static int
 i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret, i;
 
 	if (!IS_GEN7(req->i915) || req->engine->id != RCS) {
@@ -1270,7 +1270,7 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
 
 	if (params->engine->id == RCS &&
 	    instp_mode != dev_priv->relative_constants_mode) {
-		struct intel_ringbuffer *ring = params->request->ringbuf;
+		struct intel_ringbuffer *ring = params->request->ring;
 
 		ret = intel_ring_begin(params->request, 4);
 		if (ret)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 4b4e3de58ad9..b0a644cede20 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -669,7 +669,7 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
 			  unsigned entry,
 			  dma_addr_t addr)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	BUG_ON(entry >= 4);
@@ -1660,7 +1660,7 @@ static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
 static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
 			 struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	/* NB: TLBs must be flushed and invalidated before a switch */
@@ -1699,7 +1699,7 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
 static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
 			  struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	/* NB: TLBs must be flushed and invalidated before a switch */
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 059ba88e182e..c6a7a7984f1f 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -351,7 +351,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
 	 * Note this requires that we are always called in request
 	 * completion order.
 	 */
-	request->ringbuf->last_retired_head = request->postfix;
+	request->ring->last_retired_head = request->postfix;
 
 	i915_gem_request_remove_from_client(request);
 
@@ -415,7 +415,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 			bool flush_caches)
 {
 	struct intel_engine_cs *engine;
-	struct intel_ringbuffer *ringbuf;
+	struct intel_ringbuffer *ring;
 	u32 request_start;
 	u32 reserved_tail;
 	int ret;
@@ -424,14 +424,14 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 		return;
 
 	engine = request->engine;
-	ringbuf = request->ringbuf;
+	ring = request->ring;
 
 	/*
 	 * To ensure that this call will not fail, space for its emissions
 	 * should already have been reserved in the ring buffer. Let the ring
 	 * know that it is time to use that space up.
 	 */
-	request_start = intel_ring_get_tail(ringbuf);
+	request_start = intel_ring_get_tail(ring);
 	reserved_tail = request->reserved_space;
 	request->reserved_space = 0;
 
@@ -478,21 +478,21 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 	 * GPU processing the request, we never over-estimate the
 	 * position of the head.
 	 */
-	request->postfix = intel_ring_get_tail(ringbuf);
+	request->postfix = intel_ring_get_tail(ring);
 
 	if (i915.enable_execlists)
 		ret = engine->emit_request(request);
 	else {
 		ret = engine->add_request(request);
 
-		request->tail = intel_ring_get_tail(ringbuf);
+		request->tail = intel_ring_get_tail(ring);
 	}
 	/* Not allowed to fail! */
 	WARN(ret, "emit|add_request failed: %d!\n", ret);
 	/* Sanity check that the reserved size was large enough. */
-	ret = intel_ring_get_tail(ringbuf) - request_start;
+	ret = intel_ring_get_tail(ring) - request_start;
 	if (ret < 0)
-		ret += ringbuf->size;
+		ret += ring->size;
 	WARN_ONCE(ret > reserved_tail,
 		  "Not enough space reserved (%d bytes) "
 		  "for adding the request (%d bytes)\n",
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index a3cac13ab9af..913565fbb0e3 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -59,7 +59,7 @@ struct drm_i915_gem_request {
 	 */
 	struct i915_gem_context *ctx;
 	struct intel_engine_cs *engine;
-	struct intel_ringbuffer *ringbuf;
+	struct intel_ringbuffer *ring;
 	struct intel_signal_node signaling;
 
 	unsigned reset_counter;
@@ -86,7 +86,6 @@ struct drm_i915_gem_request {
 	/** Preallocate space in the ringbuffer for the emitting the request */
 	u32 reserved_space;
 
-
 	/**
 	 * Context related to the previous request.
 	 * As the contexts are accessed by the hardware until the switch is
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index d1667aa640ef..b934986bb117 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1089,7 +1089,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 		request = i915_gem_find_active_request(engine);
 		if (request) {
 			struct i915_address_space *vm;
-			struct intel_ringbuffer *rb;
+			struct intel_ringbuffer *ring;
 
 			vm = request->ctx && request->ctx->ppgtt ?
 				&request->ctx->ppgtt->base :
@@ -1107,7 +1107,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 			if (HAS_BROKEN_CS_TLB(dev_priv))
 				error->ring[i].wa_batchbuffer =
 					i915_error_ggtt_object_create(dev_priv,
-							     engine->scratch.obj);
+								      engine->scratch.obj);
 
 			if (request->pid) {
 				struct task_struct *task;
@@ -1123,23 +1123,21 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 
 			error->simulated |= request->ctx->flags & CONTEXT_NO_ERROR_CAPTURE;
 
-			rb = request->ringbuf;
-			error->ring[i].cpu_ring_head = rb->head;
-			error->ring[i].cpu_ring_tail = rb->tail;
+			ring = request->ring;
+			error->ring[i].cpu_ring_head = ring->head;
+			error->ring[i].cpu_ring_tail = ring->tail;
 			error->ring[i].ringbuffer =
 				i915_error_ggtt_object_create(dev_priv,
-							      rb->obj);
+							      ring->obj);
 		}
 
 		error->ring[i].hws_page =
 			i915_error_ggtt_object_create(dev_priv,
 						      engine->status_page.obj);
 
-		if (engine->wa_ctx.obj) {
-			error->ring[i].wa_ctx =
-				i915_error_ggtt_object_create(dev_priv,
-							      engine->wa_ctx.obj);
-		}
+		error->ring[i].wa_ctx =
+			i915_error_ggtt_object_create(dev_priv,
+						      engine->wa_ctx.obj);
 
 		i915_gem_record_active_context(engine, error, &error->ring[i]);
 
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 2cba91207d7e..2dafbfbc8134 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11174,7 +11174,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	u32 flip_mask;
 	int ret;
@@ -11208,7 +11208,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	u32 flip_mask;
 	int ret;
@@ -11239,7 +11239,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	uint32_t pf, pipesrc;
@@ -11277,7 +11277,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	uint32_t pf, pipesrc;
@@ -11312,7 +11312,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	uint32_t plane_bit = 0;
 	int len, ret;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index a1820d531e49..229545fc5b4a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -692,7 +692,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
 			return ret;
 	}
 
-	request->ringbuf = ce->ringbuf;
+	request->ring = ce->ringbuf;
 
 	if (i915.enable_guc_submission) {
 		/*
@@ -748,11 +748,11 @@ err_unpin:
 static int
 intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 {
-	struct intel_ringbuffer *ringbuf = request->ringbuf;
+	struct intel_ringbuffer *ring = request->ring;
 	struct intel_engine_cs *engine = request->engine;
 
-	intel_ring_advance(ringbuf);
-	request->tail = ringbuf->tail;
+	intel_ring_advance(ring);
+	request->tail = ring->tail;
 
 	/*
 	 * Here we add two extra NOOPs as padding to avoid
@@ -760,9 +760,9 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	 *
 	 * Caller must reserve WA_TAIL_DWORDS for us!
 	 */
-	intel_ring_emit(ringbuf, MI_NOOP);
-	intel_ring_emit(ringbuf, MI_NOOP);
-	intel_ring_advance(ringbuf);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	/* We keep the previous context alive until we retire the following
 	 * request. This ensures that the context object is still pinned
@@ -805,7 +805,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
 	struct drm_device       *dev = params->dev;
 	struct intel_engine_cs *engine = params->engine;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_ringbuffer *ringbuf = params->ctx->engine[engine->id].ringbuf;
+	struct intel_ringbuffer *ring = params->request->ring;
 	u64 exec_start;
 	int instp_mode;
 	u32 instp_mask;
@@ -817,7 +817,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
 	case I915_EXEC_CONSTANTS_REL_GENERAL:
 	case I915_EXEC_CONSTANTS_ABSOLUTE:
 	case I915_EXEC_CONSTANTS_REL_SURFACE:
-		if (instp_mode != 0 && engine != &dev_priv->engine[RCS]) {
+		if (instp_mode != 0 && engine->id != RCS) {
 			DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
 			return -EINVAL;
 		}
@@ -846,17 +846,17 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
 	if (ret)
 		return ret;
 
-	if (engine == &dev_priv->engine[RCS] &&
+	if (engine->id == RCS &&
 	    instp_mode != dev_priv->relative_constants_mode) {
 		ret = intel_ring_begin(params->request, 4);
 		if (ret)
 			return ret;
 
-		intel_ring_emit(ringbuf, MI_NOOP);
-		intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
-		intel_ring_emit_reg(ringbuf, INSTPM);
-		intel_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
-		intel_ring_advance(ringbuf);
+		intel_ring_emit(ring, MI_NOOP);
+		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
+		intel_ring_emit_reg(ring, INSTPM);
+		intel_ring_emit(ring, instp_mask << 16 | instp_mode);
+		intel_ring_advance(ring);
 
 		dev_priv->relative_constants_mode = instp_mode;
 	}
@@ -1011,7 +1011,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
 {
 	int ret, i;
 	struct intel_engine_cs *engine = req->engine;
-	struct intel_ringbuffer *ringbuf = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	struct i915_workarounds *w = &req->i915->workarounds;
 
 	if (w->count == 0)
@@ -1026,14 +1026,14 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
 	if (ret)
 		return ret;
 
-	intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(w->count));
+	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(w->count));
 	for (i = 0; i < w->count; i++) {
-		intel_ring_emit_reg(ringbuf, w->reg[i].addr);
-		intel_ring_emit(ringbuf, w->reg[i].value);
+		intel_ring_emit_reg(ring, w->reg[i].addr);
+		intel_ring_emit(ring, w->reg[i].value);
 	}
-	intel_ring_emit(ringbuf, MI_NOOP);
+	intel_ring_emit(ring, MI_NOOP);
 
-	intel_ring_advance(ringbuf);
+	intel_ring_advance(ring);
 
 	engine->gpu_caches_dirty = true;
 	ret = logical_ring_flush_all_caches(req);
@@ -1506,7 +1506,7 @@ static int gen9_init_render_ring(struct intel_engine_cs *engine)
 static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
 {
 	struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
 	int i, ret;
 
@@ -1533,7 +1533,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
 static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
 			      u64 offset, unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
 	int ret;
 
@@ -1590,8 +1590,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
 			   u32 invalidate_domains,
 			   u32 unused)
 {
-	struct intel_ringbuffer *ring = request->ringbuf;
-	struct intel_engine_cs *engine = ring->engine;
+	struct intel_ringbuffer *ring = request->ring;
 	uint32_t cmd;
 	int ret;
 
@@ -1610,7 +1609,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
 
 	if (invalidate_domains & I915_GEM_GPU_DOMAINS) {
 		cmd |= MI_INVALIDATE_TLB;
-		if (engine->id == VCS)
+		if (request->engine->id == VCS)
 			cmd |= MI_INVALIDATE_BSD;
 	}
 
@@ -1629,7 +1628,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
 				  u32 invalidate_domains,
 				  u32 flush_domains)
 {
-	struct intel_ringbuffer *ring = request->ringbuf;
+	struct intel_ringbuffer *ring = request->ring;
 	struct intel_engine_cs *engine = request->engine;
 	u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	bool vf_flush_wa = false;
@@ -1711,7 +1710,7 @@ static void bxt_a_seqno_barrier(struct intel_engine_cs *engine)
 
 static int gen8_emit_request(struct drm_i915_gem_request *request)
 {
-	struct intel_ringbuffer *ring = request->ringbuf;
+	struct intel_ringbuffer *ring = request->ring;
 	int ret;
 
 	ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
@@ -1734,7 +1733,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 
 static int gen8_emit_request_render(struct drm_i915_gem_request *request)
 {
-	struct intel_ringbuffer *ring = request->ringbuf;
+	struct intel_ringbuffer *ring = request->ring;
 	int ret;
 
 	ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index 8513bf06d4df..4b44bbcfd7cd 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -231,7 +231,7 @@ int intel_mocs_init_engine(struct intel_engine_cs *engine)
 static int emit_mocs_control_table(struct drm_i915_gem_request *req,
 				   const struct drm_i915_mocs_table *table)
 {
-	struct intel_ringbuffer *ringbuf = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	enum intel_engine_id engine = req->engine->id;
 	unsigned int index;
 	int ret;
@@ -243,11 +243,11 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
+	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
 
 	for (index = 0; index < table->size; index++) {
-		intel_ring_emit_reg(ringbuf, mocs_register(engine, index));
-		intel_ring_emit(ringbuf, table->table[index].control_value);
+		intel_ring_emit_reg(ring, mocs_register(engine, index));
+		intel_ring_emit(ring, table->table[index].control_value);
 	}
 
 	/*
@@ -259,12 +259,12 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
 	 * that value to all the used entries.
 	 */
 	for (; index < GEN9_NUM_MOCS_ENTRIES; index++) {
-		intel_ring_emit_reg(ringbuf, mocs_register(engine, index));
-		intel_ring_emit(ringbuf, table->table[0].control_value);
+		intel_ring_emit_reg(ring, mocs_register(engine, index));
+		intel_ring_emit(ring, table->table[0].control_value);
 	}
 
-	intel_ring_emit(ringbuf, MI_NOOP);
-	intel_ring_advance(ringbuf);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
@@ -291,7 +291,7 @@ static inline u32 l3cc_combine(const struct drm_i915_mocs_table *table,
 static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
 				const struct drm_i915_mocs_table *table)
 {
-	struct intel_ringbuffer *ringbuf = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	unsigned int i;
 	int ret;
 
@@ -302,18 +302,18 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
 	if (ret)
 		return ret;
 
-	intel_ring_emit(ringbuf,
+	intel_ring_emit(ring,
 			MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES / 2));
 
 	for (i = 0; i < table->size/2; i++) {
-		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
-		intel_ring_emit(ringbuf, l3cc_combine(table, 2*i, 2*i+1));
+		intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
+		intel_ring_emit(ring, l3cc_combine(table, 2*i, 2*i+1));
 	}
 
 	if (table->size & 0x01) {
 		/* Odd table size - 1 left over */
-		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
-		intel_ring_emit(ringbuf, l3cc_combine(table, 2*i, 0));
+		intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
+		intel_ring_emit(ring, l3cc_combine(table, 2*i, 0));
 		i++;
 	}
 
@@ -323,12 +323,12 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
 	 * they are reserved by the hardware.
 	 */
 	for (; i < GEN9_NUM_MOCS_ENTRIES / 2; i++) {
-		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
-		intel_ring_emit(ringbuf, l3cc_combine(table, 0, 0));
+		intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
+		intel_ring_emit(ring, l3cc_combine(table, 0, 0));
 	}
 
-	intel_ring_emit(ringbuf, MI_NOOP);
-	intel_ring_advance(ringbuf);
+	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index be79c4497af5..f9c062fea39f 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -253,7 +253,7 @@ static int intel_overlay_on(struct intel_overlay *overlay)
 
 	overlay->active = true;
 
-	ring = req->ringbuf;
+	ring = req->ring;
 	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_ON);
 	intel_ring_emit(ring, overlay->flip_addr | OFC_UPDATE);
 	intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
@@ -295,7 +295,7 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
 		return ret;
 	}
 
-	ring = req->ringbuf;
+	ring = req->ring;
 	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
 	intel_ring_emit(ring, flip_addr);
 	intel_ring_advance(ring);
@@ -362,7 +362,7 @@ static int intel_overlay_off(struct intel_overlay *overlay)
 		return ret;
 	}
 
-	ring = req->ringbuf;
+	ring = req->ring;
 	/* wait for overlay to go idle */
 	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
 	intel_ring_emit(ring, flip_addr);
@@ -438,7 +438,7 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 			return ret;
 		}
 
-		ring = req->ringbuf;
+		ring = req->ring;
 		intel_ring_emit(ring,
 				MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
 		intel_ring_emit(ring, MI_NOOP);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index ace455b2b2d6..0f13e9900bd6 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -70,7 +70,7 @@ gen2_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32	invalidate_domains,
 		       u32	flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	u32 cmd;
 	int ret;
 
@@ -97,7 +97,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32	invalidate_domains,
 		       u32	flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	u32 cmd;
 	int ret;
 
@@ -187,7 +187,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
 static int
 intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	u32 scratch_addr =
 	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	int ret;
@@ -224,7 +224,7 @@ static int
 gen6_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32 invalidate_domains, u32 flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	u32 scratch_addr =
 	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	u32 flags = 0;
@@ -277,7 +277,7 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
 static int
 gen7_render_ring_cs_stall_wa(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 4);
@@ -299,7 +299,7 @@ static int
 gen7_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32 invalidate_domains, u32 flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	u32 scratch_addr =
 	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	u32 flags = 0;
@@ -364,7 +364,7 @@ static int
 gen8_emit_pipe_control(struct drm_i915_gem_request *req,
 		       u32 flags, u32 scratch_addr)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 6);
@@ -680,7 +680,7 @@ err:
 
 static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	struct i915_workarounds *w = &req->i915->workarounds;
 	int ret, i;
 
@@ -1242,7 +1242,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
 			   unsigned int num_dwords)
 {
 #define MBOX_UPDATE_DWORDS 8
-	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
+	struct intel_ringbuffer *signaller = signaller_req->ring;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *waiter;
 	enum intel_engine_id id;
@@ -1282,7 +1282,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 			   unsigned int num_dwords)
 {
 #define MBOX_UPDATE_DWORDS 6
-	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
+	struct intel_ringbuffer *signaller = signaller_req->ring;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *waiter;
 	enum intel_engine_id id;
@@ -1319,7 +1319,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 		       unsigned int num_dwords)
 {
-	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
+	struct intel_ringbuffer *signaller = signaller_req->ring;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *useless;
 	enum intel_engine_id id;
@@ -1363,7 +1363,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 static int
 gen6_add_request(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	if (req->engine->semaphore.signal)
@@ -1387,7 +1387,7 @@ static int
 gen8_render_add_request(struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	if (engine->semaphore.signal)
@@ -1432,7 +1432,7 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
 	       struct intel_engine_cs *signaller,
 	       u32 seqno)
 {
-	struct intel_ringbuffer *waiter = waiter_req->ringbuf;
+	struct intel_ringbuffer *waiter = waiter_req->ring;
 	struct drm_i915_private *dev_priv = waiter_req->i915;
 	struct i915_hw_ppgtt *ppgtt;
 	int ret;
@@ -1469,7 +1469,7 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
 	       struct intel_engine_cs *signaller,
 	       u32 seqno)
 {
-	struct intel_ringbuffer *waiter = waiter_req->ringbuf;
+	struct intel_ringbuffer *waiter = waiter_req->ring;
 	u32 dw1 = MI_SEMAPHORE_MBOX |
 		  MI_SEMAPHORE_COMPARE |
 		  MI_SEMAPHORE_REGISTER;
@@ -1603,7 +1603,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
 	       u32     invalidate_domains,
 	       u32     flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -1619,7 +1619,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
 static int
 i9xx_add_request(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 4);
@@ -1697,7 +1697,7 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			 u64 offset, u32 length,
 			 unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -1724,7 +1724,7 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			 u64 offset, u32 len,
 			 unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	u32 cs_offset = req->engine->scratch.gtt_offset;
 	int ret;
 
@@ -1786,7 +1786,7 @@ i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			 u64 offset, u32 len,
 			 unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -2221,7 +2221,7 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 	 */
 	request->reserved_space += LEGACY_REQUEST_SIZE;
 
-	request->ringbuf = request->engine->buffer;
+	request->ring = request->engine->buffer;
 
 	ret = intel_ring_begin(request, 0);
 	if (ret)
@@ -2233,12 +2233,12 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 
 static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
 {
-	struct intel_ringbuffer *ringbuf = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	struct intel_engine_cs *engine = req->engine;
 	struct drm_i915_gem_request *target;
 
-	intel_ring_update_space(ringbuf);
-	if (ringbuf->space >= bytes)
+	intel_ring_update_space(ring);
+	if (ring->space >= bytes)
 		return 0;
 
 	/*
@@ -2260,12 +2260,12 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
 		 * from multiple ringbuffers. Here, we must ignore any that
 		 * aren't from the ringbuffer we're considering.
 		 */
-		if (target->ringbuf != ringbuf)
+		if (target->ring != ring)
 			continue;
 
 		/* Would completion of this request free enough space? */
-		space = __intel_ring_space(target->postfix, ringbuf->tail,
-					   ringbuf->size);
+		space = __intel_ring_space(target->postfix, ring->tail,
+					   ring->size);
 		if (space >= bytes)
 			break;
 	}
@@ -2278,9 +2278,9 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
 
 int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
 {
-	struct intel_ringbuffer *ringbuf = req->ringbuf;
-	int remain_actual = ringbuf->size - ringbuf->tail;
-	int remain_usable = ringbuf->effective_size - ringbuf->tail;
+	struct intel_ringbuffer *ring = req->ring;
+	int remain_actual = ring->size - ring->tail;
+	int remain_usable = ring->effective_size - ring->tail;
 	int bytes = num_dwords * sizeof(u32);
 	int total_bytes, wait_bytes;
 	bool need_wrap = false;
@@ -2307,35 +2307,35 @@ int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
 		wait_bytes = total_bytes;
 	}
 
-	if (wait_bytes > ringbuf->space) {
+	if (wait_bytes > ring->space) {
 		int ret = wait_for_space(req, wait_bytes);
 		if (unlikely(ret))
 			return ret;
 
-		intel_ring_update_space(ringbuf);
-		if (unlikely(ringbuf->space < wait_bytes))
+		intel_ring_update_space(ring);
+		if (unlikely(ring->space < wait_bytes))
 			return -EAGAIN;
 	}
 
 	if (unlikely(need_wrap)) {
-		GEM_BUG_ON(remain_actual > ringbuf->space);
-		GEM_BUG_ON(ringbuf->tail + remain_actual > ringbuf->size);
+		GEM_BUG_ON(remain_actual > ring->space);
+		GEM_BUG_ON(ring->tail + remain_actual > ring->size);
 
 		/* Fill the tail with MI_NOOP */
-		memset(ringbuf->vaddr + ringbuf->tail, 0, remain_actual);
-		ringbuf->tail = 0;
-		ringbuf->space -= remain_actual;
+		memset(ring->vaddr + ring->tail, 0, remain_actual);
+		ring->tail = 0;
+		ring->space -= remain_actual;
 	}
 
-	ringbuf->space -= bytes;
-	GEM_BUG_ON(ringbuf->space < 0);
+	ring->space -= bytes;
+	GEM_BUG_ON(ring->space < 0);
 	return 0;
 }
 
 /* Align the ring tail to a cacheline boundary */
 int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int num_dwords =
 	       	(ring->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
 	int ret;
@@ -2429,7 +2429,7 @@ static void gen6_bsd_ring_write_tail(struct intel_engine_cs *engine,
 static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
 			       u32 invalidate, u32 flush)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	uint32_t cmd;
 	int ret;
 
@@ -2475,7 +2475,7 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			      u64 offset, u32 len,
 			      unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	bool ppgtt = USES_PPGTT(req->i915) &&
 			!(dispatch_flags & I915_DISPATCH_SECURE);
 	int ret;
@@ -2501,7 +2501,7 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			     u64 offset, u32 len,
 			     unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -2526,7 +2526,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			      u64 offset, u32 len,
 			      unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -2549,7 +2549,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 static int gen6_ring_flush(struct drm_i915_gem_request *req,
 			   u32 invalidate, u32 flush)
 {
-	struct intel_ringbuffer *ring = req->ringbuf;
+	struct intel_ringbuffer *ring = req->ring;
 	uint32_t cmd;
 	int ret;
 
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 28/62] drm/i915: Rename backpointer from intel_ringbuffer to intel_engine_cs
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (26 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 27/62] drm/i915: Rename request->ringbuf to request->ring Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-06 13:45   ` Tvrtko Ursulin
  2016-06-03 16:36 ` [PATCH 29/62] drm/i915: Rename intel_context[engine].ringbuf Chris Wilson
                   ` (35 subsequent siblings)
  63 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Having ringbuf->ring point to an engine is confusing, so rename it once
again to ring->engine.
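
A hypothetical sketch of the confusion (illustrative only; not one of
the hunks below): the backpointer from the ring buffer to its engine
was itself named "ring", so the chained access read as if a ring
contained a ring:

	/* before: ringbuf->ring is, surprisingly, the engine */
	struct intel_ringbuffer {
		struct intel_engine_cs *ring;
		/* ... */
	};

	/* after: the backpointer names what it points at */
	struct intel_ringbuffer {
		struct intel_engine_cs *engine;
		/* ... */
	};

With the rename, ring->engine reads naturally at every call site.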

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_ringbuffer.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 0f13e9900bd6..ab498ecce1ca 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2087,8 +2087,8 @@ static void intel_ring_context_unpin(struct i915_gem_context *ctx,
 	i915_gem_context_put(ctx);
 }
 
-static int intel_init_ring_buffer(struct drm_device *dev,
-				  struct intel_engine_cs *engine)
+static int intel_init_engine(struct drm_device *dev,
+			     struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct intel_ringbuffer *ringbuf;
@@ -2707,7 +2707,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	engine->init_hw = init_render_ring;
 	engine->cleanup = render_ring_cleanup;
 
-	ret = intel_init_ring_buffer(dev, engine);
+	ret = intel_init_engine(dev, engine);
 	if (ret)
 		return ret;
 
@@ -2794,7 +2794,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 	}
 	engine->init_hw = init_ring_common;
 
-	return intel_init_ring_buffer(dev, engine);
+	return intel_init_engine(dev, engine);
 }
 
 /**
@@ -2828,7 +2828,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
 	}
 	engine->init_hw = init_ring_common;
 
-	return intel_init_ring_buffer(dev, engine);
+	return intel_init_engine(dev, engine);
 }
 
 int intel_init_blt_ring_buffer(struct drm_device *dev)
@@ -2886,7 +2886,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
 	}
 	engine->init_hw = init_ring_common;
 
-	return intel_init_ring_buffer(dev, engine);
+	return intel_init_engine(dev, engine);
 }
 
 int intel_init_vebox_ring_buffer(struct drm_device *dev)
@@ -2938,7 +2938,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 	}
 	engine->init_hw = init_ring_common;
 
-	return intel_init_ring_buffer(dev, engine);
+	return intel_init_engine(dev, engine);
 }
 
 int
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 29/62] drm/i915: Rename intel_context[engine].ringbuf
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (27 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 28/62] drm/i915: Rename backpointer from intel_ringbuffer to intel_engine_cs Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 30/62] drm/i915: Rename struct intel_ringbuffer to struct intel_ring Chris Wilson
                   ` (34 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Perform s/ringbuf/ring/ on the context struct for consistency with the
ring/engine split.
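
A minimal sketch of the result (abridged from the i915_drv.h hunk
below; the struct type itself is renamed in a later patch):

	struct intel_context {
		struct drm_i915_gem_object *state;
		struct intel_ringbuffer *ring;	/* was: ringbuf */
		/* ... */
	};

so every ce->ringbuf access becomes ce->ring.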

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c        |  8 ++++----
 drivers/gpu/drm/i915/i915_drv.h            |  2 +-
 drivers/gpu/drm/i915/i915_gem_context.c    |  4 ++--
 drivers/gpu/drm/i915/i915_guc_submission.c |  2 +-
 drivers/gpu/drm/i915/intel_lrc.c           | 33 ++++++++++++++----------------
 5 files changed, 23 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 34e41ae2943e..8d3bc2bd532e 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -424,8 +424,8 @@ static int per_file_ctx_stats(int id, void *ptr, void *data)
 	for (n = 0; n < ARRAY_SIZE(ctx->engine); n++) {
 		if (ctx->engine[n].state)
 			per_file_stats(0, ctx->engine[n].state, data);
-		if (ctx->engine[n].ringbuf)
-			per_file_stats(0, ctx->engine[n].ringbuf->obj, data);
+		if (ctx->engine[n].ring)
+			per_file_stats(0, ctx->engine[n].ring->obj, data);
 	}
 
 	return 0;
@@ -2062,8 +2062,8 @@ static int i915_context_status(struct seq_file *m, void *unused)
 			seq_putc(m, ce->initialised ? 'I' : 'i');
 			if (ce->state)
 				describe_obj(m, ce->state);
-			if (ce->ringbuf)
-				describe_ctx_ringbuf(m, ce->ringbuf);
+			if (ce->ring)
+				describe_ctx_ringbuf(m, ce->ring);
 			seq_putc(m, '\n');
 		}
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index fcac90104ba9..de54adbf5768 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -881,7 +881,7 @@ struct i915_gem_context {
 
 	struct intel_context {
 		struct drm_i915_gem_object *state;
-		struct intel_ringbuffer *ringbuf;
+		struct intel_ringbuffer *ring;
 		struct i915_vma *lrc_vma;
 		uint32_t *lrc_reg_state;
 		u64 lrc_desc;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index a7911f39f416..7e45e7cdb538 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -173,8 +173,8 @@ void i915_gem_context_free(struct kref *ctx_ref)
 			continue;
 
 		WARN_ON(ce->pin_count);
-		if (ce->ringbuf)
-			intel_ringbuffer_free(ce->ringbuf);
+		if (ce->ring)
+			intel_ringbuffer_free(ce->ring);
 
 		i915_gem_object_put(ce->state);
 	}
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 337b8f60989c..8aa3cf8cac45 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -395,7 +395,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
 		lrc->context_id = (client->ctx_index << GUC_ELC_CTXID_OFFSET) |
 				(engine->guc_id << GUC_ELC_ENGINE_OFFSET);
 
-		obj = ce->ringbuf->obj;
+		obj = ce->ring->obj;
 		gfx_addr = i915_gem_obj_ggtt_offset(obj);
 
 		lrc->ring_begin = gfx_addr;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 229545fc5b4a..14e3437d9074 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -459,11 +459,8 @@ static void execlists_context_unqueue(struct intel_engine_cs *engine)
 		 * resubmit the request. See gen8_emit_request() for where we
 		 * prepare the padding after the end of the request.
 		 */
-		struct intel_ringbuffer *ringbuf;
-
-		ringbuf = req0->ctx->engine[engine->id].ringbuf;
 		req0->tail += 8;
-		req0->tail &= ringbuf->size - 1;
+		req0->tail &= req0->ring->size - 1;
 	}
 
 	execlists_submit_requests(req0, req1);
@@ -692,7 +689,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
 			return ret;
 	}
 
-	request->ring = ce->ringbuf;
+	request->ring = ce->ring;
 
 	if (i915.enable_guc_submission) {
 		/*
@@ -957,14 +954,14 @@ static int intel_lr_context_pin(struct i915_gem_context *ctx,
 
 	lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
 
-	ret = intel_pin_and_map_ringbuffer_obj(dev_priv, ce->ringbuf);
+	ret = intel_pin_and_map_ringbuffer_obj(dev_priv, ce->ring);
 	if (ret)
 		goto unpin_map;
 
 	ce->lrc_vma = i915_gem_obj_to_ggtt(ce->state);
 	intel_lr_context_descriptor_update(ctx, engine);
 
-	lrc_reg_state[CTX_RING_BUFFER_START+1] = ce->ringbuf->vma->node.start;
+	lrc_reg_state[CTX_RING_BUFFER_START+1] = ce->ring->vma->node.start;
 	ce->lrc_reg_state = lrc_reg_state;
 	ce->state->dirty = true;
 
@@ -995,7 +992,7 @@ void intel_lr_context_unpin(struct i915_gem_context *ctx,
 	if (--ce->pin_count)
 		return;
 
-	intel_unpin_ringbuffer_obj(ce->ringbuf);
+	intel_unpin_ringbuffer_obj(ce->ring);
 
 	i915_gem_object_unpin_map(ce->state);
 	i915_gem_object_ggtt_unpin(ce->state);
@@ -2421,7 +2418,7 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
 	struct drm_i915_gem_object *ctx_obj;
 	struct intel_context *ce = &ctx->engine[engine->id];
 	uint32_t context_size;
-	struct intel_ringbuffer *ringbuf;
+	struct intel_ringbuffer *ring;
 	int ret;
 
 	WARN_ON(ce->state);
@@ -2437,29 +2434,29 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
 		return PTR_ERR(ctx_obj);
 	}
 
-	ringbuf = intel_engine_create_ringbuffer(engine, 4 * PAGE_SIZE);
-	if (IS_ERR(ringbuf)) {
-		ret = PTR_ERR(ringbuf);
+	ring = intel_engine_create_ringbuffer(engine, 4 * PAGE_SIZE);
+	if (IS_ERR(ring)) {
+		ret = PTR_ERR(ring);
 		goto error_deref_obj;
 	}
 
-	ret = populate_lr_context(ctx, ctx_obj, engine, ringbuf);
+	ret = populate_lr_context(ctx, ctx_obj, engine, ring);
 	if (ret) {
 		DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
 		goto error_ringbuf;
 	}
 
-	ce->ringbuf = ringbuf;
+	ce->ring = ring;
 	ce->state = ctx_obj;
 	ce->initialised = engine->init_context == NULL;
 
 	return 0;
 
 error_ringbuf:
-	intel_ringbuffer_free(ringbuf);
+	intel_ringbuffer_free(ring);
 error_deref_obj:
 	i915_gem_object_put(ctx_obj);
-	ce->ringbuf = NULL;
+	ce->ring = NULL;
 	ce->state = NULL;
 	return ret;
 }
@@ -2490,7 +2487,7 @@ void intel_lr_context_reset(struct drm_i915_private *dev_priv,
 
 		i915_gem_object_unpin_map(ctx_obj);
 
-		ce->ringbuf->head = 0;
-		ce->ringbuf->tail = 0;
+		ce->ring->head = 0;
+		ce->ring->tail = 0;
 	}
 }
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 30/62] drm/i915: Rename struct intel_ringbuffer to struct intel_ring
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (28 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 29/62] drm/i915: Rename intel_context[engine].ringbuf Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 31/62] drm/i915: Rename residual ringbuf parameters Chris Wilson
                   ` (33 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

The state stored in this struct is not only the information about the
buffer object, but also the ring used to communicate with the hardware.
Using "buffer" here is overly specific and, for me at least, conflates
it with the notion of buffer objects themselves.
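
For illustration, an abridged view of the renamed struct (fields
gathered from the intel_ringbuffer.h hunk below; the full layout is
elided):

	struct intel_ring {
		struct drm_i915_gem_object *obj;	/* backing storage */
		void *vaddr;				/* CPU mapping */
		struct i915_vma *vma;

		u32 head;	/* hw communication state: the ring itself, */
		u32 tail;	/* not a property of the backing object */
		int space;
		int size;
		int effective_size;
	};

The head/tail/space bookkeeping is the ring protocol proper, which is
why "intel_ring" describes the contents better than "intel_ringbuffer".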

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c        |  11 ++-
 drivers/gpu/drm/i915/i915_drv.h            |   4 +-
 drivers/gpu/drm/i915/i915_gem.c            |  24 +++---
 drivers/gpu/drm/i915/i915_gem_context.c    |   6 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |   6 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c        |   6 +-
 drivers/gpu/drm/i915/i915_gem_request.c    |   6 +-
 drivers/gpu/drm/i915/i915_gem_request.h    |   2 +-
 drivers/gpu/drm/i915/i915_gpu_error.c      |   8 +-
 drivers/gpu/drm/i915/i915_irq.c            |  14 ++--
 drivers/gpu/drm/i915/intel_display.c       |  10 +--
 drivers/gpu/drm/i915/intel_lrc.c           |  34 ++++----
 drivers/gpu/drm/i915/intel_mocs.c          |   4 +-
 drivers/gpu/drm/i915/intel_overlay.c       |   8 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c    | 127 ++++++++++++++---------------
 drivers/gpu/drm/i915/intel_ringbuffer.h    |  51 ++++++------
 16 files changed, 159 insertions(+), 162 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 8d3bc2bd532e..48c8f74e6256 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1415,7 +1415,7 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
 	intel_runtime_pm_get(dev_priv);
 
 	for_each_engine_id(engine, dev_priv, id) {
-		acthd[id] = intel_ring_get_active_head(engine);
+		acthd[id] = intel_engine_get_active_head(engine);
 		seqno[id] = intel_engine_get_seqno(engine);
 	}
 
@@ -2013,12 +2013,11 @@ static int i915_gem_framebuffer_info(struct seq_file *m, void *data)
 	return 0;
 }
 
-static void describe_ctx_ringbuf(struct seq_file *m,
-				 struct intel_ringbuffer *ringbuf)
+static void describe_ctx_ring(struct seq_file *m, struct intel_ring *ring)
 {
 	seq_printf(m, " (ringbuffer, space: %d, head: %u, tail: %u, last head: %d)",
-		   ringbuf->space, ringbuf->head, ringbuf->tail,
-		   ringbuf->last_retired_head);
+		   ring->space, ring->head, ring->tail,
+		   ring->last_retired_head);
 }
 
 static int i915_context_status(struct seq_file *m, void *unused)
@@ -2063,7 +2062,7 @@ static int i915_context_status(struct seq_file *m, void *unused)
 			if (ce->state)
 				describe_obj(m, ce->state);
 			if (ce->ring)
-				describe_ctx_ringbuf(m, ce->ring);
+				describe_ctx_ring(m, ce->ring);
 			seq_putc(m, '\n');
 		}
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index de54adbf5768..fe39cd2584f3 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -507,7 +507,7 @@ struct drm_i915_error_state {
 		bool waiting;
 		int num_waiters;
 		int hangcheck_score;
-		enum intel_ring_hangcheck_action hangcheck_action;
+		enum intel_engine_hangcheck_action hangcheck_action;
 		int num_requests;
 
 		/* our own tracking of ring head and tail */
@@ -881,7 +881,7 @@ struct i915_gem_context {
 
 	struct intel_context {
 		struct drm_i915_gem_object *state;
-		struct intel_ringbuffer *ring;
+		struct intel_ring *ring;
 		struct i915_vma *lrc_vma;
 		uint32_t *lrc_reg_state;
 		u64 lrc_desc;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 8edd79ad08b4..034d81c54d67 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2230,7 +2230,7 @@ static void i915_gem_reset_engine_status(struct intel_engine_cs *engine)
 
 static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
 {
-	struct intel_ringbuffer *buffer;
+	struct intel_ring *ring;
 
 	while (!list_empty(&engine->active_list)) {
 		struct drm_i915_gem_object *obj;
@@ -2279,12 +2279,12 @@ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
 	 * upon reset is less than when we start. Do one more pass over
 	 * all the ringbuffers to reset last_retired_head.
 	 */
-	list_for_each_entry(buffer, &engine->buffers, link) {
-		buffer->last_retired_head = buffer->tail;
-		intel_ring_update_space(buffer);
+	list_for_each_entry(ring, &engine->buffers, link) {
+		ring->last_retired_head = ring->tail;
+		intel_ring_update_space(ring);
 	}
 
-	intel_ring_init_seqno(engine, engine->last_submitted_seqno);
+	intel_engine_init_seqno(engine, engine->last_submitted_seqno);
 }
 
 void i915_gem_reset(struct drm_device *dev)
@@ -2577,7 +2577,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
 
 		i915_gem_object_retire_request(obj, from_req);
 	} else {
-		int idx = intel_ring_sync_index(from, to);
+		int idx = intel_engine_sync_index(from, to);
 		u32 seqno = i915_gem_request_get_seqno(from_req);
 
 		WARN_ON(!to_req);
@@ -4172,13 +4172,13 @@ int i915_gem_init_engines(struct drm_device *dev)
 	return 0;
 
 cleanup_vebox_ring:
-	intel_cleanup_engine(&dev_priv->engine[VECS]);
+	intel_engine_cleanup(&dev_priv->engine[VECS]);
 cleanup_blt_ring:
-	intel_cleanup_engine(&dev_priv->engine[BCS]);
+	intel_engine_cleanup(&dev_priv->engine[BCS]);
 cleanup_bsd_ring:
-	intel_cleanup_engine(&dev_priv->engine[VCS]);
+	intel_engine_cleanup(&dev_priv->engine[VCS]);
 cleanup_render_ring:
-	intel_cleanup_engine(&dev_priv->engine[RCS]);
+	intel_engine_cleanup(&dev_priv->engine[RCS]);
 
 	return ret;
 }
@@ -4286,8 +4286,8 @@ int i915_gem_init(struct drm_device *dev)
 	if (!i915.enable_execlists) {
 		dev_priv->gt.execbuf_submit = i915_gem_ringbuffer_submission;
 		dev_priv->gt.init_engines = i915_gem_init_engines;
-		dev_priv->gt.cleanup_engine = intel_cleanup_engine;
-		dev_priv->gt.stop_engine = intel_stop_engine;
+		dev_priv->gt.cleanup_engine = intel_engine_cleanup;
+		dev_priv->gt.stop_engine = intel_engine_stop;
 	} else {
 		dev_priv->gt.execbuf_submit = intel_execlists_submission;
 		dev_priv->gt.init_engines = intel_logical_rings_init;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 7e45e7cdb538..13b934ab4a8a 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -174,7 +174,7 @@ void i915_gem_context_free(struct kref *ctx_ref)
 
 		WARN_ON(ce->pin_count);
 		if (ce->ring)
-			intel_ringbuffer_free(ce->ring);
+			intel_ring_free(ce->ring);
 
 		i915_gem_object_put(ce->state);
 	}
@@ -514,7 +514,7 @@ static inline int
 mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 {
 	struct drm_i915_private *dev_priv = req->i915;
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	u32 flags = hw_flags | MI_MM_SPACE_GTT;
 	const int num_rings =
 		/* Use an extended w/a on ivb+ if signalling from other rings */
@@ -614,7 +614,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 static int remap_l3(struct drm_i915_gem_request *req, int slice)
 {
 	u32 *remap_info = req->i915->l3_parity.remap_info[slice];
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int i, ret;
 
 	if (!remap_info)
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 246bd70c0c9f..186e466f932f 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -971,7 +971,7 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
 	/* Unconditionally invalidate gpu caches and ensure that we do flush
 	 * any residual writes from the previous batch.
 	 */
-	return intel_ring_invalidate_all_caches(req);
+	return intel_engine_invalidate_all_caches(req);
 }
 
 static bool
@@ -1140,7 +1140,7 @@ i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
 static int
 i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret, i;
 
 	if (!IS_GEN7(req->i915) || req->engine->id != RCS) {
@@ -1270,7 +1270,7 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
 
 	if (params->engine->id == RCS &&
 	    instp_mode != dev_priv->relative_constants_mode) {
-		struct intel_ringbuffer *ring = params->request->ring;
+		struct intel_ring *ring = params->request->ring;
 
 		ret = intel_ring_begin(params->request, 4);
 		if (ret)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index b0a644cede20..6a6e69a3894f 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -669,7 +669,7 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
 			  unsigned entry,
 			  dma_addr_t addr)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	BUG_ON(entry >= 4);
@@ -1660,7 +1660,7 @@ static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
 static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
 			 struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	/* NB: TLBs must be flushed and invalidated before a switch */
@@ -1699,7 +1699,7 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
 static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
 			  struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	/* NB: TLBs must be flushed and invalidated before a switch */
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index c6a7a7984f1f..58d84b153810 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -155,7 +155,7 @@ static int i915_gem_init_seqno(struct drm_i915_private *dev_priv, u32 seqno)
 
 	/* Finally reset hw state */
 	for_each_engine(engine, dev_priv)
-		intel_ring_init_seqno(engine, seqno);
+		intel_engine_init_seqno(engine, seqno);
 
 	return 0;
 }
@@ -415,7 +415,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 			bool flush_caches)
 {
 	struct intel_engine_cs *engine;
-	struct intel_ringbuffer *ring;
+	struct intel_ring *ring;
 	u32 request_start;
 	u32 reserved_tail;
 	int ret;
@@ -446,7 +446,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 		if (i915.enable_execlists)
 			ret = logical_ring_flush_all_caches(request);
 		else
-			ret = intel_ring_flush_all_caches(request);
+			ret = intel_engine_flush_all_caches(request);
 		/* Not allowed to fail! */
 		WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 913565fbb0e3..500ae6066864 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -59,7 +59,7 @@ struct drm_i915_gem_request {
 	 */
 	struct i915_gem_context *ctx;
 	struct intel_engine_cs *engine;
-	struct intel_ringbuffer *ring;
+	struct intel_ring *ring;
 	struct intel_signal_node signaling;
 
 	unsigned reset_counter;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index b934986bb117..934663166b28 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -221,7 +221,7 @@ static void print_error_buffers(struct drm_i915_error_state_buf *m,
 	}
 }
 
-static const char *hangcheck_action_to_str(enum intel_ring_hangcheck_action a)
+static const char *hangcheck_action_to_str(enum intel_engine_hangcheck_action a)
 {
 	switch (a) {
 	case HANGCHECK_IDLE:
@@ -881,7 +881,7 @@ static void gen8_record_semaphore_state(struct drm_i915_private *dev_priv,
 		signal_offset = (GEN8_SIGNAL_OFFSET(engine, id) & (PAGE_SIZE - 1))
 				/ 4;
 		tmp = error->semaphore_obj->pages[0];
-		idx = intel_ring_sync_index(engine, to);
+		idx = intel_engine_sync_index(engine, to);
 
 		ering->semaphore_mboxes[idx] = tmp[signal_offset];
 		ering->semaphore_seqno[idx] = engine->semaphore.sync_seqno[idx];
@@ -981,7 +981,7 @@ static void i915_record_ring_state(struct drm_i915_private *dev_priv,
 
 	ering->waiting = intel_engine_has_waiter(engine);
 	ering->instpm = I915_READ(RING_INSTPM(engine->mmio_base));
-	ering->acthd = intel_ring_get_active_head(engine);
+	ering->acthd = intel_engine_get_active_head(engine);
 	ering->seqno = intel_engine_get_seqno(engine);
 	ering->last_seqno = engine->last_submitted_seqno;
 	ering->start = I915_READ_START(engine);
@@ -1089,7 +1089,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 		request = i915_gem_find_active_request(engine);
 		if (request) {
 			struct i915_address_space *vm;
-			struct intel_ringbuffer *ring;
+			struct intel_ring *ring;
 
 			vm = request->ctx && request->ctx->ppgtt ?
 				&request->ctx->ppgtt->base :
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 42149153510e..1ffc997b19af 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2989,7 +2989,7 @@ static bool subunits_stuck(struct intel_engine_cs *engine)
 	return stuck;
 }
 
-static enum intel_ring_hangcheck_action
+static enum intel_engine_hangcheck_action
 head_stuck(struct intel_engine_cs *engine, u64 acthd)
 {
 	if (acthd != engine->hangcheck.acthd) {
@@ -3007,11 +3007,11 @@ head_stuck(struct intel_engine_cs *engine, u64 acthd)
 	return HANGCHECK_HUNG;
 }
 
-static enum intel_ring_hangcheck_action
-ring_stuck(struct intel_engine_cs *engine, u64 acthd)
+static enum intel_engine_hangcheck_action
+engine_stuck(struct intel_engine_cs *engine, u64 acthd)
 {
 	struct drm_i915_private *dev_priv = engine->i915;
-	enum intel_ring_hangcheck_action ha;
+	enum intel_engine_hangcheck_action ha;
 	u32 tmp;
 
 	ha = head_stuck(engine, acthd);
@@ -3120,7 +3120,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 		if (engine->irq_seqno_barrier)
 			engine->irq_seqno_barrier(engine);
 
-		acthd = intel_ring_get_active_head(engine);
+		acthd = intel_engine_get_active_head(engine);
 		seqno = intel_engine_get_seqno(engine);
 
 		/* Reset stuck interrupts between batch advances */
@@ -3150,8 +3150,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 				 * being repeatedly kicked and so responsible
 				 * for stalling the machine.
 				 */
-				engine->hangcheck.action = ring_stuck(engine,
-								      acthd);
+				engine->hangcheck.action =
+					engine_stuck(engine, acthd);
 
 				switch (engine->hangcheck.action) {
 				case HANGCHECK_IDLE:
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 2dafbfbc8134..63cfd318bcd3 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11174,7 +11174,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	u32 flip_mask;
 	int ret;
@@ -11208,7 +11208,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	u32 flip_mask;
 	int ret;
@@ -11239,7 +11239,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	uint32_t pf, pipesrc;
@@ -11277,7 +11277,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	uint32_t pf, pipesrc;
@@ -11312,7 +11312,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
 				 struct drm_i915_gem_request *req,
 				 uint32_t flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	uint32_t plane_bit = 0;
 	int len, ret;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 14e3437d9074..fd093efffe85 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -745,7 +745,7 @@ err_unpin:
 static int
 intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 {
-	struct intel_ringbuffer *ring = request->ring;
+	struct intel_ring *ring = request->ring;
 	struct intel_engine_cs *engine = request->engine;
 
 	intel_ring_advance(ring);
@@ -802,7 +802,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
 	struct drm_device       *dev = params->dev;
 	struct intel_engine_cs *engine = params->engine;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_ringbuffer *ring = params->request->ring;
+	struct intel_ring *ring = params->request->ring;
 	u64 exec_start;
 	int instp_mode;
 	u32 instp_mask;
@@ -954,7 +954,7 @@ static int intel_lr_context_pin(struct i915_gem_context *ctx,
 
 	lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
 
-	ret = intel_pin_and_map_ringbuffer_obj(dev_priv, ce->ring);
+	ret = intel_pin_and_map_ring(dev_priv, ce->ring);
 	if (ret)
 		goto unpin_map;
 
@@ -992,7 +992,7 @@ void intel_lr_context_unpin(struct i915_gem_context *ctx,
 	if (--ce->pin_count)
 		return;
 
-	intel_unpin_ringbuffer_obj(ce->ring);
+	intel_unpin_ring(ce->ring);
 
 	i915_gem_object_unpin_map(ce->state);
 	i915_gem_object_ggtt_unpin(ce->state);
@@ -1008,7 +1008,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
 {
 	int ret, i;
 	struct intel_engine_cs *engine = req->engine;
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	struct i915_workarounds *w = &req->i915->workarounds;
 
 	if (w->count == 0)
@@ -1503,7 +1503,7 @@ static int gen9_init_render_ring(struct intel_engine_cs *engine)
 static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
 {
 	struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
 	int i, ret;
 
@@ -1530,7 +1530,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
 static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
 			      u64 offset, unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
 	int ret;
 
@@ -1587,8 +1587,8 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
 			   u32 invalidate_domains,
 			   u32 unused)
 {
-	struct intel_ringbuffer *ring = request->ring;
-	uint32_t cmd;
+	struct intel_ring *ring = request->ring;
+	u32 cmd;
 	int ret;
 
 	ret = intel_ring_begin(request, 4);
@@ -1625,7 +1625,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
 				  u32 invalidate_domains,
 				  u32 flush_domains)
 {
-	struct intel_ringbuffer *ring = request->ring;
+	struct intel_ring *ring = request->ring;
 	struct intel_engine_cs *engine = request->engine;
 	u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	bool vf_flush_wa = false;
@@ -1707,7 +1707,7 @@ static void bxt_a_seqno_barrier(struct intel_engine_cs *engine)
 
 static int gen8_emit_request(struct drm_i915_gem_request *request)
 {
-	struct intel_ringbuffer *ring = request->ring;
+	struct intel_ring *ring = request->ring;
 	int ret;
 
 	ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
@@ -1730,7 +1730,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 
 static int gen8_emit_request_render(struct drm_i915_gem_request *request)
 {
-	struct intel_ringbuffer *ring = request->ring;
+	struct intel_ring *ring = request->ring;
 	int ret;
 
 	ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
@@ -2224,7 +2224,7 @@ static int
 populate_lr_context(struct i915_gem_context *ctx,
 		    struct drm_i915_gem_object *ctx_obj,
 		    struct intel_engine_cs *engine,
-		    struct intel_ringbuffer *ringbuf)
+		    struct intel_ring *ring)
 {
 	struct drm_i915_private *dev_priv = ctx->i915;
 	struct i915_hw_ppgtt *ppgtt = ctx->ppgtt;
@@ -2277,7 +2277,7 @@ populate_lr_context(struct i915_gem_context *ctx,
 		       RING_START(engine->mmio_base), 0);
 	ASSIGN_CTX_REG(reg_state, CTX_RING_BUFFER_CONTROL,
 		       RING_CTL(engine->mmio_base),
-		       ((ringbuf->size - PAGE_SIZE) & RING_NR_PAGES) | RING_VALID);
+		       ((ring->size - PAGE_SIZE) & RING_NR_PAGES) | RING_VALID);
 	ASSIGN_CTX_REG(reg_state, CTX_BB_HEAD_U,
 		       RING_BBADDR_UDW(engine->mmio_base), 0);
 	ASSIGN_CTX_REG(reg_state, CTX_BB_HEAD_L,
@@ -2418,7 +2418,7 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
 	struct drm_i915_gem_object *ctx_obj;
 	struct intel_context *ce = &ctx->engine[engine->id];
 	uint32_t context_size;
-	struct intel_ringbuffer *ring;
+	struct intel_ring *ring;
 	int ret;
 
 	WARN_ON(ce->state);
@@ -2434,7 +2434,7 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
 		return PTR_ERR(ctx_obj);
 	}
 
-	ring = intel_engine_create_ringbuffer(engine, 4 * PAGE_SIZE);
+	ring = intel_engine_create_ring(engine, 4 * PAGE_SIZE);
 	if (IS_ERR(ring)) {
 		ret = PTR_ERR(ring);
 		goto error_deref_obj;
@@ -2453,7 +2453,7 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
 	return 0;
 
 error_ringbuf:
-	intel_ringbuffer_free(ring);
+	intel_ring_free(ring);
 error_deref_obj:
 	i915_gem_object_put(ctx_obj);
 	ce->ring = NULL;
diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index 4b44bbcfd7cd..9ebbfca628ac 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -231,7 +231,7 @@ int intel_mocs_init_engine(struct intel_engine_cs *engine)
 static int emit_mocs_control_table(struct drm_i915_gem_request *req,
 				   const struct drm_i915_mocs_table *table)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	enum intel_engine_id engine = req->engine->id;
 	unsigned int index;
 	int ret;
@@ -291,7 +291,7 @@ static inline u32 l3cc_combine(const struct drm_i915_mocs_table *table,
 static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
 				const struct drm_i915_mocs_table *table)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	unsigned int i;
 	int ret;
 
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index f9c062fea39f..fe9da60d806e 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -235,7 +235,7 @@ static int intel_overlay_on(struct intel_overlay *overlay)
 	struct drm_i915_private *dev_priv = overlay->i915;
 	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	struct drm_i915_gem_request *req;
-	struct intel_ringbuffer *ring;
+	struct intel_ring *ring;
 	int ret;
 
 	WARN_ON(overlay->active);
@@ -270,7 +270,7 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
 	struct drm_i915_private *dev_priv = overlay->i915;
 	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	struct drm_i915_gem_request *req;
-	struct intel_ringbuffer *ring;
+	struct intel_ring *ring;
 	u32 flip_addr = overlay->flip_addr;
 	u32 tmp;
 	int ret;
@@ -340,7 +340,7 @@ static int intel_overlay_off(struct intel_overlay *overlay)
 	struct drm_i915_private *dev_priv = overlay->i915;
 	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	struct drm_i915_gem_request *req;
-	struct intel_ringbuffer *ring;
+	struct intel_ring *ring;
 	u32 flip_addr = overlay->flip_addr;
 	int ret;
 
@@ -426,7 +426,7 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 	if (I915_READ(ISR) & I915_OVERLAY_PLANE_FLIP_PENDING_INTERRUPT) {
 		/* synchronous slowpath */
 		struct drm_i915_gem_request *req;
-		struct intel_ringbuffer *ring;
+		struct intel_ring *ring;
 
 		req = i915_gem_request_alloc(engine, NULL);
 		if (IS_ERR(req))
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index ab498ecce1ca..942711cd5495 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -47,7 +47,7 @@ int __intel_ring_space(int head, int tail, int size)
 	return space - I915_RING_FREE_SPACE;
 }
 
-void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
+void intel_ring_update_space(struct intel_ring *ringbuf)
 {
 	if (ringbuf->last_retired_head != -1) {
 		ringbuf->head = ringbuf->last_retired_head;
@@ -60,9 +60,9 @@ void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
 
 static void __intel_engine_submit(struct intel_engine_cs *engine)
 {
-	struct intel_ringbuffer *ringbuf = engine->buffer;
-	ringbuf->tail &= ringbuf->size - 1;
-	engine->write_tail(engine, ringbuf->tail);
+	struct intel_ring *ring = engine->buffer;
+	ring->tail &= ring->size - 1;
+	engine->write_tail(engine, ring->tail);
 }
 
 static int
@@ -70,7 +70,7 @@ gen2_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32	invalidate_domains,
 		       u32	flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	u32 cmd;
 	int ret;
 
@@ -97,7 +97,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32	invalidate_domains,
 		       u32	flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	u32 cmd;
 	int ret;
 
@@ -187,7 +187,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
 static int
 intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	u32 scratch_addr =
 	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	int ret;
@@ -224,7 +224,7 @@ static int
 gen6_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32 invalidate_domains, u32 flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	u32 scratch_addr =
 	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	u32 flags = 0;
@@ -277,7 +277,7 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
 static int
 gen7_render_ring_cs_stall_wa(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 4);
@@ -299,7 +299,7 @@ static int
 gen7_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32 invalidate_domains, u32 flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	u32 scratch_addr =
 	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
 	u32 flags = 0;
@@ -364,7 +364,7 @@ static int
 gen8_emit_pipe_control(struct drm_i915_gem_request *req,
 		       u32 flags, u32 scratch_addr)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 6);
@@ -427,7 +427,7 @@ static void ring_write_tail(struct intel_engine_cs *engine,
 	I915_WRITE_TAIL(engine, value);
 }
 
-u64 intel_ring_get_active_head(struct intel_engine_cs *engine)
+u64 intel_engine_get_active_head(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv = engine->i915;
 	u64 acthd;
@@ -553,8 +553,8 @@ void intel_engine_init_hangcheck(struct intel_engine_cs *engine)
 static int init_ring_common(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv = engine->i915;
-	struct intel_ringbuffer *ringbuf = engine->buffer;
-	struct drm_i915_gem_object *obj = ringbuf->obj;
+	struct intel_ring *ring = engine->buffer;
+	struct drm_i915_gem_object *obj = ring->obj;
 	int ret = 0;
 
 	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
@@ -604,7 +604,7 @@ static int init_ring_common(struct intel_engine_cs *engine)
 	(void)I915_READ_HEAD(engine);
 
 	I915_WRITE_CTL(engine,
-			((ringbuf->size - PAGE_SIZE) & RING_NR_PAGES)
+			((ring->size - PAGE_SIZE) & RING_NR_PAGES)
 			| RING_VALID);
 
 	/* If the head is still not zero, the ring is dead */
@@ -623,10 +623,10 @@ static int init_ring_common(struct intel_engine_cs *engine)
 		goto out;
 	}
 
-	ringbuf->last_retired_head = -1;
-	ringbuf->head = I915_READ_HEAD(engine);
-	ringbuf->tail = I915_READ_TAIL(engine) & TAIL_ADDR;
-	intel_ring_update_space(ringbuf);
+	ring->last_retired_head = -1;
+	ring->head = I915_READ_HEAD(engine);
+	ring->tail = I915_READ_TAIL(engine) & TAIL_ADDR;
+	intel_ring_update_space(ring);
 
 	intel_engine_init_hangcheck(engine);
 
@@ -680,7 +680,7 @@ err:
 
 static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	struct i915_workarounds *w = &req->i915->workarounds;
 	int ret, i;
 
@@ -688,7 +688,7 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
 		return 0;
 
 	req->engine->gpu_caches_dirty = true;
-	ret = intel_ring_flush_all_caches(req);
+	ret = intel_engine_flush_all_caches(req);
 	if (ret)
 		return ret;
 
@@ -706,7 +706,7 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
 	intel_ring_advance(ring);
 
 	req->engine->gpu_caches_dirty = true;
-	ret = intel_ring_flush_all_caches(req);
+	ret = intel_engine_flush_all_caches(req);
 	if (ret)
 		return ret;
 
@@ -1242,7 +1242,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
 			   unsigned int num_dwords)
 {
 #define MBOX_UPDATE_DWORDS 8
-	struct intel_ringbuffer *signaller = signaller_req->ring;
+	struct intel_ring *signaller = signaller_req->ring;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *waiter;
 	enum intel_engine_id id;
@@ -1282,7 +1282,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 			   unsigned int num_dwords)
 {
 #define MBOX_UPDATE_DWORDS 6
-	struct intel_ringbuffer *signaller = signaller_req->ring;
+	struct intel_ring *signaller = signaller_req->ring;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *waiter;
 	enum intel_engine_id id;
@@ -1319,7 +1319,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 		       unsigned int num_dwords)
 {
-	struct intel_ringbuffer *signaller = signaller_req->ring;
+	struct intel_ring *signaller = signaller_req->ring;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *useless;
 	enum intel_engine_id id;
@@ -1363,7 +1363,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 static int
 gen6_add_request(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	if (req->engine->semaphore.signal)
@@ -1387,7 +1387,7 @@ static int
 gen8_render_add_request(struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	if (engine->semaphore.signal)
@@ -1432,7 +1432,7 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
 	       struct intel_engine_cs *signaller,
 	       u32 seqno)
 {
-	struct intel_ringbuffer *waiter = waiter_req->ring;
+	struct intel_ring *waiter = waiter_req->ring;
 	struct drm_i915_private *dev_priv = waiter_req->i915;
 	struct i915_hw_ppgtt *ppgtt;
 	int ret;
@@ -1469,7 +1469,7 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
 	       struct intel_engine_cs *signaller,
 	       u32 seqno)
 {
-	struct intel_ringbuffer *waiter = waiter_req->ring;
+	struct intel_ring *waiter = waiter_req->ring;
 	u32 dw1 = MI_SEMAPHORE_MBOX |
 		  MI_SEMAPHORE_COMPARE |
 		  MI_SEMAPHORE_REGISTER;
@@ -1603,7 +1603,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
 	       u32     invalidate_domains,
 	       u32     flush_domains)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -1619,7 +1619,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
 static int
 i9xx_add_request(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 4);
@@ -1697,7 +1697,7 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			 u64 offset, u32 length,
 			 unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -1724,7 +1724,7 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			 u64 offset, u32 len,
 			 unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	u32 cs_offset = req->engine->scratch.gtt_offset;
 	int ret;
 
@@ -1786,7 +1786,7 @@ i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			 u64 offset, u32 len,
 			 unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -1894,7 +1894,7 @@ static int init_phys_status_page(struct intel_engine_cs *engine)
 	return 0;
 }
 
-void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
+void intel_unpin_ring(struct intel_ring *ringbuf)
 {
 	GEM_BUG_ON(ringbuf->vma == NULL);
 	GEM_BUG_ON(ringbuf->vaddr == NULL);
@@ -1909,8 +1909,8 @@ void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
 	ringbuf->vma = NULL;
 }
 
-int intel_pin_and_map_ringbuffer_obj(struct drm_i915_private *dev_priv,
-				     struct intel_ringbuffer *ringbuf)
+int intel_pin_and_map_ring(struct drm_i915_private *dev_priv,
+			   struct intel_ring *ringbuf)
 {
 	struct drm_i915_gem_object *obj = ringbuf->obj;
 	/* Ring wraparound at offset 0 sometimes hangs. No idea why. */
@@ -1961,14 +1961,14 @@ err_unpin:
 	return ret;
 }
 
-static void intel_destroy_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
+static void intel_destroy_ringbuffer_obj(struct intel_ring *ringbuf)
 {
 	i915_gem_object_put(ringbuf->obj);
 	ringbuf->obj = NULL;
 }
 
 static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
-				      struct intel_ringbuffer *ringbuf)
+				      struct intel_ring *ringbuf)
 {
 	struct drm_i915_gem_object *obj;
 
@@ -1988,10 +1988,10 @@ static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
 	return 0;
 }
 
-struct intel_ringbuffer *
-intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size)
+struct intel_ring *
+intel_engine_create_ring(struct intel_engine_cs *engine, int size)
 {
-	struct intel_ringbuffer *ring;
+	struct intel_ring *ring;
 	int ret;
 
 	ring = kzalloc(sizeof(*ring), GFP_KERNEL);
@@ -2029,7 +2029,7 @@ intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size)
 }
 
 void
-intel_ringbuffer_free(struct intel_ringbuffer *ring)
+intel_ring_free(struct intel_ring *ring)
 {
 	intel_destroy_ringbuffer_obj(ring);
 	list_del(&ring->link);
@@ -2091,7 +2091,7 @@ static int intel_init_engine(struct drm_device *dev,
 			     struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct intel_ringbuffer *ringbuf;
+	struct intel_ring *ringbuf;
 	int ret;
 
 	WARN_ON(engine->buffer);
@@ -2119,7 +2119,7 @@ static int intel_init_engine(struct drm_device *dev,
 	if (ret)
 		goto error;
 
-	ringbuf = intel_engine_create_ringbuffer(engine, 32 * PAGE_SIZE);
+	ringbuf = intel_engine_create_ring(engine, 32 * PAGE_SIZE);
 	if (IS_ERR(ringbuf)) {
 		ret = PTR_ERR(ringbuf);
 		goto error;
@@ -2137,7 +2137,7 @@ static int intel_init_engine(struct drm_device *dev,
 			goto error;
 	}
 
-	ret = intel_pin_and_map_ringbuffer_obj(dev_priv, ringbuf);
+	ret = intel_pin_and_map_ring(dev_priv, ringbuf);
 	if (ret) {
 		DRM_ERROR("Failed to pin and map ringbuffer %s: %d\n",
 				engine->name, ret);
@@ -2152,11 +2152,11 @@ static int intel_init_engine(struct drm_device *dev,
 	return 0;
 
 error:
-	intel_cleanup_engine(engine);
+	intel_engine_cleanup(engine);
 	return ret;
 }
 
-void intel_cleanup_engine(struct intel_engine_cs *engine)
+void intel_engine_cleanup(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv;
 
@@ -2166,11 +2166,11 @@ void intel_cleanup_engine(struct intel_engine_cs *engine)
 	dev_priv = engine->i915;
 
 	if (engine->buffer) {
-		intel_stop_engine(engine);
+		intel_engine_stop(engine);
 		WARN_ON(!IS_GEN2(dev_priv) && (I915_READ_MODE(engine) & MODE_IDLE) == 0);
 
-		intel_unpin_ringbuffer_obj(engine->buffer);
-		intel_ringbuffer_free(engine->buffer);
+		intel_unpin_ring(engine->buffer);
+		intel_ring_free(engine->buffer);
 		engine->buffer = NULL;
 	}
 
@@ -2233,7 +2233,7 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 
 static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	struct intel_engine_cs *engine = req->engine;
 	struct drm_i915_gem_request *target;
 
@@ -2278,7 +2278,7 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
 
 int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int remain_actual = ring->size - ring->tail;
 	int remain_usable = ring->effective_size - ring->tail;
 	int bytes = num_dwords * sizeof(u32);
@@ -2335,7 +2335,7 @@ int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
 /* Align the ring tail to a cacheline boundary */
 int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int num_dwords =
 	       	(ring->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
 	int ret;
@@ -2356,7 +2356,7 @@ int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
 	return 0;
 }
 
-void intel_ring_init_seqno(struct intel_engine_cs *engine, u32 seqno)
+void intel_engine_init_seqno(struct intel_engine_cs *engine, u32 seqno)
 {
 	struct drm_i915_private *dev_priv = engine->i915;
 
@@ -2429,7 +2429,7 @@ static void gen6_bsd_ring_write_tail(struct intel_engine_cs *engine,
 static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
 			       u32 invalidate, u32 flush)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	uint32_t cmd;
 	int ret;
 
@@ -2475,7 +2475,7 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			      u64 offset, u32 len,
 			      unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	bool ppgtt = USES_PPGTT(req->i915) &&
 			!(dispatch_flags & I915_DISPATCH_SECURE);
 	int ret;
@@ -2501,7 +2501,7 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			     u64 offset, u32 len,
 			     unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -2526,7 +2526,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 			      u64 offset, u32 len,
 			      unsigned dispatch_flags)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	int ret;
 
 	ret = intel_ring_begin(req, 2);
@@ -2549,7 +2549,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 static int gen6_ring_flush(struct drm_i915_gem_request *req,
 			   u32 invalidate, u32 flush)
 {
-	struct intel_ringbuffer *ring = req->ring;
+	struct intel_ring *ring = req->ring;
 	uint32_t cmd;
 	int ret;
 
@@ -2942,7 +2942,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 }
 
 int
-intel_ring_flush_all_caches(struct drm_i915_gem_request *req)
+intel_engine_flush_all_caches(struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
 	int ret;
@@ -2961,7 +2961,7 @@ intel_ring_flush_all_caches(struct drm_i915_gem_request *req)
 }
 
 int
-intel_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
+intel_engine_invalidate_all_caches(struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
 	uint32_t flush_domains;
@@ -2981,8 +2981,7 @@ intel_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
 	return 0;
 }
 
-void
-intel_stop_engine(struct intel_engine_cs *engine)
+void intel_engine_stop(struct intel_engine_cs *engine)
 {
 	int ret;
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 3a4ed97b563f..de0bc66af401 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -74,7 +74,7 @@ struct  intel_hw_status_page {
 	(e)->semaphore.signal_ggtt[(e)->id] = MI_SEMAPHORE_SYNC_INVALID; \
 	} while(0)
 
-enum intel_ring_hangcheck_action {
+enum intel_engine_hangcheck_action {
 	HANGCHECK_IDLE = 0,
 	HANGCHECK_WAIT,
 	HANGCHECK_ACTIVE,
@@ -84,17 +84,17 @@ enum intel_ring_hangcheck_action {
 
 #define HANGCHECK_SCORE_RING_HUNG 31
 
-struct intel_ring_hangcheck {
+struct intel_engine_hangcheck {
 	u64 acthd;
 	u32 seqno;
 	unsigned user_interrupts;
 	int score;
-	enum intel_ring_hangcheck_action action;
+	enum intel_engine_hangcheck_action action;
 	int deadlock;
 	u32 instdone[I915_NUM_INSTDONE_REG];
 };
 
-struct intel_ringbuffer {
+struct intel_ring {
 	struct drm_i915_gem_object *obj;
 	void *vaddr;
 	struct i915_vma *vma;
@@ -160,7 +160,7 @@ struct intel_engine_cs {
 	unsigned int guc_id; /* XXX same as hw_id? */
 	unsigned fence_context;
 	u32		mmio_base;
-	struct intel_ringbuffer *buffer;
+	struct intel_ring *buffer;
 	struct list_head buffers;
 
 	/* Rather than have every client wait upon all user interrupts,
@@ -338,7 +338,7 @@ struct intel_engine_cs {
 
 	struct i915_gem_context *last_context;
 
-	struct intel_ring_hangcheck hangcheck;
+	struct intel_engine_hangcheck hangcheck;
 
 	struct {
 		struct drm_i915_gem_object *obj;
@@ -385,8 +385,8 @@ intel_engine_flag(const struct intel_engine_cs *engine)
 }
 
 static inline u32
-intel_ring_sync_index(struct intel_engine_cs *engine,
-		      struct intel_engine_cs *other)
+intel_engine_sync_index(struct intel_engine_cs *engine,
+			struct intel_engine_cs *other)
 {
 	int idx;
 
@@ -448,41 +448,40 @@ intel_write_status_page(struct intel_engine_cs *engine,
 #define I915_GEM_HWS_SCRATCH_INDEX	0x40
 #define I915_GEM_HWS_SCRATCH_ADDR (I915_GEM_HWS_SCRATCH_INDEX << MI_STORE_DWORD_INDEX_SHIFT)
 
-struct intel_ringbuffer *
-intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size);
-int intel_pin_and_map_ringbuffer_obj(struct drm_i915_private *dev_priv,
-				     struct intel_ringbuffer *ringbuf);
-void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf);
-void intel_ringbuffer_free(struct intel_ringbuffer *ring);
+struct intel_ring *
+intel_engine_create_ring(struct intel_engine_cs *engine, int size);
+int intel_pin_and_map_ring(struct drm_i915_private *dev_priv,
+			   struct intel_ring *ring);
+void intel_unpin_ring(struct intel_ring *ring);
+void intel_ring_free(struct intel_ring *ring);
 
-void intel_stop_engine(struct intel_engine_cs *engine);
-void intel_cleanup_engine(struct intel_engine_cs *engine);
+void intel_engine_stop(struct intel_engine_cs *engine);
+void intel_engine_cleanup(struct intel_engine_cs *engine);
 
 int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request);
 
 int __must_check intel_ring_begin(struct drm_i915_gem_request *req, int n);
 int __must_check intel_ring_cacheline_align(struct drm_i915_gem_request *req);
-static inline void intel_ring_emit(struct intel_ringbuffer *ring, u32 data)
+static inline void intel_ring_emit(struct intel_ring *ring, u32 data)
 {
 	*(uint32_t *)(ring->vaddr + ring->tail) = data;
 	ring->tail += 4;
 }
-static inline void intel_ring_emit_reg(struct intel_ringbuffer *ring,
-				       i915_reg_t reg)
+static inline void intel_ring_emit_reg(struct intel_ring *ring, i915_reg_t reg)
 {
 	intel_ring_emit(ring, i915_mmio_reg_offset(reg));
 }
-static inline void intel_ring_advance(struct intel_ringbuffer *ring)
+static inline void intel_ring_advance(struct intel_ring *ring)
 {
 	ring->tail &= ring->size - 1;
 }
 int __intel_ring_space(int head, int tail, int size);
-void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
+void intel_ring_update_space(struct intel_ring *ringbuf);
 
 int __must_check intel_engine_idle(struct intel_engine_cs *engine);
-void intel_ring_init_seqno(struct intel_engine_cs *engine, u32 seqno);
-int intel_ring_flush_all_caches(struct drm_i915_gem_request *req);
-int intel_ring_invalidate_all_caches(struct drm_i915_gem_request *req);
+void intel_engine_init_seqno(struct intel_engine_cs *engine, u32 seqno);
+int intel_engine_flush_all_caches(struct drm_i915_gem_request *req);
+int intel_engine_invalidate_all_caches(struct drm_i915_gem_request *req);
 
 int intel_init_pipe_control(struct intel_engine_cs *engine, int size);
 void intel_fini_pipe_control(struct intel_engine_cs *engine);
@@ -493,7 +492,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev);
 int intel_init_blt_ring_buffer(struct drm_device *dev);
 int intel_init_vebox_ring_buffer(struct drm_device *dev);
 
-u64 intel_ring_get_active_head(struct intel_engine_cs *engine);
+u64 intel_engine_get_active_head(struct intel_engine_cs *engine);
 static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine)
 {
 	return intel_read_status_page(engine, I915_GEM_HWS_INDEX);
@@ -501,7 +500,7 @@ static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine)
 
 int init_workarounds_ring(struct intel_engine_cs *engine);
 
-static inline u32 intel_ring_get_tail(struct intel_ringbuffer *ringbuf)
+static inline u32 intel_ring_get_tail(struct intel_ring *ringbuf)
 {
 	return ringbuf->tail;
 }
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 31/62] drm/i915: Rename residual ringbuf parameters
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (29 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 30/62] drm/i915: Rename struct intel_ringbuffer to struct intel_ring Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 32/62] drm/i915: Rename intel_pin_and_map_ring() Chris Wilson
                   ` (32 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Now that we have a clear ring/engine split and a struct intel_ring, we
no longer need the stopgap ringbuf names.
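
(For reference, a rough sketch of the split, with most fields elided:
struct intel_ring is just the command buffer, struct intel_engine_cs
the hardware engine consuming it.)

        struct intel_ring {                     /* the command stream */
                struct drm_i915_gem_object *obj;
                void *vaddr;
                u32 head, tail, size;
        };

        struct intel_engine_cs {                /* the hw engine */
                const char *name;
                u32 mmio_base;
                struct intel_ring *buffer;      /* its legacy ring */
        };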

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_ringbuffer.c | 66 ++++++++++++++++-----------------
 drivers/gpu/drm/i915/intel_ringbuffer.h |  6 +--
 2 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 942711cd5495..d643698da830 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -47,15 +47,15 @@ int __intel_ring_space(int head, int tail, int size)
 	return space - I915_RING_FREE_SPACE;
 }
 
-void intel_ring_update_space(struct intel_ring *ringbuf)
+void intel_ring_update_space(struct intel_ring *ring)
 {
-	if (ringbuf->last_retired_head != -1) {
-		ringbuf->head = ringbuf->last_retired_head;
-		ringbuf->last_retired_head = -1;
+	if (ring->last_retired_head != -1) {
+		ring->head = ring->last_retired_head;
+		ring->last_retired_head = -1;
 	}
 
-	ringbuf->space = __intel_ring_space(ringbuf->head & HEAD_ADDR,
-					    ringbuf->tail, ringbuf->size);
+	ring->space = __intel_ring_space(ring->head & HEAD_ADDR,
+					 ring->tail, ring->size);
 }
 
 static void __intel_engine_submit(struct intel_engine_cs *engine)
@@ -1894,25 +1894,25 @@ static int init_phys_status_page(struct intel_engine_cs *engine)
 	return 0;
 }
 
-void intel_unpin_ring(struct intel_ring *ringbuf)
+void intel_unpin_ring(struct intel_ring *ring)
 {
-	GEM_BUG_ON(ringbuf->vma == NULL);
-	GEM_BUG_ON(ringbuf->vaddr == NULL);
+	GEM_BUG_ON(ring->vma == NULL);
+	GEM_BUG_ON(ring->vaddr == NULL);
 
-	if (HAS_LLC(ringbuf->obj->base.dev) && !ringbuf->obj->stolen)
-		i915_gem_object_unpin_map(ringbuf->obj);
+	if (HAS_LLC(ring->obj->base.dev) && !ring->obj->stolen)
+		i915_gem_object_unpin_map(ring->obj);
 	else
-		i915_vma_unpin_iomap(ringbuf->vma);
-	ringbuf->vaddr = NULL;
+		i915_vma_unpin_iomap(ring->vma);
+	ring->vaddr = NULL;
 
-	i915_gem_object_ggtt_unpin(ringbuf->obj);
-	ringbuf->vma = NULL;
+	i915_gem_object_ggtt_unpin(ring->obj);
+	ring->vma = NULL;
 }
 
 int intel_pin_and_map_ring(struct drm_i915_private *dev_priv,
-			   struct intel_ring *ringbuf)
+			   struct intel_ring *ring)
 {
-	struct drm_i915_gem_object *obj = ringbuf->obj;
+	struct drm_i915_gem_object *obj = ring->obj;
 	/* Ring wraparound at offset 0 sometimes hangs. No idea why. */
 	unsigned flags = PIN_OFFSET_BIAS | 4096;
 	void *addr;
@@ -1952,8 +1952,8 @@ int intel_pin_and_map_ring(struct drm_i915_private *dev_priv,
 		}
 	}
 
-	ringbuf->vaddr = addr;
-	ringbuf->vma = i915_gem_obj_to_ggtt(obj);
+	ring->vaddr = addr;
+	ring->vma = i915_gem_obj_to_ggtt(obj);
 	return 0;
 
 err_unpin:
@@ -1961,29 +1961,29 @@ err_unpin:
 	return ret;
 }
 
-static void intel_destroy_ringbuffer_obj(struct intel_ring *ringbuf)
+static void intel_destroy_ringbuffer_obj(struct intel_ring *ring)
 {
-	i915_gem_object_put(ringbuf->obj);
-	ringbuf->obj = NULL;
+	i915_gem_object_put(ring->obj);
+	ring->obj = NULL;
 }
 
 static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
-				      struct intel_ring *ringbuf)
+				      struct intel_ring *ring)
 {
 	struct drm_i915_gem_object *obj;
 
 	obj = NULL;
 	if (!HAS_LLC(dev))
-		obj = i915_gem_object_create_stolen(dev, ringbuf->size);
+		obj = i915_gem_object_create_stolen(dev, ring->size);
 	if (obj == NULL)
-		obj = i915_gem_object_create(dev, ringbuf->size);
+		obj = i915_gem_object_create(dev, ring->size);
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
 	/* mark ring buffers as read-only from GPU side by default */
 	obj->gt_ro = 1;
 
-	ringbuf->obj = obj;
+	ring->obj = obj;
 
 	return 0;
 }
@@ -2091,7 +2091,7 @@ static int intel_init_engine(struct drm_device *dev,
 			     struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct intel_ring *ringbuf;
+	struct intel_ring *ring;
 	int ret;
 
 	WARN_ON(engine->buffer);
@@ -2119,12 +2119,12 @@ static int intel_init_engine(struct drm_device *dev,
 	if (ret)
 		goto error;
 
-	ringbuf = intel_engine_create_ring(engine, 32 * PAGE_SIZE);
-	if (IS_ERR(ringbuf)) {
-		ret = PTR_ERR(ringbuf);
+	ring = intel_engine_create_ring(engine, 32 * PAGE_SIZE);
+	if (IS_ERR(ring)) {
+		ret = PTR_ERR(ring);
 		goto error;
 	}
-	engine->buffer = ringbuf;
+	engine->buffer = ring;
 
 	if (I915_NEED_GFX_HWS(dev_priv)) {
 		ret = init_status_page(engine);
@@ -2137,11 +2137,11 @@ static int intel_init_engine(struct drm_device *dev,
 			goto error;
 	}
 
-	ret = intel_pin_and_map_ring(dev_priv, ringbuf);
+	ret = intel_pin_and_map_ring(dev_priv, ring);
 	if (ret) {
 		DRM_ERROR("Failed to pin and map ringbuffer %s: %d\n",
 				engine->name, ret);
-		intel_destroy_ringbuffer_obj(ringbuf);
+		intel_destroy_ringbuffer_obj(ring);
 		goto error;
 	}
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index de0bc66af401..ef0133188a65 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -476,7 +476,7 @@ static inline void intel_ring_advance(struct intel_ring *ring)
 	ring->tail &= ring->size - 1;
 }
 int __intel_ring_space(int head, int tail, int size);
-void intel_ring_update_space(struct intel_ring *ringbuf);
+void intel_ring_update_space(struct intel_ring *ring);
 
 int __must_check intel_engine_idle(struct intel_engine_cs *engine);
 void intel_engine_init_seqno(struct intel_engine_cs *engine, u32 seqno);
@@ -500,9 +500,9 @@ static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine)
 
 int init_workarounds_ring(struct intel_engine_cs *engine);
 
-static inline u32 intel_ring_get_tail(struct intel_ring *ringbuf)
+static inline u32 intel_ring_get_tail(struct intel_ring *ring)
 {
-	return ringbuf->tail;
+	return ring->tail;
 }
 
 /*
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 32/62] drm/i915: Rename intel_pin_and_map_ring()
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (30 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 31/62] drm/i915: Rename residual ringbuf parameters Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 33/62] drm/i915: Remove obsolete engine->gpu_caches_dirty Chris Wilson
                   ` (31 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

For more consistent OOP naming, we want functions of the form
intel_ring_<verb>(), so pick intel_ring_pin() and intel_ring_unpin().
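
For illustration, a call site before and after (as in the lrc hunk
below):

        /* before: obj-centric name, needs dev_priv passed in */
        ret = intel_pin_and_map_ring(dev_priv, ce->ring);

        /* after: the ring can reach its engine (and so its i915),
         * leaving only the object and the verb
         */
        ret = intel_ring_pin(ce->ring);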

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_lrc.c        |  4 ++--
 drivers/gpu/drm/i915/intel_ringbuffer.c | 38 ++++++++++++++++-----------------
 drivers/gpu/drm/i915/intel_ringbuffer.h |  5 ++---
 3 files changed, 23 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index fd093efffe85..e8685ce4d2a4 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -954,7 +954,7 @@ static int intel_lr_context_pin(struct i915_gem_context *ctx,
 
 	lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
 
-	ret = intel_pin_and_map_ring(dev_priv, ce->ring);
+	ret = intel_ring_pin(ce->ring);
 	if (ret)
 		goto unpin_map;
 
@@ -992,7 +992,7 @@ void intel_lr_context_unpin(struct i915_gem_context *ctx,
 	if (--ce->pin_count)
 		return;
 
-	intel_unpin_ring(ce->ring);
+	intel_ring_unpin(ce->ring);
 
 	i915_gem_object_unpin_map(ce->state);
 	i915_gem_object_ggtt_unpin(ce->state);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index d643698da830..07c2470c24f9 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1894,24 +1894,9 @@ static int init_phys_status_page(struct intel_engine_cs *engine)
 	return 0;
 }
 
-void intel_unpin_ring(struct intel_ring *ring)
-{
-	GEM_BUG_ON(ring->vma == NULL);
-	GEM_BUG_ON(ring->vaddr == NULL);
-
-	if (HAS_LLC(ring->obj->base.dev) && !ring->obj->stolen)
-		i915_gem_object_unpin_map(ring->obj);
-	else
-		i915_vma_unpin_iomap(ring->vma);
-	ring->vaddr = NULL;
-
-	i915_gem_object_ggtt_unpin(ring->obj);
-	ring->vma = NULL;
-}
-
-int intel_pin_and_map_ring(struct drm_i915_private *dev_priv,
-			   struct intel_ring *ring)
+int intel_ring_pin(struct intel_ring *ring)
 {
+	struct drm_i915_private *dev_priv = ring->engine->i915;
 	struct drm_i915_gem_object *obj = ring->obj;
 	/* Ring wraparound at offset 0 sometimes hangs. No idea why. */
 	unsigned flags = PIN_OFFSET_BIAS | 4096;
@@ -1961,6 +1946,21 @@ err_unpin:
 	return ret;
 }
 
+void intel_ring_unpin(struct intel_ring *ring)
+{
+	GEM_BUG_ON(ring->vma == NULL);
+	GEM_BUG_ON(ring->vaddr == NULL);
+
+	if (HAS_LLC(ring->engine->i915) && !ring->obj->stolen)
+		i915_gem_object_unpin_map(ring->obj);
+	else
+		i915_vma_unpin_iomap(ring->vma);
+	ring->vaddr = NULL;
+
+	i915_gem_object_ggtt_unpin(ring->obj);
+	ring->vma = NULL;
+}
+
 static void intel_destroy_ringbuffer_obj(struct intel_ring *ring)
 {
 	i915_gem_object_put(ring->obj);
@@ -2137,7 +2137,7 @@ static int intel_init_engine(struct drm_device *dev,
 			goto error;
 	}
 
-	ret = intel_pin_and_map_ring(dev_priv, ring);
+	ret = intel_ring_pin(ring);
 	if (ret) {
 		DRM_ERROR("Failed to pin and map ringbuffer %s: %d\n",
 				engine->name, ret);
@@ -2169,7 +2169,7 @@ void intel_engine_cleanup(struct intel_engine_cs *engine)
 		intel_engine_stop(engine);
 		WARN_ON(!IS_GEN2(dev_priv) && (I915_READ_MODE(engine) & MODE_IDLE) == 0);
 
-		intel_unpin_ring(engine->buffer);
+		intel_ring_unpin(engine->buffer);
 		intel_ring_free(engine->buffer);
 		engine->buffer = NULL;
 	}
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index ef0133188a65..5403cc614095 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -450,9 +450,8 @@ intel_write_status_page(struct intel_engine_cs *engine,
 
 struct intel_ring *
 intel_engine_create_ring(struct intel_engine_cs *engine, int size);
-int intel_pin_and_map_ring(struct drm_i915_private *dev_priv,
-			   struct intel_ring *ring);
-void intel_unpin_ring(struct intel_ring *ring);
+int intel_ring_pin(struct intel_ring *ring);
+void intel_ring_unpin(struct intel_ring *ring);
 void intel_ring_free(struct intel_ring *ring);
 
 void intel_engine_stop(struct intel_engine_cs *engine);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 33/62] drm/i915: Remove obsolete engine->gpu_caches_dirty
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (31 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 32/62] drm/i915: Rename intel_pin_and_map_ring() Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:36 ` [PATCH 34/62] drm/i915: Simplify request_alloc by returning the allocated request Chris Wilson
                   ` (30 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

Space for flushing the GPU cache prior to completing the request is
preallocated, so the flush itself cannot fail. That means we no longer
need the engine->gpu_caches_dirty bookkeeping and can call
->emit_flush() at the point of use instead.
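
Every flush_all_caches/invalidate_all_caches call site then collapses
into a direct call on the engine, e.g. (matching the hunks below):

        /* Unconditionally invalidate GPU caches and TLBs before the
         * batch; the space for the final flush was reserved along
         * with the request, so running out of ring space here is
         * not a concern.
         */
        ret = req->engine->emit_flush(req, I915_GEM_GPU_DOMAINS, 0);
        if (ret)
                return ret;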

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_context.c    |  2 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  9 +---
 drivers/gpu/drm/i915/i915_gem_gtt.c        | 18 ++++----
 drivers/gpu/drm/i915/i915_gem_request.c    |  7 ++-
 drivers/gpu/drm/i915/intel_lrc.c           | 47 +++----------------
 drivers/gpu/drm/i915/intel_lrc.h           |  2 -
 drivers/gpu/drm/i915/intel_ringbuffer.c    | 72 +++++++-----------------------
 drivers/gpu/drm/i915/intel_ringbuffer.h    |  7 ---
 8 files changed, 39 insertions(+), 125 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 13b934ab4a8a..9eb6ab9cb610 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -529,7 +529,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
 	 * itlb_before_ctx_switch.
 	 */
 	if (IS_GEN6(dev_priv)) {
-		ret = req->engine->flush(req, I915_GEM_GPU_DOMAINS, 0);
+		ret = req->engine->emit_flush(req, I915_GEM_GPU_DOMAINS, 0);
 		if (ret)
 			return ret;
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 186e466f932f..6e439f5d1674 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -968,10 +968,8 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
 	if (flush_domains & I915_GEM_DOMAIN_GTT)
 		wmb();
 
-	/* Unconditionally invalidate gpu caches and ensure that we do flush
-	 * any residual writes from the previous batch.
-	 */
-	return intel_engine_invalidate_all_caches(req);
+	/* Unconditionally invalidate gpu caches and TLBs. */
+	return req->engine->emit_flush(req, I915_GEM_GPU_DOMAINS, 0);
 }
 
 static bool
@@ -1130,9 +1128,6 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
 static void
 i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
 {
-	/* Unconditionally force add_request to emit a full flush. */
-	params->engine->gpu_caches_dirty = true;
-
 	/* Add a breadcrumb for the completion of the batch buffer */
 	__i915_add_request(params->request, params->batch_obj, true);
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 6a6e69a3894f..5d718c488f23 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1664,9 +1664,9 @@ static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
 	int ret;
 
 	/* NB: TLBs must be flushed and invalidated before a switch */
-	ret = req->engine->flush(req,
-				 I915_GEM_GPU_DOMAINS,
-				 I915_GEM_GPU_DOMAINS);
+	ret = req->engine->emit_flush(req,
+				      I915_GEM_GPU_DOMAINS,
+				      I915_GEM_GPU_DOMAINS);
 	if (ret)
 		return ret;
 
@@ -1703,9 +1703,9 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
 	int ret;
 
 	/* NB: TLBs must be flushed and invalidated before a switch */
-	ret = req->engine->flush(req,
-				 I915_GEM_GPU_DOMAINS,
-				 I915_GEM_GPU_DOMAINS);
+	ret = req->engine->emit_flush(req,
+				      I915_GEM_GPU_DOMAINS,
+				      I915_GEM_GPU_DOMAINS);
 	if (ret)
 		return ret;
 
@@ -1723,9 +1723,9 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
 
 	/* XXX: RCS is the only one to auto invalidate the TLBs? */
 	if (req->engine->id != RCS) {
-		ret = req->engine->flush(req,
-					 I915_GEM_GPU_DOMAINS,
-					 I915_GEM_GPU_DOMAINS);
+		ret = req->engine->emit_flush(req,
+					      I915_GEM_GPU_DOMAINS,
+					      I915_GEM_GPU_DOMAINS);
 		if (ret)
 			return ret;
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 58d84b153810..b0c6e57197bb 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -443,10 +443,9 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 	 * what.
 	 */
 	if (flush_caches) {
-		if (i915.enable_execlists)
-			ret = logical_ring_flush_all_caches(request);
-		else
-			ret = intel_engine_flush_all_caches(request);
+		ret = request->engine->emit_flush(request,
+						  0, I915_GEM_GPU_DOMAINS);
+
 		/* Not allowed to fail! */
 		WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
 	}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index e8685ce4d2a4..55d529e5614d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -620,24 +620,6 @@ static void execlists_context_queue(struct drm_i915_gem_request *request)
 	spin_unlock_bh(&engine->execlist_lock);
 }
 
-static int logical_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
-{
-	struct intel_engine_cs *engine = req->engine;
-	uint32_t flush_domains;
-	int ret;
-
-	flush_domains = 0;
-	if (engine->gpu_caches_dirty)
-		flush_domains = I915_GEM_GPU_DOMAINS;
-
-	ret = engine->emit_flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
-	if (ret)
-		return ret;
-
-	engine->gpu_caches_dirty = false;
-	return 0;
-}
-
 static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
 				 struct list_head *vmas)
 {
@@ -668,7 +650,7 @@ static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
 	/* Unconditionally invalidate gpu caches and ensure that we do flush
 	 * any residual writes from the previous batch.
 	 */
-	return logical_ring_invalidate_all_caches(req);
+	return req->engine->emit_flush(req, I915_GEM_GPU_DOMAINS, 0);
 }
 
 int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
@@ -911,22 +893,6 @@ void intel_logical_ring_stop(struct intel_engine_cs *engine)
 	I915_WRITE_MODE(engine, _MASKED_BIT_DISABLE(STOP_RING));
 }
 
-int logical_ring_flush_all_caches(struct drm_i915_gem_request *req)
-{
-	struct intel_engine_cs *engine = req->engine;
-	int ret;
-
-	if (!engine->gpu_caches_dirty)
-		return 0;
-
-	ret = engine->emit_flush(req, 0, I915_GEM_GPU_DOMAINS);
-	if (ret)
-		return ret;
-
-	engine->gpu_caches_dirty = false;
-	return 0;
-}
-
 static int intel_lr_context_pin(struct i915_gem_context *ctx,
 				struct intel_engine_cs *engine)
 {
@@ -1007,15 +973,15 @@ void intel_lr_context_unpin(struct i915_gem_context *ctx,
 static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
 {
 	int ret, i;
-	struct intel_engine_cs *engine = req->engine;
 	struct intel_ring *ring = req->ring;
 	struct i915_workarounds *w = &req->i915->workarounds;
 
 	if (w->count == 0)
 		return 0;
 
-	engine->gpu_caches_dirty = true;
-	ret = logical_ring_flush_all_caches(req);
+	ret = req->engine->emit_flush(req,
+				      I915_GEM_GPU_DOMAINS,
+				      I915_GEM_GPU_DOMAINS);
 	if (ret)
 		return ret;
 
@@ -1032,8 +998,9 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
 
 	intel_ring_advance(ring);
 
-	engine->gpu_caches_dirty = true;
-	ret = logical_ring_flush_all_caches(req);
+	ret = req->engine->emit_flush(req,
+				      I915_GEM_GPU_DOMAINS,
+				      I915_GEM_GPU_DOMAINS);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index baf90543857a..87db0b6c2e76 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -64,8 +64,6 @@ void intel_logical_ring_stop(struct intel_engine_cs *engine);
 void intel_logical_ring_cleanup(struct intel_engine_cs *engine);
 int intel_logical_rings_init(struct drm_device *dev);
 
-int logical_ring_flush_all_caches(struct drm_i915_gem_request *req);
-
 /* Logical Ring Contexts */
 
 /* One extra page is added before LRC for GuC as shared data */
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 07c2470c24f9..41e1bd9dc61d 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -687,8 +687,9 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
 	if (w->count == 0)
 		return 0;
 
-	req->engine->gpu_caches_dirty = true;
-	ret = intel_engine_flush_all_caches(req);
+	ret = req->engine->emit_flush(req,
+				      I915_GEM_GPU_DOMAINS,
+				      I915_GEM_GPU_DOMAINS);
 	if (ret)
 		return ret;
 
@@ -705,8 +706,9 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
 
 	intel_ring_advance(ring);
 
-	req->engine->gpu_caches_dirty = true;
-	ret = intel_engine_flush_all_caches(req);
+	ret = req->engine->emit_flush(req,
+				      I915_GEM_GPU_DOMAINS,
+				      I915_GEM_GPU_DOMAINS);
 	if (ret)
 		return ret;
 
@@ -2627,7 +2629,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 
 		engine->init_context = intel_rcs_ctx_init;
 		engine->add_request = gen8_render_add_request;
-		engine->flush = gen8_render_ring_flush;
+		engine->emit_flush = gen8_render_ring_flush;
 		engine->irq_enable = gen8_ring_enable_irq;
 		engine->irq_disable = gen8_ring_disable_irq;
 		engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
@@ -2640,9 +2642,9 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	} else if (INTEL_GEN(dev_priv) >= 6) {
 		engine->init_context = intel_rcs_ctx_init;
 		engine->add_request = gen6_add_request;
-		engine->flush = gen7_render_ring_flush;
+		engine->emit_flush = gen7_render_ring_flush;
 		if (IS_GEN6(dev_priv))
-			engine->flush = gen6_render_ring_flush;
+			engine->emit_flush = gen6_render_ring_flush;
 		engine->irq_enable = gen6_ring_enable_irq;
 		engine->irq_disable = gen6_ring_disable_irq;
 		engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
@@ -2670,7 +2672,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 		}
 	} else if (IS_GEN5(dev_priv)) {
 		engine->add_request = i9xx_add_request;
-		engine->flush = gen4_render_ring_flush;
+		engine->emit_flush = gen4_render_ring_flush;
 		engine->irq_enable = gen5_ring_enable_irq;
 		engine->irq_disable = gen5_ring_disable_irq;
 		engine->irq_seqno_barrier = gen5_seqno_barrier;
@@ -2678,9 +2680,9 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	} else {
 		engine->add_request = i9xx_add_request;
 		if (INTEL_GEN(dev_priv) < 4)
-			engine->flush = gen2_render_ring_flush;
+			engine->emit_flush = gen2_render_ring_flush;
 		else
-			engine->flush = gen4_render_ring_flush;
+			engine->emit_flush = gen4_render_ring_flush;
 		if (IS_GEN2(dev_priv)) {
 			engine->irq_enable = i8xx_ring_enable_irq;
 			engine->irq_disable = i8xx_ring_disable_irq;
@@ -2740,7 +2742,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 		/* gen6 bsd needs a special wa for tail updates */
 		if (IS_GEN6(dev_priv))
 			engine->write_tail = gen6_bsd_ring_write_tail;
-		engine->flush = gen6_bsd_ring_flush;
+		engine->emit_flush = gen6_bsd_ring_flush;
 		engine->add_request = gen6_add_request;
 		engine->irq_seqno_barrier = gen6_seqno_barrier;
 		if (INTEL_GEN(dev_priv) >= 8) {
@@ -2778,7 +2780,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 		}
 	} else {
 		engine->mmio_base = BSD_RING_BASE;
-		engine->flush = bsd_ring_flush;
+		engine->emit_flush = bsd_ring_flush;
 		engine->add_request = i9xx_add_request;
 		if (IS_GEN5(dev_priv)) {
 			engine->irq_enable_mask = ILK_BSD_USER_INTERRUPT;
@@ -2812,7 +2814,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
 
 	engine->write_tail = ring_write_tail;
 	engine->mmio_base = GEN8_BSD2_RING_BASE;
-	engine->flush = gen6_bsd_ring_flush;
+	engine->emit_flush = gen6_bsd_ring_flush;
 	engine->add_request = gen6_add_request;
 	engine->irq_seqno_barrier = gen6_seqno_barrier;
 	engine->irq_enable_mask =
@@ -2843,7 +2845,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
 
 	engine->mmio_base = BLT_RING_BASE;
 	engine->write_tail = ring_write_tail;
-	engine->flush = gen6_ring_flush;
+	engine->emit_flush = gen6_ring_flush;
 	engine->add_request = gen6_add_request;
 	engine->irq_seqno_barrier = gen6_seqno_barrier;
 	if (INTEL_GEN(dev_priv) >= 8) {
@@ -2901,7 +2903,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 
 	engine->mmio_base = VEBOX_RING_BASE;
 	engine->write_tail = ring_write_tail;
-	engine->flush = gen6_ring_flush;
+	engine->emit_flush = gen6_ring_flush;
 	engine->add_request = gen6_add_request;
 	engine->irq_seqno_barrier = gen6_seqno_barrier;
 
@@ -2941,46 +2943,6 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 	return intel_init_engine(dev, engine);
 }
 
-int
-intel_engine_flush_all_caches(struct drm_i915_gem_request *req)
-{
-	struct intel_engine_cs *engine = req->engine;
-	int ret;
-
-	if (!engine->gpu_caches_dirty)
-		return 0;
-
-	ret = engine->flush(req, 0, I915_GEM_GPU_DOMAINS);
-	if (ret)
-		return ret;
-
-	trace_i915_gem_ring_flush(req, 0, I915_GEM_GPU_DOMAINS);
-
-	engine->gpu_caches_dirty = false;
-	return 0;
-}
-
-int
-intel_engine_invalidate_all_caches(struct drm_i915_gem_request *req)
-{
-	struct intel_engine_cs *engine = req->engine;
-	uint32_t flush_domains;
-	int ret;
-
-	flush_domains = 0;
-	if (engine->gpu_caches_dirty)
-		flush_domains = I915_GEM_GPU_DOMAINS;
-
-	ret = engine->flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
-	if (ret)
-		return ret;
-
-	trace_i915_gem_ring_flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
-
-	engine->gpu_caches_dirty = false;
-	return 0;
-}
-
 void intel_engine_stop(struct intel_engine_cs *engine)
 {
 	int ret;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 5403cc614095..4817a7fa2154 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -214,9 +214,6 @@ struct intel_engine_cs {
 
 	void		(*write_tail)(struct intel_engine_cs *ring,
 				      u32 value);
-	int __must_check (*flush)(struct drm_i915_gem_request *req,
-				  u32	invalidate_domains,
-				  u32	flush_domains);
 	int		(*add_request)(struct drm_i915_gem_request *req);
 	/* Some chipsets are not quite as coherent as advertised and need
 	 * an expensive kick to force a true read of the up-to-date seqno.
@@ -334,8 +331,6 @@ struct intel_engine_cs {
 	u32 last_submitted_seqno;
 	unsigned user_interrupts;
 
-	bool gpu_caches_dirty;
-
 	struct i915_gem_context *last_context;
 
 	struct intel_engine_hangcheck hangcheck;
@@ -479,8 +474,6 @@ void intel_ring_update_space(struct intel_ring *ring);
 
 int __must_check intel_engine_idle(struct intel_engine_cs *engine);
 void intel_engine_init_seqno(struct intel_engine_cs *engine, u32 seqno);
-int intel_engine_flush_all_caches(struct drm_i915_gem_request *req);
-int intel_engine_invalidate_all_caches(struct drm_i915_gem_request *req);
 
 int intel_init_pipe_control(struct intel_engine_cs *engine, int size);
 void intel_fini_pipe_control(struct intel_engine_cs *engine);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 34/62] drm/i915: Simplify request_alloc by returning the allocated request
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (32 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 33/62] drm/i915: Remove obsolete engine->gpu_caches_dirty Chris Wilson
@ 2016-06-03 16:36 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 35/62] drm/i915: Unify legacy/execlists emission of MI_BATCHBUFFER_START Chris Wilson
                   ` (29 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:36 UTC (permalink / raw)
  To: intel-gfx

It is simpler, and leads to more readable code through the callstack,
if the allocation returns the allocated struct through the return
value.

The importance of this is that it no longer looks like we accidentally
allocate requests as a side-effect of calling certain functions.
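
Callers then follow the plain ERR_PTR() convention (sketch of a call
site, matching the execbuffer hunk below):

        req = i915_gem_request_alloc(engine, ctx);
        if (IS_ERR(req))
                return PTR_ERR(req);

rather than passing in a struct drm_i915_gem_request ** outparam and
decoding a separate error code.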

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h            |  3 +-
 drivers/gpu/drm/i915/i915_gem.c            | 75 ++++++++----------------------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 12 ++---
 drivers/gpu/drm/i915/i915_gem_request.c    | 58 ++++++++---------------
 drivers/gpu/drm/i915/i915_trace.h          | 13 +++---
 drivers/gpu/drm/i915/intel_display.c       | 36 ++++++--------
 drivers/gpu/drm/i915/intel_lrc.c           |  2 +-
 drivers/gpu/drm/i915/intel_overlay.c       | 19 ++++----
 8 files changed, 78 insertions(+), 140 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index fe39cd2584f3..b1e00b42a830 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3067,8 +3067,7 @@ static inline void i915_gem_object_unpin_map(struct drm_i915_gem_object *obj)
 
 int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
 int i915_gem_object_sync(struct drm_i915_gem_object *obj,
-			 struct intel_engine_cs *to,
-			 struct drm_i915_gem_request **to_req);
+			 struct drm_i915_gem_request *to);
 void i915_vma_move_to_active(struct i915_vma *vma,
 			     struct drm_i915_gem_request *req);
 int i915_gem_dumb_create(struct drm_file *file_priv,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 034d81c54d67..de1e866276c5 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2552,51 +2552,35 @@ out:
 
 static int
 __i915_gem_object_sync(struct drm_i915_gem_object *obj,
-		       struct intel_engine_cs *to,
-		       struct drm_i915_gem_request *from_req,
-		       struct drm_i915_gem_request **to_req)
+		       struct drm_i915_gem_request *to,
+		       struct drm_i915_gem_request *from)
 {
-	struct intel_engine_cs *from;
 	int ret;
 
-	from = from_req->engine;
-	if (to == from)
+	if (to->engine == from->engine)
 		return 0;
 
-	if (i915_gem_request_completed(from_req))
+	if (i915_gem_request_completed(from))
 		return 0;
 
 	if (!i915.semaphores) {
-		struct drm_i915_private *i915 = to_i915(obj->base.dev);
-		ret = __i915_wait_request(from_req,
-					  i915->mm.interruptible,
+		ret = __i915_wait_request(from,
+					  from->i915->mm.interruptible,
 					  NULL,
 					  NO_WAITBOOST);
 		if (ret)
 			return ret;
 
-		i915_gem_object_retire_request(obj, from_req);
+		i915_gem_object_retire_request(obj, from);
 	} else {
-		int idx = intel_engine_sync_index(from, to);
-		u32 seqno = i915_gem_request_get_seqno(from_req);
+		int idx = intel_engine_sync_index(from->engine, to->engine);
+		u32 seqno = i915_gem_request_get_seqno(from);
 
-		WARN_ON(!to_req);
-
-		if (seqno <= from->semaphore.sync_seqno[idx])
+		if (seqno <= from->engine->semaphore.sync_seqno[idx])
 			return 0;
 
-		if (*to_req == NULL) {
-			struct drm_i915_gem_request *req;
-
-			req = i915_gem_request_alloc(to, NULL);
-			if (IS_ERR(req))
-				return PTR_ERR(req);
-
-			*to_req = req;
-		}
-
-		trace_i915_gem_ring_sync_to(*to_req, from, from_req);
-		ret = to->semaphore.sync_to(*to_req, from, seqno);
+		trace_i915_gem_ring_sync_to(to, from);
+		ret = to->engine->semaphore.sync_to(to, from->engine, seqno);
 		if (ret)
 			return ret;
 
@@ -2604,8 +2588,8 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
 		 * might have just caused seqno wrap under
 		 * the radar.
 		 */
-		from->semaphore.sync_seqno[idx] =
-			i915_gem_request_get_seqno(obj->last_read_req[from->id]);
+		from->engine->semaphore.sync_seqno[idx] =
+			i915_gem_request_get_seqno(obj->last_read_req[from->engine->id]);
 	}
 
 	return 0;
@@ -2615,17 +2599,12 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
  * i915_gem_object_sync - sync an object to a ring.
  *
  * @obj: object which may be in use on another ring.
- * @to: ring we wish to use the object on. May be NULL.
- * @to_req: request we wish to use the object for. See below.
- *          This will be allocated and returned if a request is
- *          required but not passed in.
+ * @to: request we are wishing to use
  *
  * This code is meant to abstract object synchronization with the GPU.
- * Calling with NULL implies synchronizing the object with the CPU
- * rather than a particular GPU ring. Conceptually we serialise writes
- * between engines inside the GPU. We only allow one engine to write
- * into a buffer at any time, but multiple readers. To ensure each has
- * a coherent view of memory, we must:
+ * Conceptually we serialise writes between engines inside the GPU.
+ * We only allow one engine to write into a buffer at any time, but
+ * multiple readers. To ensure each has a coherent view of memory, we must:
  *
  * - If there is an outstanding write request to the object, the new
  *   request must wait for it to complete (either CPU or in hw, requests
@@ -2634,22 +2613,11 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
  * - If we are a write request (pending_write_domain is set), the new
  *   request must wait for outstanding read requests to complete.
  *
- * For CPU synchronisation (NULL to) no request is required. For syncing with
- * rings to_req must be non-NULL. However, a request does not have to be
- * pre-allocated. If *to_req is NULL and sync commands will be emitted then a
- * request will be allocated automatically and returned through *to_req. Note
- * that it is not guaranteed that commands will be emitted (because the system
- * might already be idle). Hence there is no need to create a request that
- * might never have any work submitted. Note further that if a request is
- * returned in *to_req, it is the responsibility of the caller to submit
- * that request (after potentially adding more work to it).
- *
  * Returns 0 if successful, else propagates up the lower layer error.
  */
 int
 i915_gem_object_sync(struct drm_i915_gem_object *obj,
-		     struct intel_engine_cs *to,
-		     struct drm_i915_gem_request **to_req)
+		     struct drm_i915_gem_request *to)
 {
 	const bool readonly = obj->base.pending_write_domain == 0;
 	struct drm_i915_gem_request *req[I915_NUM_ENGINES];
@@ -2658,9 +2626,6 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
 	if (!obj->active)
 		return 0;
 
-	if (to == NULL)
-		return i915_gem_object_wait_rendering(obj, readonly);
-
 	n = 0;
 	if (readonly) {
 		if (obj->last_write_req)
@@ -2671,7 +2636,7 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
 				req[n++] = obj->last_read_req[i];
 	}
 	for (i = 0; i < n; i++) {
-		ret = __i915_gem_object_sync(obj, to, req[i], to_req);
+		ret = __i915_gem_object_sync(obj, to, req[i]);
 		if (ret)
 			return ret;
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 6e439f5d1674..8751a21cb62a 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -951,7 +951,7 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
 		struct drm_i915_gem_object *obj = vma->obj;
 
 		if (obj->active & other_rings) {
-			ret = i915_gem_object_sync(obj, req->engine, &req);
+			ret = i915_gem_object_sync(obj, req);
 			if (ret)
 				return ret;
 		}
@@ -1413,7 +1413,6 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	struct drm_i915_gem_request *req = NULL;
 	struct eb_vmas *eb;
 	struct drm_i915_gem_object *batch_obj;
 	struct drm_i915_gem_exec_object2 shadow_exec_entry;
@@ -1601,13 +1600,13 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 		params->batch_obj_vm_offset = i915_gem_obj_offset(batch_obj, vm);
 
 	/* Allocate a request for this batch buffer nice and early. */
-	req = i915_gem_request_alloc(engine, ctx);
-	if (IS_ERR(req)) {
-		ret = PTR_ERR(req);
+	params->request = i915_gem_request_alloc(engine, ctx);
+	if (IS_ERR(params->request)) {
+		ret = PTR_ERR(params->request);
 		goto err_batch_unpin;
 	}
 
-	ret = i915_gem_request_add_to_client(req, file);
+	ret = i915_gem_request_add_to_client(params->request, file);
 	if (ret)
 		goto err_request;
 
@@ -1623,7 +1622,6 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	params->dispatch_flags          = dispatch_flags;
 	params->batch_obj               = batch_obj;
 	params->ctx                     = ctx;
-	params->request                 = req;
 
 	ret = dev_priv->gt.execbuf_submit(params, args, &eb->vmas);
 err_request:
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index b0c6e57197bb..06f724ee23dd 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -201,10 +201,21 @@ static int i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
 	return 0;
 }
 
-static inline int
-__i915_gem_request_alloc(struct intel_engine_cs *engine,
-			 struct i915_gem_context *ctx,
-			 struct drm_i915_gem_request **req_out)
+/**
+ * i915_gem_request_alloc - allocate a request structure
+ *
+ * @engine: engine that we wish to issue the request on.
+ * @ctx: context that the request will be associated with.
+ *       This can be NULL if the request is not directly related to
+ *       any specific user context, in which case this function will
+ *       choose an appropriate context to use.
+ *
+ * Returns a pointer to the allocated request if successful,
+ * or an error code if not.
+ */
+struct drm_i915_gem_request *
+i915_gem_request_alloc(struct intel_engine_cs *engine,
+		       struct i915_gem_context *ctx)
 {
 	struct drm_i915_private *dev_priv = engine->i915;
 	unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
@@ -212,22 +223,17 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
 	u32 seqno;
 	int ret;
 
-	if (!req_out)
-		return -EINVAL;
-
-	*req_out = NULL;
-
 	/* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
 	 * EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
 	 * and restart.
 	 */
 	ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
 	if (ret)
-		return ret;
+		return ERR_PTR(ret);
 
 	req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
 	if (req == NULL)
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);
 
 	ret = i915_gem_get_seqno(dev_priv, &seqno);
 	if (ret)
@@ -261,39 +267,13 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
 	if (ret)
 		goto err_ctx;
 
-	*req_out = req;
-	return 0;
+	return req;
 
 err_ctx:
 	i915_gem_context_put(ctx);
 err:
 	kmem_cache_free(dev_priv->requests, req);
-	return ret;
-}
-
-/**
- * i915_gem_request_alloc - allocate a request structure
- *
- * @engine: engine that we wish to issue the request on.
- * @ctx: context that the request will be associated with.
- *       This can be NULL if the request is not directly related to
- *       any specific user context, in which case this function will
- *       choose an appropriate context to use.
- *
- * Returns a pointer to the allocated request if successful,
- * or an error code if not.
- */
-struct drm_i915_gem_request *
-i915_gem_request_alloc(struct intel_engine_cs *engine,
-		       struct i915_gem_context *ctx)
-{
-	struct drm_i915_gem_request *req;
-	int err;
-
-	if (ctx == NULL)
-		ctx = engine->i915->kernel_context;
-	err = __i915_gem_request_alloc(engine, ctx, &req);
-	return err ? ERR_PTR(err) : req;
+	return ERR_PTR(ret);
 }
 
 int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 0296a77b586a..e7b3e6e4f4a4 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -449,10 +449,9 @@ TRACE_EVENT(i915_gem_evict_vm,
 );
 
 TRACE_EVENT(i915_gem_ring_sync_to,
-	    TP_PROTO(struct drm_i915_gem_request *to_req,
-		     struct intel_engine_cs *from,
-		     struct drm_i915_gem_request *req),
-	    TP_ARGS(to_req, from, req),
+	    TP_PROTO(struct drm_i915_gem_request *to,
+		     struct drm_i915_gem_request *from),
+	    TP_ARGS(to, from),
 
 	    TP_STRUCT__entry(
 			     __field(u32, dev)
@@ -463,9 +462,9 @@ TRACE_EVENT(i915_gem_ring_sync_to,
 
 	    TP_fast_assign(
 			   __entry->dev = from->i915->dev->primary->index;
-			   __entry->sync_from = from->id;
-			   __entry->sync_to = to_req->engine->id;
-			   __entry->seqno = req->fence.seqno;
+			   __entry->sync_from = from->engine->id;
+			   __entry->sync_to = to->engine->id;
+			   __entry->seqno = from->fence.seqno;
 			   ),
 
 	    TP_printk("dev=%u, sync-from=%u, sync-to=%u, seqno=%u",
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 63cfd318bcd3..175a553dc6c8 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11631,7 +11631,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 	struct intel_flip_work *work;
 	struct intel_engine_cs *engine;
 	bool mmio_flip;
-	struct drm_i915_gem_request *request = NULL;
+	struct drm_i915_gem_request *request;
 	int ret;
 
 	/*
@@ -11736,22 +11736,6 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 
 	mmio_flip = use_mmio_flip(engine, obj);
 
-	/* When using CS flips, we want to emit semaphores between rings.
-	 * However, when using mmio flips we will create a task to do the
-	 * synchronisation, so all we want here is to pin the framebuffer
-	 * into the display plane and skip any waits.
-	 */
-	if (!mmio_flip) {
-		ret = i915_gem_object_sync(obj, engine, &request);
-		if (!ret && !request) {
-			request = i915_gem_request_alloc(engine, NULL);
-			ret = PTR_ERR_OR_ZERO(request);
-		}
-
-		if (ret)
-			goto cleanup_pending;
-	}
-
 	ret = intel_pin_and_fence_fb_obj(fb, primary->state->rotation);
 	if (ret)
 		goto cleanup_pending;
@@ -11769,14 +11753,24 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 
 		schedule_work(&work->mmio_work);
 	} else {
-		i915_gem_request_assign(&work->flip_queued_req, request);
+		request = i915_gem_request_alloc(engine, engine->last_context);
+		if (IS_ERR(request)) {
+			ret = PTR_ERR(request);
+			goto cleanup_unpin;
+		}
+
+		ret = i915_gem_object_sync(obj, request);
+		if (ret)
+			goto cleanup_request;
+
 		ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request,
 						   page_flip_flags);
 		if (ret)
-			goto cleanup_unpin;
+			goto cleanup_request;
 
 		intel_mark_page_flip_active(intel_crtc, work);
 
+		work->flip_queued_req = i915_gem_request_get(request);
 		i915_add_request_no_flush(request);
 	}
 
@@ -11791,11 +11785,11 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 
 	return 0;
 
+cleanup_request:
+	i915_add_request_no_flush(request);
 cleanup_unpin:
 	intel_unpin_fb_obj(fb, crtc->primary->state->rotation);
 cleanup_pending:
-	if (!IS_ERR_OR_NULL(request))
-		i915_add_request_no_flush(request);
 	atomic_dec(&intel_crtc->unpin_work_count);
 	mutex_unlock(&dev->struct_mutex);
 cleanup:
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 55d529e5614d..3fa2bc5297c1 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -633,7 +633,7 @@ static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
 		struct drm_i915_gem_object *obj = vma->obj;
 
 		if (obj->active & other_rings) {
-			ret = i915_gem_object_sync(obj, req->engine, &req);
+			ret = i915_gem_object_sync(obj, req);
 			if (ret)
 				return ret;
 		}
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index fe9da60d806e..5f645ad2babd 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -229,11 +229,17 @@ static int intel_overlay_do_wait_request(struct intel_overlay *overlay,
 	return 0;
 }
 
+static struct drm_i915_gem_request *alloc_request(struct intel_overlay *overlay)
+{
+	struct drm_i915_private *dev_priv = overlay->i915;
+	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
+	return i915_gem_request_alloc(engine, dev_priv->kernel_context);
+}
+
 /* overlay needs to be disable in OCMD reg */
 static int intel_overlay_on(struct intel_overlay *overlay)
 {
 	struct drm_i915_private *dev_priv = overlay->i915;
-	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	struct drm_i915_gem_request *req;
 	struct intel_ring *ring;
 	int ret;
@@ -241,7 +247,7 @@ static int intel_overlay_on(struct intel_overlay *overlay)
 	WARN_ON(overlay->active);
 	WARN_ON(IS_I830(dev_priv) && !(dev_priv->quirks & QUIRK_PIPEA_FORCE));
 
-	req = i915_gem_request_alloc(engine, NULL);
+	req = alloc_request(overlay);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -268,7 +274,6 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
 				  bool load_polyphase_filter)
 {
 	struct drm_i915_private *dev_priv = overlay->i915;
-	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	struct drm_i915_gem_request *req;
 	struct intel_ring *ring;
 	u32 flip_addr = overlay->flip_addr;
@@ -285,7 +290,7 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
 	if (tmp & (1 << 17))
 		DRM_DEBUG("overlay underrun, DOVSTA: %x\n", tmp);
 
-	req = i915_gem_request_alloc(engine, NULL);
+	req = alloc_request(overlay);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -338,7 +343,6 @@ static void intel_overlay_off_tail(struct intel_overlay *overlay)
 static int intel_overlay_off(struct intel_overlay *overlay)
 {
 	struct drm_i915_private *dev_priv = overlay->i915;
-	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	struct drm_i915_gem_request *req;
 	struct intel_ring *ring;
 	u32 flip_addr = overlay->flip_addr;
@@ -352,7 +356,7 @@ static int intel_overlay_off(struct intel_overlay *overlay)
 	 * of the hw. Do it in both cases */
 	flip_addr |= OFC_UPDATE;
 
-	req = i915_gem_request_alloc(engine, NULL);
+	req = alloc_request(overlay);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -412,7 +416,6 @@ static int intel_overlay_recover_from_interrupt(struct intel_overlay *overlay)
 static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 {
 	struct drm_i915_private *dev_priv = overlay->i915;
-	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
 	int ret;
 
 	lockdep_assert_held(&dev_priv->dev->struct_mutex);
@@ -428,7 +431,7 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 		struct drm_i915_gem_request *req;
 		struct intel_ring *ring;
 
-		req = i915_gem_request_alloc(engine, NULL);
+		req = alloc_request(overlay);
 		if (IS_ERR(req))
 			return PTR_ERR(req);
 
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 35/62] drm/i915: Unify legacy/execlists emission of MI_BATCHBUFFER_START
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (33 preceding siblings ...)
  2016-06-03 16:36 ` [PATCH 34/62] drm/i915: Simplify request_alloc by returning the allocated request Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 36/62] drm/i915: Convert engine->write_tail to operate on a request Chris Wilson
                   ` (28 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Both the ->dispatch_execbuffer and ->emit_bb_start callbacks do exactly
the same thing: they add MI_BATCHBUFFER_START to the request's
ringbuffer. We need only one vfunc.
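
The surviving vfunc carries the batch length on both paths (signature
as used in this patch):

        int (*emit_bb_start)(struct drm_i915_gem_request *req,
                             u64 offset, u32 len,
                             unsigned dispatch_flags);

so legacy submission and execlists reduce to the same

        ret = engine->emit_bb_start(req, exec_start, exec_len,
                                    dispatch_flags);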

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_execbuffer.c   |  6 +--
 drivers/gpu/drm/i915/i915_gem_render_state.c | 16 +++----
 drivers/gpu/drm/i915/intel_lrc.c             |  9 +++-
 drivers/gpu/drm/i915/intel_ringbuffer.c      | 67 +++++++++++++---------------
 drivers/gpu/drm/i915/intel_ringbuffer.h      | 12 +++--
 5 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 8751a21cb62a..49dda93ba63c 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1293,9 +1293,9 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
 	if (exec_len == 0)
 		exec_len = params->batch_obj->base.size;
 
-	ret = params->engine->dispatch_execbuffer(params->request,
-						  exec_start, exec_len,
-						  params->dispatch_flags);
+	ret = params->engine->emit_bb_start(params->request,
+					    exec_start, exec_len,
+					    params->dispatch_flags);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index 41eb9a91bfee..6aedb913f694 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -206,18 +206,18 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
 	if (so.rodata == NULL)
 		return 0;
 
-	ret = req->engine->dispatch_execbuffer(req, so.ggtt_offset,
-					       so.rodata->batch_items * 4,
-					       I915_DISPATCH_SECURE);
+	ret = req->engine->emit_bb_start(req, so.ggtt_offset,
+					 so.rodata->batch_items * 4,
+					 I915_DISPATCH_SECURE);
 	if (ret)
 		goto out;
 
 	if (so.aux_batch_size > 8) {
-		ret = req->engine->dispatch_execbuffer(req,
-						       (so.ggtt_offset +
-							so.aux_batch_offset),
-						       so.aux_batch_size,
-						       I915_DISPATCH_SECURE);
+		ret = req->engine->emit_bb_start(req,
+						 (so.ggtt_offset +
+						  so.aux_batch_offset),
+						 so.aux_batch_size,
+						 I915_DISPATCH_SECURE);
 		if (ret)
 			goto out;
 	}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3fa2bc5297c1..71960e47277c 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -843,7 +843,9 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
 	exec_start = params->batch_obj_vm_offset +
 		     args->batch_start_offset;
 
-	ret = engine->emit_bb_start(params->request, exec_start, params->dispatch_flags);
+	ret = engine->emit_bb_start(params->request,
+				    exec_start, args->batch_len,
+				    params->dispatch_flags);
 	if (ret)
 		return ret;
 
@@ -1495,7 +1497,8 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
 }
 
 static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
-			      u64 offset, unsigned dispatch_flags)
+			      u64 offset, u32 len,
+			      unsigned dispatch_flags)
 {
 	struct intel_ring *ring = req->ring;
 	bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
@@ -1739,12 +1742,14 @@ static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)
 		return 0;
 
 	ret = req->engine->emit_bb_start(req, so.ggtt_offset,
+					 so.rodata->batch_items * 4,
 					 I915_DISPATCH_SECURE);
 	if (ret)
 		goto out;
 
 	ret = req->engine->emit_bb_start(req,
 					 (so.ggtt_offset + so.aux_batch_offset),
+					 so.aux_batch_size,
 					 I915_DISPATCH_SECURE);
 	if (ret)
 		goto out;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 41e1bd9dc61d..943dc08c69df 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1695,9 +1695,9 @@ gen8_ring_disable_irq(struct intel_engine_cs *engine)
 }
 
 static int
-i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
-			 u64 offset, u32 length,
-			 unsigned dispatch_flags)
+i965_emit_bb_start(struct drm_i915_gem_request *req,
+		   u64 offset, u32 length,
+		   unsigned dispatch_flags)
 {
 	struct intel_ring *ring = req->ring;
 	int ret;
@@ -1722,9 +1722,9 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
 #define I830_TLB_ENTRIES (2)
 #define I830_WA_SIZE max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)
 static int
-i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
-			 u64 offset, u32 len,
-			 unsigned dispatch_flags)
+i830_emit_bb_start(struct drm_i915_gem_request *req,
+		   u64 offset, u32 len,
+		   unsigned dispatch_flags)
 {
 	struct intel_ring *ring = req->ring;
 	u32 cs_offset = req->engine->scratch.gtt_offset;
@@ -1784,9 +1784,9 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
 }
 
 static int
-i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
-			 u64 offset, u32 len,
-			 unsigned dispatch_flags)
+i915_emit_bb_start(struct drm_i915_gem_request *req,
+		   u64 offset, u32 len,
+		   unsigned dispatch_flags)
 {
 	struct intel_ring *ring = req->ring;
 	int ret;
@@ -2473,9 +2473,9 @@ static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
 }
 
 static int
-gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
-			      u64 offset, u32 len,
-			      unsigned dispatch_flags)
+gen8_emit_bb_start(struct drm_i915_gem_request *req,
+		   u64 offset, u32 len,
+		   unsigned dispatch_flags)
 {
 	struct intel_ring *ring = req->ring;
 	bool ppgtt = USES_PPGTT(req->i915) &&
@@ -2499,9 +2499,9 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 }
 
 static int
-hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
-			     u64 offset, u32 len,
-			     unsigned dispatch_flags)
+hsw_emit_bb_start(struct drm_i915_gem_request *req,
+		  u64 offset, u32 len,
+		  unsigned dispatch_flags)
 {
 	struct intel_ring *ring = req->ring;
 	int ret;
@@ -2524,9 +2524,9 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
 }
 
 static int
-gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
-			      u64 offset, u32 len,
-			      unsigned dispatch_flags)
+gen6_emit_bb_start(struct drm_i915_gem_request *req,
+		   u64 offset, u32 len,
+		   unsigned dispatch_flags)
 {
 	struct intel_ring *ring = req->ring;
 	int ret;
@@ -2695,17 +2695,17 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	engine->write_tail = ring_write_tail;
 
 	if (IS_HASWELL(dev_priv))
-		engine->dispatch_execbuffer = hsw_ring_dispatch_execbuffer;
+		engine->emit_bb_start = hsw_emit_bb_start;
 	else if (IS_GEN8(dev_priv))
-		engine->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
+		engine->emit_bb_start = gen8_emit_bb_start;
 	else if (INTEL_GEN(dev_priv) >= 6)
-		engine->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
+		engine->emit_bb_start = gen6_emit_bb_start;
 	else if (INTEL_GEN(dev_priv) >= 4)
-		engine->dispatch_execbuffer = i965_dispatch_execbuffer;
+		engine->emit_bb_start = i965_emit_bb_start;
 	else if (IS_I830(dev_priv) || IS_845G(dev_priv))
-		engine->dispatch_execbuffer = i830_dispatch_execbuffer;
+		engine->emit_bb_start = i830_emit_bb_start;
 	else
-		engine->dispatch_execbuffer = i915_dispatch_execbuffer;
+		engine->emit_bb_start = i915_emit_bb_start;
 	engine->init_hw = init_render_ring;
 	engine->cleanup = render_ring_cleanup;
 
@@ -2750,8 +2750,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 				GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;
 			engine->irq_enable = gen8_ring_enable_irq;
 			engine->irq_disable = gen8_ring_disable_irq;
-			engine->dispatch_execbuffer =
-				gen8_ring_dispatch_execbuffer;
+			engine->emit_bb_start = gen8_emit_bb_start;
 			if (i915.semaphores) {
 				engine->semaphore.sync_to = gen8_ring_sync;
 				engine->semaphore.signal = gen8_xcs_signal;
@@ -2761,8 +2760,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 			engine->irq_enable_mask = GT_BSD_USER_INTERRUPT;
 			engine->irq_enable = gen6_ring_enable_irq;
 			engine->irq_disable = gen6_ring_disable_irq;
-			engine->dispatch_execbuffer =
-				gen6_ring_dispatch_execbuffer;
+			engine->emit_bb_start = gen6_emit_bb_start;
 			if (i915.semaphores) {
 				engine->semaphore.sync_to = gen6_ring_sync;
 				engine->semaphore.signal = gen6_signal;
@@ -2792,7 +2790,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 			engine->irq_enable = i9xx_ring_enable_irq;
 			engine->irq_disable = i9xx_ring_disable_irq;
 		}
-		engine->dispatch_execbuffer = i965_dispatch_execbuffer;
+		engine->emit_bb_start = i965_emit_bb_start;
 	}
 	engine->init_hw = init_ring_common;
 
@@ -2821,8 +2819,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
 			GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;
 	engine->irq_enable = gen8_ring_enable_irq;
 	engine->irq_disable = gen8_ring_disable_irq;
-	engine->dispatch_execbuffer =
-			gen8_ring_dispatch_execbuffer;
+	engine->emit_bb_start = gen8_emit_bb_start;
 	if (i915.semaphores) {
 		engine->semaphore.sync_to = gen8_ring_sync;
 		engine->semaphore.signal = gen8_xcs_signal;
@@ -2853,7 +2850,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
 			GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT;
 		engine->irq_enable = gen8_ring_enable_irq;
 		engine->irq_disable = gen8_ring_disable_irq;
-		engine->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
+		engine->emit_bb_start = gen8_emit_bb_start;
 		if (i915.semaphores) {
 			engine->semaphore.sync_to = gen8_ring_sync;
 			engine->semaphore.signal = gen8_xcs_signal;
@@ -2863,7 +2860,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
 		engine->irq_enable_mask = GT_BLT_USER_INTERRUPT;
 		engine->irq_enable = gen6_ring_enable_irq;
 		engine->irq_disable = gen6_ring_disable_irq;
-		engine->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
+		engine->emit_bb_start = gen6_emit_bb_start;
 		if (i915.semaphores) {
 			engine->semaphore.signal = gen6_signal;
 			engine->semaphore.sync_to = gen6_ring_sync;
@@ -2912,7 +2909,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 			GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT;
 		engine->irq_enable = gen8_ring_enable_irq;
 		engine->irq_disable = gen8_ring_disable_irq;
-		engine->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
+		engine->emit_bb_start = gen8_emit_bb_start;
 		if (i915.semaphores) {
 			engine->semaphore.sync_to = gen8_ring_sync;
 			engine->semaphore.signal = gen8_xcs_signal;
@@ -2922,7 +2919,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 		engine->irq_enable_mask = PM_VEBOX_USER_INTERRUPT;
 		engine->irq_enable = hsw_vebox_enable_irq;
 		engine->irq_disable = hsw_vebox_disable_irq;
-		engine->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
+		engine->emit_bb_start = gen6_emit_bb_start;
 		if (i915.semaphores) {
 			engine->semaphore.sync_to = gen6_ring_sync;
 			engine->semaphore.signal = gen6_signal;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 4817a7fa2154..8b8e55a3e62e 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -222,12 +222,6 @@ struct intel_engine_cs {
 	 * monotonic, even if not coherent.
 	 */
 	void		(*irq_seqno_barrier)(struct intel_engine_cs *ring);
-	int		(*dispatch_execbuffer)(struct drm_i915_gem_request *req,
-					       u64 offset, u32 length,
-					       unsigned dispatch_flags);
-#define I915_DISPATCH_SECURE 0x1
-#define I915_DISPATCH_PINNED 0x2
-#define I915_DISPATCH_RS     0x4
 	void		(*cleanup)(struct intel_engine_cs *ring);
 
 	/* GEN8 signal/wait table - never trust comments!
@@ -303,7 +297,11 @@ struct intel_engine_cs {
 				      u32 invalidate_domains,
 				      u32 flush_domains);
 	int		(*emit_bb_start)(struct drm_i915_gem_request *req,
-					 u64 offset, unsigned dispatch_flags);
+					 u64 offset, u32 length,
+					 unsigned dispatch_flags);
+#define I915_DISPATCH_SECURE 0x1
+#define I915_DISPATCH_PINNED 0x2
+#define I915_DISPATCH_RS     0x4
 
 	/**
 	 * List of objects currently involved in rendering from the
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 36/62] drm/i915: Convert engine->write_tail to operate on a request
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (34 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 35/62] drm/i915: Unify legacy/execlists emission of MI_BATCHBUFFER_START Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 37/62] drm/i915: Unify request submission Chris Wilson
                   ` (27 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

If we rewrite the I915_WRITE_TAIL specialisation for the legacy
ringbuffer as a function that submits the request onto the ringbuffer, we
can unify this vfunc with both execlists and GuC in the next patch.
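
A sketch of the vfunc change at the heart of this patch (signatures as
they appear in the diff below):

	/* before: the engine pokes an explicit tail value */
	void		(*write_tail)(struct intel_engine_cs *ring,
				      u32 value);

	/* after: the engine submits a request, whose ->tail was
	 * recorded when its breadcrumb was emitted */
	void		(*submit_request)(struct drm_i915_gem_request *req);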

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_request.c |  5 +--
 drivers/gpu/drm/i915/intel_ringbuffer.c | 63 ++++++++++++++++++---------------
 drivers/gpu/drm/i915/intel_ringbuffer.h |  3 +-
 3 files changed, 36 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 06f724ee23dd..5fef1c291b25 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -461,11 +461,8 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 
 	if (i915.enable_execlists)
 		ret = engine->emit_request(request);
-	else {
+	else
 		ret = engine->add_request(request);
-
-		request->tail = intel_ring_get_tail(ring);
-	}
 	/* Not allowed to fail! */
 	WARN(ret, "emit|add_request failed: %d!\n", ret);
 	/* Sanity check that the reserved size was large enough. */
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 943dc08c69df..db38abddfec1 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -58,13 +58,6 @@ void intel_ring_update_space(struct intel_ring *ring)
 					 ring->tail, ring->size);
 }
 
-static void __intel_engine_submit(struct intel_engine_cs *engine)
-{
-	struct intel_ring *ring = engine->buffer;
-	ring->tail &= ring->size - 1;
-	engine->write_tail(engine, ring->tail);
-}
-
 static int
 gen2_render_ring_flush(struct drm_i915_gem_request *req,
 		       u32	invalidate_domains,
@@ -420,13 +413,6 @@ gen8_render_ring_flush(struct drm_i915_gem_request *req,
 	return gen8_emit_pipe_control(req, flags, scratch_addr);
 }
 
-static void ring_write_tail(struct intel_engine_cs *engine,
-			    u32 value)
-{
-	struct drm_i915_private *dev_priv = engine->i915;
-	I915_WRITE_TAIL(engine, value);
-}
-
 u64 intel_engine_get_active_head(struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *dev_priv = engine->i915;
@@ -535,7 +521,7 @@ static bool stop_ring(struct intel_engine_cs *engine)
 
 	I915_WRITE_CTL(engine, 0);
 	I915_WRITE_HEAD(engine, 0);
-	engine->write_tail(engine, 0);
+	I915_WRITE_TAIL(engine, 0);
 
 	if (!IS_GEN2(dev_priv)) {
 		(void)I915_READ_CTL(engine);
@@ -1380,7 +1366,11 @@ gen6_add_request(struct drm_i915_gem_request *req)
 	intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
 	intel_ring_emit(ring, req->fence.seqno);
 	intel_ring_emit(ring, MI_USER_INTERRUPT);
-	__intel_engine_submit(req->engine);
+	intel_ring_advance(ring);
+
+	req->tail = intel_ring_get_tail(ring);
+
+	req->engine->submit_request(req);
 
 	return 0;
 }
@@ -1410,7 +1400,8 @@ gen8_render_add_request(struct drm_i915_gem_request *req)
 	intel_ring_emit(ring, 0);
 	intel_ring_emit(ring, MI_USER_INTERRUPT);
 	intel_ring_emit(ring, MI_NOOP);
-	__intel_engine_submit(engine);
+
+	req->engine->submit_request(req);
 
 	return 0;
 }
@@ -1632,11 +1623,21 @@ i9xx_add_request(struct drm_i915_gem_request *req)
 	intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
 	intel_ring_emit(ring, req->fence.seqno);
 	intel_ring_emit(ring, MI_USER_INTERRUPT);
-	__intel_engine_submit(req->engine);
+	intel_ring_advance(ring);
+
+	req->tail = intel_ring_get_tail(ring);
+
+	req->engine->submit_request(req);
 
 	return 0;
 }
 
+static void i9xx_submit_request(struct drm_i915_gem_request *request)
+{
+	struct drm_i915_private *dev_priv = request->i915;
+	I915_WRITE_TAIL(request->engine, request->tail);
+}
+
 static void
 gen6_ring_enable_irq(struct intel_engine_cs *engine)
 {
@@ -2395,10 +2396,9 @@ void intel_engine_init_seqno(struct intel_engine_cs *engine, u32 seqno)
 	engine->hangcheck.seqno = seqno;
 }
 
-static void gen6_bsd_ring_write_tail(struct intel_engine_cs *engine,
-				     u32 value)
+static void gen6_bsd_submit_request(struct drm_i915_gem_request *request)
 {
-	struct drm_i915_private *dev_priv = engine->i915;
+	struct drm_i915_private *dev_priv = request->i915;
 
        /* Every tail move must follow the sequence below */
 
@@ -2418,8 +2418,8 @@ static void gen6_bsd_ring_write_tail(struct intel_engine_cs *engine,
 		DRM_ERROR("timed out waiting for the BSD ring to wake up\n");
 
 	/* Now that the ring is fully powered up, update the tail */
-	I915_WRITE_TAIL(engine, value);
-	POSTING_READ(RING_TAIL(engine->mmio_base));
+	I915_WRITE_TAIL(request->engine, request->tail);
+	POSTING_READ(RING_TAIL(request->engine->mmio_base));
 
 	/* Let the ring send IDLE messages to the GT again,
 	 * and so let it sleep to conserve power when idle.
@@ -2609,6 +2609,8 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 	if (HAS_L3_DPF(dev_priv))
 		engine->irq_keep_mask = GT_RENDER_L3_PARITY_ERROR_INTERRUPT;
 
+	engine->submit_request = i9xx_submit_request;
+
 	if (INTEL_GEN(dev_priv) >= 8) {
 		if (i915.semaphores) {
 			obj = i915_gem_object_create(dev, 4096);
@@ -2692,7 +2694,6 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 		}
 		engine->irq_enable_mask = I915_USER_INTERRUPT;
 	}
-	engine->write_tail = ring_write_tail;
 
 	if (IS_HASWELL(dev_priv))
 		engine->emit_bb_start = hsw_emit_bb_start;
@@ -2736,12 +2737,13 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 	engine->exec_id = I915_EXEC_BSD;
 	engine->hw_id = 1;
 
-	engine->write_tail = ring_write_tail;
+	engine->submit_request = i9xx_submit_request;
+
 	if (INTEL_GEN(dev_priv) >= 6) {
 		engine->mmio_base = GEN6_BSD_RING_BASE;
 		/* gen6 bsd needs a special wa for tail updates */
 		if (IS_GEN6(dev_priv))
-			engine->write_tail = gen6_bsd_ring_write_tail;
+			engine->submit_request = gen6_bsd_submit_request;
 		engine->emit_flush = gen6_bsd_ring_flush;
 		engine->add_request = gen6_add_request;
 		engine->irq_seqno_barrier = gen6_seqno_barrier;
@@ -2810,10 +2812,11 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
 	engine->exec_id = I915_EXEC_BSD;
 	engine->hw_id = 4;
 
-	engine->write_tail = ring_write_tail;
 	engine->mmio_base = GEN8_BSD2_RING_BASE;
 	engine->emit_flush = gen6_bsd_ring_flush;
 	engine->add_request = gen6_add_request;
+	engine->submit_request = i9xx_submit_request;
+
 	engine->irq_seqno_barrier = gen6_seqno_barrier;
 	engine->irq_enable_mask =
 			GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;
@@ -2841,9 +2844,10 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
 	engine->hw_id = 2;
 
 	engine->mmio_base = BLT_RING_BASE;
-	engine->write_tail = ring_write_tail;
 	engine->emit_flush = gen6_ring_flush;
 	engine->add_request = gen6_add_request;
+	engine->submit_request = i9xx_submit_request;
+
 	engine->irq_seqno_barrier = gen6_seqno_barrier;
 	if (INTEL_GEN(dev_priv) >= 8) {
 		engine->irq_enable_mask =
@@ -2899,9 +2903,10 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 	engine->hw_id = 3;
 
 	engine->mmio_base = VEBOX_RING_BASE;
-	engine->write_tail = ring_write_tail;
 	engine->emit_flush = gen6_ring_flush;
 	engine->add_request = gen6_add_request;
+	engine->submit_request = i9xx_submit_request;
+
 	engine->irq_seqno_barrier = gen6_seqno_barrier;
 
 	if (INTEL_GEN(dev_priv) >= 8) {
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 8b8e55a3e62e..647cc51e6457 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -212,8 +212,6 @@ struct intel_engine_cs {
 
 	int		(*init_context)(struct drm_i915_gem_request *req);
 
-	void		(*write_tail)(struct intel_engine_cs *ring,
-				      u32 value);
 	int		(*add_request)(struct drm_i915_gem_request *req);
 	/* Some chipsets are not quite as coherent as advertised and need
 	 * an expensive kick to force a true read of the up-to-date seqno.
@@ -302,6 +300,7 @@ struct intel_engine_cs {
 #define I915_DISPATCH_SECURE 0x1
 #define I915_DISPATCH_PINNED 0x2
 #define I915_DISPATCH_RS     0x4
+	void		(*submit_request)(struct drm_i915_gem_request *req);
 
 	/**
 	 * List of objects currently involved in rendering from the
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 37/62] drm/i915: Unify request submission
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (35 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 36/62] drm/i915: Convert engine->write_tail to operate on a request Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 38/62] drm/i915: Stop passing caller's num_dwords to engine->semaphore.signal() Chris Wilson
                   ` (26 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Move request submission out of emit_request and into its own common
vfunc, invoked from i915_add_request().
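
In sketch form (taken from the diff below), __i915_add_request() now
drives both phases: emission writes the breadcrumb into the ring, and
submission hands the request to the backend.

	/* __i915_add_request(), abridged */
	ret = engine->emit_request(request);	/* breadcrumb into the ring */
	WARN(ret, "emit|add_request failed: %d!\n", ret);
	...
	engine->submit_request(request);	/* tail write / GuC / execlists queue */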

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_request.c    |  8 +++---
 drivers/gpu/drm/i915/i915_guc_submission.c |  4 +--
 drivers/gpu/drm/i915/intel_guc.h           |  2 +-
 drivers/gpu/drm/i915/intel_lrc.c           | 13 +++++-----
 drivers/gpu/drm/i915/intel_ringbuffer.c    | 39 ++++++++++++++----------------
 drivers/gpu/drm/i915/intel_ringbuffer.h    | 23 +++++++++---------
 6 files changed, 41 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 5fef1c291b25..a55042ff7994 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -459,12 +459,10 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 	 */
 	request->postfix = intel_ring_get_tail(ring);
 
-	if (i915.enable_execlists)
-		ret = engine->emit_request(request);
-	else
-		ret = engine->add_request(request);
 	/* Not allowed to fail! */
+	ret = engine->emit_request(request);
 	WARN(ret, "emit|add_request failed: %d!\n", ret);
+
 	/* Sanity check that the reserved size was large enough. */
 	ret = intel_ring_get_tail(ring) - request_start;
 	if (ret < 0)
@@ -475,6 +473,8 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 		  reserved_tail, ret);
 
 	i915_gem_mark_busy(request->i915, engine);
+
+	engine->submit_request(request);
 }
 
 static unsigned long local_clock_us(unsigned *cpu)
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 8aa3cf8cac45..cc4792df249d 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -562,7 +562,7 @@ static void guc_add_workqueue_item(struct i915_guc_client *gc,
  * The only error here arises if the doorbell hardware isn't functioning
  * as expected, which really shouln't happen.
  */
-int i915_guc_submit(struct drm_i915_gem_request *rq)
+void i915_guc_submit(struct drm_i915_gem_request *rq)
 {
 	unsigned int engine_id = rq->engine->guc_id;
 	struct intel_guc *guc = &rq->i915->guc;
@@ -579,8 +579,6 @@ int i915_guc_submit(struct drm_i915_gem_request *rq)
 
 	guc->submissions[engine_id] += 1;
 	guc->last_seqno[engine_id] = rq->fence.seqno;
-
-	return b_ret;
 }
 
 /*
diff --git a/drivers/gpu/drm/i915/intel_guc.h b/drivers/gpu/drm/i915/intel_guc.h
index 41601c71f529..7f9063385258 100644
--- a/drivers/gpu/drm/i915/intel_guc.h
+++ b/drivers/gpu/drm/i915/intel_guc.h
@@ -159,7 +159,7 @@ extern int intel_guc_resume(struct drm_device *dev);
 int i915_guc_submission_init(struct drm_device *dev);
 int i915_guc_submission_enable(struct drm_device *dev);
 int i915_guc_wq_check_space(struct drm_i915_gem_request *rq);
-int i915_guc_submit(struct drm_i915_gem_request *rq);
+void i915_guc_submit(struct drm_i915_gem_request *rq);
 void i915_guc_submission_disable(struct drm_device *dev);
 void i915_guc_submission_fini(struct drm_device *dev);
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 71960e47277c..eee9274f7516 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -751,12 +751,6 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	 */
 	request->previous_context = engine->last_context;
 	engine->last_context = request->ctx;
-
-	if (i915.enable_guc_submission)
-		i915_guc_submit(request);
-	else
-		execlists_context_queue(request);
-
 	return 0;
 }
 
@@ -1834,8 +1828,13 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine)
 {
 	/* Default vfuncs which can be overriden by each engine. */
 	engine->init_hw = gen8_init_common_ring;
-	engine->emit_request = gen8_emit_request;
 	engine->emit_flush = gen8_emit_flush;
+	engine->emit_request = gen8_emit_request;
+	if (i915.enable_guc_submission)
+		engine->submit_request = i915_guc_submit;
+	else
+		engine->submit_request = execlists_context_queue;
+
 	engine->irq_enable = gen8_logical_ring_enable_irq;
 	engine->irq_disable = gen8_logical_ring_disable_irq;
 	engine->emit_bb_start = gen8_emit_bb_start;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index db38abddfec1..b7b5c2d94db5 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1341,15 +1341,14 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 }
 
 /**
- * gen6_add_request - Update the semaphore mailbox registers
+ * gen6_emit_request - Update the semaphore mailbox registers
  *
  * @request - request to write to the ring
  *
  * Update the mailbox registers in the *other* rings with the current seqno.
  * This acts like a signal in the canonical semaphore.
  */
-static int
-gen6_add_request(struct drm_i915_gem_request *req)
+static int gen6_emit_request(struct drm_i915_gem_request *req)
 {
 	struct intel_ring *ring = req->ring;
 	int ret;
@@ -1370,13 +1369,10 @@ gen6_add_request(struct drm_i915_gem_request *req)
 
 	req->tail = intel_ring_get_tail(ring);
 
-	req->engine->submit_request(req);
-
 	return 0;
 }
 
-static int
-gen8_render_add_request(struct drm_i915_gem_request *req)
+static int gen8_render_emit_request(struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
 	struct intel_ring *ring = req->ring;
@@ -1400,8 +1396,9 @@ gen8_render_add_request(struct drm_i915_gem_request *req)
 	intel_ring_emit(ring, 0);
 	intel_ring_emit(ring, MI_USER_INTERRUPT);
 	intel_ring_emit(ring, MI_NOOP);
+	intel_ring_advance(ring);
 
-	req->engine->submit_request(req);
+	req->tail = intel_ring_get_tail(ring);
 
 	return 0;
 }
@@ -1609,8 +1606,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
 	return 0;
 }
 
-static int
-i9xx_add_request(struct drm_i915_gem_request *req)
+static int i9xx_emit_request(struct drm_i915_gem_request *req)
 {
 	struct intel_ring *ring = req->ring;
 	int ret;
@@ -1627,8 +1623,6 @@ i9xx_add_request(struct drm_i915_gem_request *req)
 
 	req->tail = intel_ring_get_tail(ring);
 
-	req->engine->submit_request(req);
-
 	return 0;
 }
 
@@ -2630,7 +2624,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 		}
 
 		engine->init_context = intel_rcs_ctx_init;
-		engine->add_request = gen8_render_add_request;
+		engine->emit_request = gen8_render_emit_request;
 		engine->emit_flush = gen8_render_ring_flush;
 		engine->irq_enable = gen8_ring_enable_irq;
 		engine->irq_disable = gen8_ring_disable_irq;
@@ -2643,7 +2637,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 		}
 	} else if (INTEL_GEN(dev_priv) >= 6) {
 		engine->init_context = intel_rcs_ctx_init;
-		engine->add_request = gen6_add_request;
+		engine->emit_request = gen6_emit_request;
 		engine->emit_flush = gen7_render_ring_flush;
 		if (IS_GEN6(dev_priv))
 			engine->emit_flush = gen6_render_ring_flush;
@@ -2673,14 +2667,14 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
 			engine->semaphore.mbox.signal[VCS2] = GEN6_NOSYNC;
 		}
 	} else if (IS_GEN5(dev_priv)) {
-		engine->add_request = i9xx_add_request;
+		engine->emit_request = i9xx_emit_request;
 		engine->emit_flush = gen4_render_ring_flush;
 		engine->irq_enable = gen5_ring_enable_irq;
 		engine->irq_disable = gen5_ring_disable_irq;
 		engine->irq_seqno_barrier = gen5_seqno_barrier;
 		engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
 	} else {
-		engine->add_request = i9xx_add_request;
+		engine->emit_request = i9xx_emit_request;
 		if (INTEL_GEN(dev_priv) < 4)
 			engine->emit_flush = gen2_render_ring_flush;
 		else
@@ -2745,7 +2739,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 		if (IS_GEN6(dev_priv))
 			engine->submit_request = gen6_bsd_submit_request;
 		engine->emit_flush = gen6_bsd_ring_flush;
-		engine->add_request = gen6_add_request;
+		engine->emit_request = gen6_emit_request;
 		engine->irq_seqno_barrier = gen6_seqno_barrier;
 		if (INTEL_GEN(dev_priv) >= 8) {
 			engine->irq_enable_mask =
@@ -2781,7 +2775,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
 	} else {
 		engine->mmio_base = BSD_RING_BASE;
 		engine->emit_flush = bsd_ring_flush;
-		engine->add_request = i9xx_add_request;
+		engine->emit_request = i9xx_emit_request;
 		if (IS_GEN5(dev_priv)) {
 			engine->irq_enable_mask = ILK_BSD_USER_INTERRUPT;
 			engine->irq_enable = gen5_ring_enable_irq;
@@ -2813,8 +2807,9 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
 	engine->hw_id = 4;
 
 	engine->mmio_base = GEN8_BSD2_RING_BASE;
+
 	engine->emit_flush = gen6_bsd_ring_flush;
-	engine->add_request = gen6_add_request;
+	engine->emit_request = gen6_emit_request;
 	engine->submit_request = i9xx_submit_request;
 
 	engine->irq_seqno_barrier = gen6_seqno_barrier;
@@ -2844,8 +2839,9 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
 	engine->hw_id = 2;
 
 	engine->mmio_base = BLT_RING_BASE;
+
 	engine->emit_flush = gen6_ring_flush;
-	engine->add_request = gen6_add_request;
+	engine->emit_request = gen6_emit_request;
 	engine->submit_request = i9xx_submit_request;
 
 	engine->irq_seqno_barrier = gen6_seqno_barrier;
@@ -2903,8 +2899,9 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
 	engine->hw_id = 3;
 
 	engine->mmio_base = VEBOX_RING_BASE;
+
 	engine->emit_flush = gen6_ring_flush;
-	engine->add_request = gen6_add_request;
+	engine->emit_request = gen6_emit_request;
 	engine->submit_request = i9xx_submit_request;
 
 	engine->irq_seqno_barrier = gen6_seqno_barrier;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 647cc51e6457..2eb12d92d112 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -212,7 +212,17 @@ struct intel_engine_cs {
 
 	int		(*init_context)(struct drm_i915_gem_request *req);
 
-	int		(*add_request)(struct drm_i915_gem_request *req);
+	int		(*emit_flush)(struct drm_i915_gem_request *request,
+				      u32 invalidate_domains,
+				      u32 flush_domains);
+	int		(*emit_bb_start)(struct drm_i915_gem_request *req,
+					 u64 offset, u32 length,
+					 unsigned dispatch_flags);
+#define I915_DISPATCH_SECURE 0x1
+#define I915_DISPATCH_PINNED 0x2
+#define I915_DISPATCH_RS     0x4
+	int		(*emit_request)(struct drm_i915_gem_request *req);
+	void		(*submit_request)(struct drm_i915_gem_request *req);
 	/* Some chipsets are not quite as coherent as advertised and need
 	 * an expensive kick to force a true read of the up-to-date seqno.
 	 * However, the up-to-date seqno is not always required and the last
@@ -290,17 +300,6 @@ struct intel_engine_cs {
 	unsigned int idle_lite_restore_wa;
 	bool disable_lite_restore_wa;
 	u32 ctx_desc_template;
-	int		(*emit_request)(struct drm_i915_gem_request *request);
-	int		(*emit_flush)(struct drm_i915_gem_request *request,
-				      u32 invalidate_domains,
-				      u32 flush_domains);
-	int		(*emit_bb_start)(struct drm_i915_gem_request *req,
-					 u64 offset, u32 length,
-					 unsigned dispatch_flags);
-#define I915_DISPATCH_SECURE 0x1
-#define I915_DISPATCH_PINNED 0x2
-#define I915_DISPATCH_RS     0x4
-	void		(*submit_request)(struct drm_i915_gem_request *req);
 
 	/**
 	 * List of objects currently involved in rendering from the
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 38/62] drm/i915: Stop passing caller's num_dwords to engine->semaphore.signal()
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (36 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 37/62] drm/i915: Unify request submission Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 39/62] drm/i915: Reuse legacy breadcrumbs + tail emission Chris Wilson
                   ` (25 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Rather than pass in the num_dwords that the caller wishes to use after
the signal command packet, split the breadcrumb emission into two phases
and have the signal and the breadcrumb each acquire space on the ring
individually. This makes the interface simpler for the reader, and will
simplify subsequent patches.
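
In sketch form (from the diff below), each phase now begins its own ring
allocation instead of sharing a single intel_ring_begin():

	if (req->engine->semaphore.signal) {
		/* acquires its own (num_rings-1) * N dwords */
		ret = req->engine->semaphore.signal(req);
		if (ret)
			return ret;
	}

	ret = intel_ring_begin(req, 4);	/* breadcrumb dwords only */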

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_ringbuffer.c | 51 ++++++++++++++-------------------
 drivers/gpu/drm/i915/intel_ringbuffer.h |  4 +--
 2 files changed, 23 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index b7b5c2d94db5..b4edbdeac27e 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1226,10 +1226,8 @@ static void render_ring_cleanup(struct intel_engine_cs *engine)
 	intel_fini_pipe_control(engine);
 }
 
-static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
-			   unsigned int num_dwords)
+static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req)
 {
-#define MBOX_UPDATE_DWORDS 8
 	struct intel_ring *signaller = signaller_req->ring;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *waiter;
@@ -1237,10 +1235,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
 	int ret, num_rings;
 
 	num_rings = hweight32(INTEL_INFO(dev_priv)->ring_mask);
-	num_dwords += (num_rings-1) * MBOX_UPDATE_DWORDS;
-#undef MBOX_UPDATE_DWORDS
-
-	ret = intel_ring_begin(signaller_req, num_dwords);
+	ret = intel_ring_begin(signaller_req, (num_rings-1) * 8);
 	if (ret)
 		return ret;
 
@@ -1262,14 +1257,13 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
 					   MI_SEMAPHORE_TARGET(waiter->hw_id));
 		intel_ring_emit(signaller, 0);
 	}
+	intel_ring_advance(signaller);
 
 	return 0;
 }
 
-static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
-			   unsigned int num_dwords)
+static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req)
 {
-#define MBOX_UPDATE_DWORDS 6
 	struct intel_ring *signaller = signaller_req->ring;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
 	struct intel_engine_cs *waiter;
@@ -1277,10 +1271,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 	int ret, num_rings;
 
 	num_rings = hweight32(INTEL_INFO(dev_priv)->ring_mask);
-	num_dwords += (num_rings-1) * MBOX_UPDATE_DWORDS;
-#undef MBOX_UPDATE_DWORDS
-
-	ret = intel_ring_begin(signaller_req, num_dwords);
+	ret = intel_ring_begin(signaller_req, (num_rings-1) * 6);
 	if (ret)
 		return ret;
 
@@ -1300,12 +1291,12 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
 					   MI_SEMAPHORE_TARGET(waiter->hw_id));
 		intel_ring_emit(signaller, 0);
 	}
+	intel_ring_advance(signaller);
 
 	return 0;
 }
 
-static int gen6_signal(struct drm_i915_gem_request *signaller_req,
-		       unsigned int num_dwords)
+static int gen6_signal(struct drm_i915_gem_request *signaller_req)
 {
 	struct intel_ring *signaller = signaller_req->ring;
 	struct drm_i915_private *dev_priv = signaller_req->i915;
@@ -1313,12 +1304,8 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 	enum intel_engine_id id;
 	int ret, num_rings;
 
-#define MBOX_UPDATE_DWORDS 3
 	num_rings = hweight32(INTEL_INFO(dev_priv)->ring_mask);
-	num_dwords += round_up((num_rings-1) * MBOX_UPDATE_DWORDS, 2);
-#undef MBOX_UPDATE_DWORDS
-
-	ret = intel_ring_begin(signaller_req, num_dwords);
+	ret = intel_ring_begin(signaller_req, round_up((num_rings-1) * 3, 2));
 	if (ret)
 		return ret;
 
@@ -1336,6 +1323,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
 	/* If num_dwords was rounded, make sure the tail pointer is correct */
 	if (num_rings % 2 == 0)
 		intel_ring_emit(signaller, MI_NOOP);
+	intel_ring_advance(signaller);
 
 	return 0;
 }
@@ -1353,11 +1341,13 @@ static int gen6_emit_request(struct drm_i915_gem_request *req)
 	struct intel_ring *ring = req->ring;
 	int ret;
 
-	if (req->engine->semaphore.signal)
-		ret = req->engine->semaphore.signal(req, 4);
-	else
-		ret = intel_ring_begin(req, 4);
+	if (req->engine->semaphore.signal) {
+		ret = req->engine->semaphore.signal(req);
+		if (ret)
+			return ret;
+	}
 
+	ret = intel_ring_begin(req, 4);
 	if (ret)
 		return ret;
 
@@ -1378,10 +1368,13 @@ static int gen8_render_emit_request(struct drm_i915_gem_request *req)
 	struct intel_ring *ring = req->ring;
 	int ret;
 
-	if (engine->semaphore.signal)
-		ret = engine->semaphore.signal(req, 8);
-	else
-		ret = intel_ring_begin(req, 8);
+	if (engine->semaphore.signal) {
+		ret = engine->semaphore.signal(req);
+		if (ret)
+			return ret;
+	}
+
+	ret = intel_ring_begin(req, 8);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 2eb12d92d112..e9fb508fae86 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -286,9 +286,7 @@ struct intel_engine_cs {
 		int	(*sync_to)(struct drm_i915_gem_request *to_req,
 				   struct intel_engine_cs *from,
 				   u32 seqno);
-		int	(*signal)(struct drm_i915_gem_request *signaller_req,
-				  /* num_dwords needed by caller */
-				  unsigned int num_dwords);
+		int	(*signal)(struct drm_i915_gem_request *signaller_req);
 	} semaphore;
 
 	/* Execlists */
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 39/62] drm/i915: Reuse legacy breadcrumbs + tail emission
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (37 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 38/62] drm/i915: Stop passing caller's num_dwords to engine->semaphore.signal() Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 40/62] drm/i915: Remove duplicate golden render state init from execlists Chris Wilson
                   ` (24 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

As GEN6+ is now a simple variant on the basic breadcrumbs + tail write,
reuse the common code.
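
The reuse, in sketch form (names from the diff below): gen6+ becomes the
semaphore signal followed by the common i9xx breadcrumb + tail write.

	static int gen6_emit_request(struct drm_i915_gem_request *req)
	{
		if (req->engine->semaphore.signal) {
			int ret = req->engine->semaphore.signal(req);
			if (ret)
				return ret;
		}
		return i9xx_emit_request(req);	/* common breadcrumb + tail */
	}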

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_ringbuffer.c | 68 +++++++++++++--------------------
 1 file changed, 27 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index b4edbdeac27e..97836e6c61f5 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1328,25 +1328,18 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req)
 	return 0;
 }
 
-/**
- * gen6_emit_request - Update the semaphore mailbox registers
- *
- * @request - request to write to the ring
- *
- * Update the mailbox registers in the *other* rings with the current seqno.
- * This acts like a signal in the canonical semaphore.
- */
-static int gen6_emit_request(struct drm_i915_gem_request *req)
+static void i9xx_submit_request(struct drm_i915_gem_request *request)
+{
+	struct drm_i915_private *dev_priv = request->i915;
+	I915_WRITE_TAIL(request->engine, request->tail);
+}
+
+
+static int i9xx_emit_request(struct drm_i915_gem_request *req)
 {
 	struct intel_ring *ring = req->ring;
 	int ret;
 
-	if (req->engine->semaphore.signal) {
-		ret = req->engine->semaphore.signal(req);
-		if (ret)
-			return ret;
-	}
-
 	ret = intel_ring_begin(req, 4);
 	if (ret)
 		return ret;
@@ -1362,6 +1355,25 @@ static int gen6_emit_request(struct drm_i915_gem_request *req)
 	return 0;
 }
 
+/**
+ * gen6_emit_request - Update the semaphore mailbox registers
+ *
+ * @request - request to write to the ring
+ *
+ * Update the mailbox registers in the *other* rings with the current seqno.
+ * This acts like a signal in the canonical semaphore.
+ */
+static int gen6_emit_request(struct drm_i915_gem_request *req)
+{
+	if (req->engine->semaphore.signal) {
+		int ret = req->engine->semaphore.signal(req);
+		if (ret)
+			return ret;
+	}
+
+	return i9xx_emit_request(req);
+}
+
 static int gen8_render_emit_request(struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
@@ -1599,32 +1611,6 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
 	return 0;
 }
 
-static int i9xx_emit_request(struct drm_i915_gem_request *req)
-{
-	struct intel_ring *ring = req->ring;
-	int ret;
-
-	ret = intel_ring_begin(req, 4);
-	if (ret)
-		return ret;
-
-	intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
-	intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
-	intel_ring_emit(ring, req->fence.seqno);
-	intel_ring_emit(ring, MI_USER_INTERRUPT);
-	intel_ring_advance(ring);
-
-	req->tail = intel_ring_get_tail(ring);
-
-	return 0;
-}
-
-static void i9xx_submit_request(struct drm_i915_gem_request *request)
-{
-	struct drm_i915_private *dev_priv = request->i915;
-	I915_WRITE_TAIL(request->engine, request->tail);
-}
-
 static void
 gen6_ring_enable_irq(struct intel_engine_cs *engine)
 {
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 40/62] drm/i915: Remove duplicate golden render state init from execlists
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (38 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 39/62] drm/i915: Reuse legacy breadcrumbs + tail emission Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 41/62] drm/i915: Unify legacy/execlists submit_execbuf callbacks Chris Wilson
                   ` (23 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Now that we use the same vfuncs for emitting the batch buffer in both
execlists and legacy mode, the golden render state initialisation is
identical for both paths.
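
After this patch both backends share a single entry point (sketch, from
the diff below); the execlists render init simply calls the common
routine:

	static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
	{
		...
		/* was intel_lr_context_render_state_init() */
		return i915_gem_render_state_init(req);
	}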

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_render_state.c | 23 +++++++++++++------
 drivers/gpu/drm/i915/i915_gem_render_state.h | 18 ---------------
 drivers/gpu/drm/i915/intel_lrc.c             | 34 +---------------------------
 drivers/gpu/drm/i915/intel_renderstate.h     | 16 +++++++++----
 4 files changed, 28 insertions(+), 63 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index 6aedb913f694..8587dbc302e0 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -28,6 +28,15 @@
 #include "i915_drv.h"
 #include "intel_renderstate.h"
 
+struct render_state {
+	const struct intel_renderstate_rodata *rodata;
+	struct drm_i915_gem_object *obj;
+	u64 ggtt_offset;
+	int gen;
+	u32 aux_batch_size;
+	u32 aux_batch_offset;
+};
+
 static const struct intel_renderstate_rodata *
 render_state_get_rodata(const int gen)
 {
@@ -51,6 +60,7 @@ static int render_state_init(struct render_state *so,
 	int ret;
 
 	so->gen = INTEL_GEN(dev_priv);
+	so->ggtt_offset = 0;
 	so->rodata = render_state_get_rodata(so->gen);
 	if (so->rodata == NULL)
 		return 0;
@@ -164,14 +174,14 @@ err_out:
 
 #undef OUT_BATCH
 
-void i915_gem_render_state_fini(struct render_state *so)
+static void render_state_fini(struct render_state *so)
 {
 	i915_gem_object_ggtt_unpin(so->obj);
 	i915_gem_object_put(so->obj);
 }
 
-int i915_gem_render_state_prepare(struct intel_engine_cs *engine,
-				  struct render_state *so)
+static int render_state_prepare(struct intel_engine_cs *engine,
+				struct render_state *so)
 {
 	int ret;
 
@@ -187,7 +197,7 @@ int i915_gem_render_state_prepare(struct intel_engine_cs *engine,
 
 	ret = render_state_setup(so);
 	if (ret) {
-		i915_gem_render_state_fini(so);
+		render_state_fini(so);
 		return ret;
 	}
 
@@ -199,7 +209,7 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
 	struct render_state so;
 	int ret;
 
-	ret = i915_gem_render_state_prepare(req->engine, &so);
+	ret = render_state_prepare(req->engine, &so);
 	if (ret)
 		return ret;
 
@@ -223,8 +233,7 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
 	}
 
 	i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req);
-
 out:
-	i915_gem_render_state_fini(&so);
+	render_state_fini(&so);
 	return ret;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.h b/drivers/gpu/drm/i915/i915_gem_render_state.h
index 6aaa3a10a630..c44fca8599bb 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.h
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.h
@@ -26,24 +26,6 @@
 
 #include <linux/types.h>
 
-struct intel_renderstate_rodata {
-	const u32 *reloc;
-	const u32 *batch;
-	const u32 batch_items;
-};
-
-struct render_state {
-	const struct intel_renderstate_rodata *rodata;
-	struct drm_i915_gem_object *obj;
-	u64 ggtt_offset;
-	int gen;
-	u32 aux_batch_size;
-	u32 aux_batch_offset;
-};
-
 int i915_gem_render_state_init(struct drm_i915_gem_request *req);
-void i915_gem_render_state_fini(struct render_state *so);
-int i915_gem_render_state_prepare(struct intel_engine_cs *engine,
-				  struct render_state *so);
 
 #endif /* _I915_GEM_RENDER_STATE_H_ */
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index eee9274f7516..3f7f7d72487e 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1723,38 +1723,6 @@ static int gen8_emit_request_render(struct drm_i915_gem_request *request)
 	return intel_logical_ring_advance_and_submit(request);
 }
 
-static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)
-{
-	struct render_state so;
-	int ret;
-
-	ret = i915_gem_render_state_prepare(req->engine, &so);
-	if (ret)
-		return ret;
-
-	if (so.rodata == NULL)
-		return 0;
-
-	ret = req->engine->emit_bb_start(req, so.ggtt_offset,
-					 so.rodata->batch_items * 4,
-					 I915_DISPATCH_SECURE);
-	if (ret)
-		goto out;
-
-	ret = req->engine->emit_bb_start(req,
-					 (so.ggtt_offset + so.aux_batch_offset),
-					 so.aux_batch_size,
-					 I915_DISPATCH_SECURE);
-	if (ret)
-		goto out;
-
-	i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req);
-
-out:
-	i915_gem_render_state_fini(&so);
-	return ret;
-}
-
 static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
 {
 	int ret;
@@ -1771,7 +1739,7 @@ static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
 	if (ret)
 		DRM_ERROR("MOCS failed to program: expect performance issues.\n");
 
-	return intel_lr_context_render_state_init(req);
+	return i915_gem_render_state_init(req);
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/intel_renderstate.h b/drivers/gpu/drm/i915/intel_renderstate.h
index 5bd69852752c..08f6fea05a2c 100644
--- a/drivers/gpu/drm/i915/intel_renderstate.h
+++ b/drivers/gpu/drm/i915/intel_renderstate.h
@@ -24,12 +24,13 @@
 #ifndef _INTEL_RENDERSTATE_H
 #define _INTEL_RENDERSTATE_H
 
-#include "i915_drv.h"
+#include <linux/types.h>
 
-extern const struct intel_renderstate_rodata gen6_null_state;
-extern const struct intel_renderstate_rodata gen7_null_state;
-extern const struct intel_renderstate_rodata gen8_null_state;
-extern const struct intel_renderstate_rodata gen9_null_state;
+struct intel_renderstate_rodata {
+	const u32 *reloc;
+	const u32 *batch;
+	const u32 batch_items;
+};
 
 #define RO_RENDERSTATE(_g)						\
 	const struct intel_renderstate_rodata gen ## _g ## _null_state = { \
@@ -38,4 +39,9 @@ extern const struct intel_renderstate_rodata gen9_null_state;
 		.batch_items = sizeof(gen ## _g ## _null_state_batch)/4, \
 	}
 
+extern const struct intel_renderstate_rodata gen6_null_state;
+extern const struct intel_renderstate_rodata gen7_null_state;
+extern const struct intel_renderstate_rodata gen8_null_state;
+extern const struct intel_renderstate_rodata gen9_null_state;
+
 #endif /* INTEL_RENDERSTATE_H */
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 41/62] drm/i915: Unify legacy/execlists submit_execbuf callbacks
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (39 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 40/62] drm/i915: Remove duplicate golden render state init from execlists Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 42/62] drm/i915: Simplify calling engine->sync_to Chris Wilson
                   ` (22 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Now that emitting requests is identical between legacy and execlists, we
can use the same function to build up the ring for submission via either
path. (With the exception of i915_switch_context(), but in time that too
will be handled gracefully.)
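
In sketch form (from the diff below), the vfunc indirection disappears in
favour of a single static helper inside the execbuffer ioctl:

	/* i915_gem_do_execbuffer(), abridged */
	ret = execbuf_submit(params, args, &eb->vmas);
	/* was: ret = dev_priv->gt.execbuf_submit(params, args, &eb->vmas); */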

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h            |  20 -----
 drivers/gpu/drm/i915/i915_gem.c            |   2 -
 drivers/gpu/drm/i915/i915_gem_context.c    |   3 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  24 ++++--
 drivers/gpu/drm/i915/intel_lrc.c           | 129 -----------------------------
 drivers/gpu/drm/i915/intel_lrc.h           |   4 -
 6 files changed, 20 insertions(+), 162 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b1e00b42a830..f95378f33f6c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1686,18 +1686,6 @@ struct i915_virtual_gpu {
 	bool active;
 };
 
-struct i915_execbuffer_params {
-	struct drm_device               *dev;
-	struct drm_file                 *file;
-	uint32_t                        dispatch_flags;
-	uint32_t                        args_batch_start_offset;
-	uint64_t                        batch_obj_vm_offset;
-	struct intel_engine_cs *engine;
-	struct drm_i915_gem_object      *batch_obj;
-	struct i915_gem_context            *ctx;
-	struct drm_i915_gem_request     *request;
-};
-
 /* used in computing the new watermarks state */
 struct intel_wm_config {
 	unsigned int num_pipes_active;
@@ -1996,9 +1984,6 @@ struct drm_i915_private {
 
 	/* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
 	struct {
-		int (*execbuf_submit)(struct i915_execbuffer_params *params,
-				      struct drm_i915_gem_execbuffer2 *args,
-				      struct list_head *vmas);
 		int (*init_engines)(struct drm_device *dev);
 		void (*cleanup_engine)(struct intel_engine_cs *engine);
 		void (*stop_engine)(struct intel_engine_cs *engine);
@@ -2906,11 +2891,6 @@ int i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 			      struct drm_file *file_priv);
 int i915_gem_sw_finish_ioctl(struct drm_device *dev, void *data,
 			     struct drm_file *file_priv);
-void i915_gem_execbuffer_move_to_active(struct list_head *vmas,
-					struct drm_i915_gem_request *req);
-int i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
-				   struct drm_i915_gem_execbuffer2 *args,
-				   struct list_head *vmas);
 int i915_gem_execbuffer(struct drm_device *dev, void *data,
 			struct drm_file *file_priv);
 int i915_gem_execbuffer2(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index de1e866276c5..6c4c2c711dc7 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4249,12 +4249,10 @@ int i915_gem_init(struct drm_device *dev)
 	mutex_lock(&dev->struct_mutex);
 
 	if (!i915.enable_execlists) {
-		dev_priv->gt.execbuf_submit = i915_gem_ringbuffer_submission;
 		dev_priv->gt.init_engines = i915_gem_init_engines;
 		dev_priv->gt.cleanup_engine = intel_engine_cleanup;
 		dev_priv->gt.stop_engine = intel_engine_stop;
 	} else {
-		dev_priv->gt.execbuf_submit = intel_execlists_submission;
 		dev_priv->gt.init_engines = intel_logical_rings_init;
 		dev_priv->gt.cleanup_engine = intel_logical_ring_cleanup;
 		dev_priv->gt.stop_engine = intel_logical_ring_stop;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 9eb6ab9cb610..8641783618dc 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -853,8 +853,9 @@ int i915_switch_context(struct drm_i915_gem_request *req)
 {
 	struct intel_engine_cs *engine = req->engine;
 
-	WARN_ON(i915.enable_execlists);
 	lockdep_assert_held(&req->i915->dev->struct_mutex);
+	if (i915.enable_execlists)
+		return 0;
 
 	if (!req->ctx->engine[engine->id].state) {
 		struct i915_gem_context *to = req->ctx;
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 49dda93ba63c..c2d703323fc2 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -41,6 +41,18 @@
 
 #define BATCH_OFFSET_BIAS (256*1024)
 
+struct i915_execbuffer_params {
+	struct drm_device               *dev;
+	struct drm_file                 *file;
+	uint32_t                        dispatch_flags;
+	uint32_t                        args_batch_start_offset;
+	uint64_t                        batch_obj_vm_offset;
+	struct intel_engine_cs          *engine;
+	struct drm_i915_gem_object      *batch_obj;
+	struct i915_gem_context         *ctx;
+	struct drm_i915_gem_request     *request;
+};
+
 struct eb_vmas {
 	struct list_head vmas;
 	int and;
@@ -1084,7 +1096,7 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
 	return ctx;
 }
 
-void
+static void
 i915_gem_execbuffer_move_to_active(struct list_head *vmas,
 				   struct drm_i915_gem_request *req)
 {
@@ -1211,10 +1223,10 @@ err:
 		return ERR_PTR(ret);
 }
 
-int
-i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
-			       struct drm_i915_gem_execbuffer2 *args,
-			       struct list_head *vmas)
+static int
+execbuf_submit(struct i915_execbuffer_params *params,
+	       struct drm_i915_gem_execbuffer2 *args,
+	       struct list_head *vmas)
 {
 	struct drm_i915_private *dev_priv = params->request->i915;
 	u64 exec_start, exec_len;
@@ -1623,7 +1635,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	params->batch_obj               = batch_obj;
 	params->ctx                     = ctx;
 
-	ret = dev_priv->gt.execbuf_submit(params, args, &eb->vmas);
+	ret = execbuf_submit(params, args, &eb->vmas);
 err_request:
 	i915_gem_execbuffer_retire_commands(params);
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3f7f7d72487e..2fffba8c3acf 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -620,39 +620,6 @@ static void execlists_context_queue(struct drm_i915_gem_request *request)
 	spin_unlock_bh(&engine->execlist_lock);
 }
 
-static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
-				 struct list_head *vmas)
-{
-	const unsigned other_rings = ~intel_engine_flag(req->engine);
-	struct i915_vma *vma;
-	uint32_t flush_domains = 0;
-	bool flush_chipset = false;
-	int ret;
-
-	list_for_each_entry(vma, vmas, exec_list) {
-		struct drm_i915_gem_object *obj = vma->obj;
-
-		if (obj->active & other_rings) {
-			ret = i915_gem_object_sync(obj, req);
-			if (ret)
-				return ret;
-		}
-
-		if (obj->base.write_domain & I915_GEM_DOMAIN_CPU)
-			flush_chipset |= i915_gem_clflush_object(obj, false);
-
-		flush_domains |= obj->base.write_domain;
-	}
-
-	if (flush_domains & I915_GEM_DOMAIN_GTT)
-		wmb();
-
-	/* Unconditionally invalidate gpu caches and ensure that we do flush
-	 * any residual writes from the previous batch.
-	 */
-	return req->engine->emit_flush(req, I915_GEM_GPU_DOMAINS, 0);
-}
-
 int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 {
 	struct intel_engine_cs *engine = request->engine;
@@ -754,102 +721,6 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	return 0;
 }
 
-/**
- * execlists_submission() - submit a batchbuffer for execution, Execlists style
- * @dev: DRM device.
- * @file: DRM file.
- * @ring: Engine Command Streamer to submit to.
- * @ctx: Context to employ for this submission.
- * @args: execbuffer call arguments.
- * @vmas: list of vmas.
- * @batch_obj: the batchbuffer to submit.
- * @exec_start: batchbuffer start virtual address pointer.
- * @dispatch_flags: translated execbuffer call flags.
- *
- * This is the evil twin version of i915_gem_ringbuffer_submission. It abstracts
- * away the submission details of the execbuffer ioctl call.
- *
- * Return: non-zero if the submission fails.
- */
-int intel_execlists_submission(struct i915_execbuffer_params *params,
-			       struct drm_i915_gem_execbuffer2 *args,
-			       struct list_head *vmas)
-{
-	struct drm_device       *dev = params->dev;
-	struct intel_engine_cs *engine = params->engine;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_ring *ring = params->request->ring;
-	u64 exec_start;
-	int instp_mode;
-	u32 instp_mask;
-	int ret;
-
-	instp_mode = args->flags & I915_EXEC_CONSTANTS_MASK;
-	instp_mask = I915_EXEC_CONSTANTS_MASK;
-	switch (instp_mode) {
-	case I915_EXEC_CONSTANTS_REL_GENERAL:
-	case I915_EXEC_CONSTANTS_ABSOLUTE:
-	case I915_EXEC_CONSTANTS_REL_SURFACE:
-		if (instp_mode != 0 && engine->id != RCS) {
-			DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
-			return -EINVAL;
-		}
-
-		if (instp_mode != dev_priv->relative_constants_mode) {
-			if (instp_mode == I915_EXEC_CONSTANTS_REL_SURFACE) {
-				DRM_DEBUG("rel surface constants mode invalid on gen5+\n");
-				return -EINVAL;
-			}
-
-			/* The HW changed the meaning on this bit on gen6 */
-			instp_mask &= ~I915_EXEC_CONSTANTS_REL_SURFACE;
-		}
-		break;
-	default:
-		DRM_DEBUG("execbuf with unknown constants: %d\n", instp_mode);
-		return -EINVAL;
-	}
-
-	if (args->flags & I915_EXEC_GEN7_SOL_RESET) {
-		DRM_DEBUG("sol reset is gen7 only\n");
-		return -EINVAL;
-	}
-
-	ret = execlists_move_to_gpu(params->request, vmas);
-	if (ret)
-		return ret;
-
-	if (engine->id == RCS &&
-	    instp_mode != dev_priv->relative_constants_mode) {
-		ret = intel_ring_begin(params->request, 4);
-		if (ret)
-			return ret;
-
-		intel_ring_emit(ring, MI_NOOP);
-		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
-		intel_ring_emit_reg(ring, INSTPM);
-		intel_ring_emit(ring, instp_mask << 16 | instp_mode);
-		intel_ring_advance(ring);
-
-		dev_priv->relative_constants_mode = instp_mode;
-	}
-
-	exec_start = params->batch_obj_vm_offset +
-		     args->batch_start_offset;
-
-	ret = engine->emit_bb_start(params->request,
-				    exec_start, args->batch_len,
-				    params->dispatch_flags);
-	if (ret)
-		return ret;
-
-	trace_i915_gem_ring_dispatch(params->request, params->dispatch_flags);
-
-	i915_gem_execbuffer_move_to_active(vmas, params->request);
-
-	return 0;
-}
-
 void intel_execlists_cancel_requests(struct intel_engine_cs *engine)
 {
 	struct drm_i915_gem_request *req, *tmp;
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 87db0b6c2e76..aff44b947e3e 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -87,10 +87,6 @@ uint64_t intel_lr_context_descriptor(struct i915_gem_context *ctx,
 /* Execlists */
 int intel_sanitize_enable_execlists(struct drm_i915_private *dev_priv,
 				    int enable_execlists);
-struct i915_execbuffer_params;
-int intel_execlists_submission(struct i915_execbuffer_params *params,
-			       struct drm_i915_gem_execbuffer2 *args,
-			       struct list_head *vmas);
 
 void intel_execlists_cancel_requests(struct intel_engine_cs *engine);
 
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 42/62] drm/i915: Simplify calling engine->sync_to
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (40 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 41/62] drm/i915: Unify legacy/execlists submit_execbuf callbacks Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 43/62] drm/i915: Introduce i915_gem_active for request tracking Chris Wilson
                   ` (21 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Since requests can no longer be generated as a side-effect of
intel_ring_begin(), we know that the seqno will be unchanged during
ring-emission. This predictability then means we do not have to check
for the seqno wrapping around whilst emitting the semaphore for
engine->sync_to().
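
The interface change, in sketch form (signatures per the diff below):
sync_to now takes the signalling request, from which a stable seqno can
be read directly.

	/* before */
	int	(*sync_to)(struct drm_i915_gem_request *to_req,
			   struct intel_engine_cs *from,
			   u32 seqno);

	/* after */
	int	(*sync_to)(struct drm_i915_gem_request *wait,
			   struct drm_i915_gem_request *signal);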

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h         |  2 +-
 drivers/gpu/drm/i915/i915_gem.c         | 13 ++----
 drivers/gpu/drm/i915/i915_gem_request.c |  9 +---
 drivers/gpu/drm/i915/intel_ringbuffer.c | 77 +++++++++++++--------------------
 drivers/gpu/drm/i915/intel_ringbuffer.h |  5 +--
 5 files changed, 37 insertions(+), 69 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f95378f33f6c..e9b48808deef 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1737,7 +1737,7 @@ struct drm_i915_private {
 	struct i915_gem_context *kernel_context;
 	struct intel_engine_cs engine[I915_NUM_ENGINES];
 	struct drm_i915_gem_object *semaphore_obj;
-	uint32_t last_seqno, next_seqno;
+	uint32_t next_seqno;
 
 	struct drm_dma_handle *status_page_dmah;
 	struct resource mch_res;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6c4c2c711dc7..b75185273b0e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2574,22 +2574,15 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
 		i915_gem_object_retire_request(obj, from);
 	} else {
 		int idx = intel_engine_sync_index(from->engine, to->engine);
-		u32 seqno = i915_gem_request_get_seqno(from);
-
-		if (seqno <= from->engine->semaphore.sync_seqno[idx])
+		if (from->fence.seqno <= from->engine->semaphore.sync_seqno[idx])
 			return 0;
 
 		trace_i915_gem_ring_sync_to(to, from);
-		ret = to->engine->semaphore.sync_to(to, from->engine, seqno);
+		ret = to->engine->semaphore.sync_to(to, from);
 		if (ret)
 			return ret;
 
-		/* We use last_read_req because sync_to()
-		 * might have just caused seqno wrap under
-		 * the radar.
-		 */
-		from->engine->semaphore.sync_seqno[idx] =
-			i915_gem_request_get_seqno(obj->last_read_req[from->engine->id]);
+		from->engine->semaphore.sync_seqno[idx] = from->fence.seqno;
 	}
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index a55042ff7994..1e9515cfb506 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -175,14 +175,7 @@ int i915_gem_set_seqno(struct drm_device *dev, u32 seqno)
 	if (ret)
 		return ret;
 
-	/* Carefully set the last_seqno value so that wrap
-	 * detection still works
-	 */
 	dev_priv->next_seqno = seqno;
-	dev_priv->last_seqno = seqno - 1;
-	if (dev_priv->last_seqno == 0)
-		dev_priv->last_seqno--;
-
 	return 0;
 }
 
@@ -197,7 +190,7 @@ static int i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
 		dev_priv->next_seqno = 1;
 	}
 
-	*seqno = dev_priv->last_seqno = dev_priv->next_seqno++;
+	*seqno = dev_priv->next_seqno++;
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 97836e6c61f5..8d6249701137 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1408,12 +1408,6 @@ static int gen8_render_emit_request(struct drm_i915_gem_request *req)
 	return 0;
 }
 
-static inline bool i915_gem_has_seqno_wrapped(struct drm_i915_private *dev_priv,
-					      u32 seqno)
-{
-	return dev_priv->last_seqno < seqno;
-}
-
 /**
  * intel_ring_sync - sync the waiter to the signaller on seqno
  *
@@ -1423,29 +1417,29 @@ static inline bool i915_gem_has_seqno_wrapped(struct drm_i915_private *dev_priv,
  */
 
 static int
-gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
-	       struct intel_engine_cs *signaller,
-	       u32 seqno)
+gen8_ring_sync(struct drm_i915_gem_request *wait,
+	       struct drm_i915_gem_request *signal)
 {
-	struct intel_ring *waiter = waiter_req->ring;
-	struct drm_i915_private *dev_priv = waiter_req->i915;
+	struct intel_ring *waiter = wait->ring;
+	struct drm_i915_private *dev_priv = wait->i915;
 	struct i915_hw_ppgtt *ppgtt;
 	int ret;
 
-	ret = intel_ring_begin(waiter_req, 4);
+	ret = intel_ring_begin(wait, 4);
 	if (ret)
 		return ret;
 
-	intel_ring_emit(waiter, MI_SEMAPHORE_WAIT |
-				MI_SEMAPHORE_GLOBAL_GTT |
-				MI_SEMAPHORE_SAD_GTE_SDD);
-	intel_ring_emit(waiter, seqno);
 	intel_ring_emit(waiter,
-			lower_32_bits(GEN8_WAIT_OFFSET(waiter_req->engine,
-						       signaller->id)));
+			MI_SEMAPHORE_WAIT |
+			MI_SEMAPHORE_GLOBAL_GTT |
+			MI_SEMAPHORE_SAD_GTE_SDD);
+	intel_ring_emit(waiter, signal->fence.seqno);
+	intel_ring_emit(waiter,
+			lower_32_bits(GEN8_WAIT_OFFSET(wait->engine,
+						       signal->engine->id)));
 	intel_ring_emit(waiter,
-			upper_32_bits(GEN8_WAIT_OFFSET(waiter_req->engine,
-						       signaller->id)));
+			upper_32_bits(GEN8_WAIT_OFFSET(wait->engine,
+						       signal->engine->id)));
 	intel_ring_advance(waiter);
 
 	/* When the !RCS engines idle waiting upon a semaphore, they lose their
@@ -1453,48 +1447,37 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
 	 * We do this on the i915_switch_context() following the wait and
 	 * before the dispatch.
 	 */
-	ppgtt = waiter_req->ctx->ppgtt;
-	if (ppgtt && waiter_req->engine->id != RCS)
-		ppgtt->pd_dirty_rings |= intel_engine_flag(waiter_req->engine);
+	ppgtt = wait->ctx->ppgtt;
+	if (ppgtt && wait->engine->id != RCS)
+		ppgtt->pd_dirty_rings |= intel_engine_flag(wait->engine);
 	return 0;
 }
 
 static int
-gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
-	       struct intel_engine_cs *signaller,
-	       u32 seqno)
+gen6_ring_sync(struct drm_i915_gem_request *wait,
+	       struct drm_i915_gem_request *signal)
 {
-	struct intel_ring *waiter = waiter_req->ring;
+	struct intel_ring *waiter = wait->ring;
 	u32 dw1 = MI_SEMAPHORE_MBOX |
 		  MI_SEMAPHORE_COMPARE |
 		  MI_SEMAPHORE_REGISTER;
-	u32 wait_mbox = signaller->semaphore.mbox.wait[waiter_req->engine->id];
+	u32 wait_mbox = signal->engine->semaphore.mbox.wait[wait->engine->id];
 	int ret;
 
-	/* Throughout all of the GEM code, seqno passed implies our current
-	 * seqno is >= the last seqno executed. However for hardware the
-	 * comparison is strictly greater than.
-	 */
-	seqno -= 1;
-
 	WARN_ON(wait_mbox == MI_SEMAPHORE_SYNC_INVALID);
 
-	ret = intel_ring_begin(waiter_req, 4);
+	ret = intel_ring_begin(wait, 4);
 	if (ret)
 		return ret;
 
-	/* If seqno wrap happened, omit the wait with no-ops */
-	if (likely(!i915_gem_has_seqno_wrapped(waiter_req->i915, seqno))) {
-		intel_ring_emit(waiter, dw1 | wait_mbox);
-		intel_ring_emit(waiter, seqno);
-		intel_ring_emit(waiter, 0);
-		intel_ring_emit(waiter, MI_NOOP);
-	} else {
-		intel_ring_emit(waiter, MI_NOOP);
-		intel_ring_emit(waiter, MI_NOOP);
-		intel_ring_emit(waiter, MI_NOOP);
-		intel_ring_emit(waiter, MI_NOOP);
-	}
+	intel_ring_emit(waiter, dw1 | wait_mbox);
+	/* Throughout all of the GEM code, seqno passed implies our current
+	 * seqno is >= the last seqno executed. However for hardware the
+	 * comparison is strictly greater than.
+	 */
+	intel_ring_emit(waiter, signal->fence.seqno - 1);
+	intel_ring_emit(waiter, 0);
+	intel_ring_emit(waiter, MI_NOOP);
 	intel_ring_advance(waiter);
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index e9fb508fae86..b6a5f48c016f 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -283,9 +283,8 @@ struct intel_engine_cs {
 		};
 
 		/* AKA wait() */
-		int	(*sync_to)(struct drm_i915_gem_request *to_req,
-				   struct intel_engine_cs *from,
-				   u32 seqno);
+		int	(*sync_to)(struct drm_i915_gem_request *to,
+				   struct drm_i915_gem_request *from);
 		int	(*signal)(struct drm_i915_gem_request *signaller_req);
 	} semaphore;
 
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 43/62] drm/i915: Introduce i915_gem_active for request tracking
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (41 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 42/62] drm/i915: Simplify calling engine->sync_to Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 44/62] drm/i915: Prepare i915_gem_active for annotations Chris Wilson
                   ` (20 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

In the next patch, request tracking is made more generic and for that we
need a new expanded struct. To separate out the logic changes from the
mechanical churn, we split out the structure renaming into this patch.

v2: Writer's block. Add some spiel about why we track requests.
v3: Now i915_gem_active.
v4: Now with i915_gem_active_set() for attaching to the active request.
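
As a before/after sketch of the mechanical rename (illustrative only,
not part of the diff):

	/* before */
	if (obj->last_write_req)
		engine = obj->last_write_req->engine;

	/* after */
	if (obj->last_write.request)
		engine = obj->last_write.request->engine;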

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c        | 10 +++---
 drivers/gpu/drm/i915/i915_drv.h            |  9 +++--
 drivers/gpu/drm/i915/i915_gem.c            | 58 +++++++++++++++---------------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  4 +--
 drivers/gpu/drm/i915/i915_gem_fence.c      |  6 ++--
 drivers/gpu/drm/i915/i915_gem_request.h    | 41 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_tiling.c     |  2 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c    |  2 +-
 drivers/gpu/drm/i915/i915_gpu_error.c      |  6 ++--
 drivers/gpu/drm/i915/intel_display.c       |  8 ++---
 10 files changed, 93 insertions(+), 53 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 48c8f74e6256..2edbf9e95e7f 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -155,10 +155,10 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		   obj->base.write_domain);
 	for_each_engine_id(engine, dev_priv, id)
 		seq_printf(m, "%x ",
-				i915_gem_request_get_seqno(obj->last_read_req[id]));
+			   i915_gem_request_get_seqno(obj->last_read[id].request));
 	seq_printf(m, "] %x %x%s%s%s",
-		   i915_gem_request_get_seqno(obj->last_write_req),
-		   i915_gem_request_get_seqno(obj->last_fenced_req),
+		   i915_gem_request_get_seqno(obj->last_write.request),
+		   i915_gem_request_get_seqno(obj->last_fence.request),
 		   i915_cache_level_str(to_i915(obj->base.dev), obj->cache_level),
 		   obj->dirty ? " dirty" : "",
 		   obj->madv == I915_MADV_DONTNEED ? " purgeable" : "");
@@ -192,8 +192,8 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		*t = '\0';
 		seq_printf(m, " (%s mappable)", s);
 	}
-	if (obj->last_write_req != NULL)
-		seq_printf(m, " (%s)", obj->last_write_req->engine->name);
+	if (obj->last_write.request != NULL)
+		seq_printf(m, " (%s)", obj->last_write.request->engine->name);
 	if (obj->frontbuffer_bits)
 		seq_printf(m, " (frontbuffer: 0x%03x)", obj->frontbuffer_bits);
 }
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e9b48808deef..b8df48e0e32b 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2220,11 +2220,10 @@ struct drm_i915_gem_object {
 	 * requests on one ring where the write request is older than the
 	 * read request. This allows for the CPU to read from an active
 	 * buffer by only waiting for the write to complete.
-	 * */
-	struct drm_i915_gem_request *last_read_req[I915_NUM_ENGINES];
-	struct drm_i915_gem_request *last_write_req;
-	/** Breadcrumb of last fenced GPU access to the buffer. */
-	struct drm_i915_gem_request *last_fenced_req;
+	 */
+	struct i915_gem_active last_read[I915_NUM_ENGINES];
+	struct i915_gem_active last_write;
+	struct i915_gem_active last_fence;
 
 	/** Current tiling stride for the object, if it's tiled. */
 	uint32_t stride;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b75185273b0e..8c3b39a8e974 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1119,23 +1119,23 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
 		return 0;
 
 	if (readonly) {
-		if (obj->last_write_req != NULL) {
-			ret = i915_wait_request(obj->last_write_req);
+		if (obj->last_write.request != NULL) {
+			ret = i915_wait_request(obj->last_write.request);
 			if (ret)
 				return ret;
 
-			i = obj->last_write_req->engine->id;
-			if (obj->last_read_req[i] == obj->last_write_req)
+			i = obj->last_write.request->engine->id;
+			if (obj->last_read[i].request == obj->last_write.request)
 				i915_gem_object_retire__read(obj, i);
 			else
 				i915_gem_object_retire__write(obj);
 		}
 	} else {
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
-			if (obj->last_read_req[i] == NULL)
+			if (obj->last_read[i].request == NULL)
 				continue;
 
-			ret = i915_wait_request(obj->last_read_req[i]);
+			ret = i915_wait_request(obj->last_read[i].request);
 			if (ret)
 				return ret;
 
@@ -1153,9 +1153,9 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
 {
 	int ring = req->engine->id;
 
-	if (obj->last_read_req[ring] == req)
+	if (obj->last_read[ring].request == req)
 		i915_gem_object_retire__read(obj, ring);
-	else if (obj->last_write_req == req)
+	else if (obj->last_write.request == req)
 		i915_gem_object_retire__write(obj);
 
 	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
@@ -1184,7 +1184,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 	if (readonly) {
 		struct drm_i915_gem_request *req;
 
-		req = obj->last_write_req;
+		req = obj->last_write.request;
 		if (req == NULL)
 			return 0;
 
@@ -1193,7 +1193,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
 			struct drm_i915_gem_request *req;
 
-			req = obj->last_read_req[i];
+			req = obj->last_read[i].request;
 			if (req == NULL)
 				continue;
 
@@ -2109,7 +2109,7 @@ void i915_vma_move_to_active(struct i915_vma *vma,
 	obj->active |= intel_engine_flag(engine);
 
 	list_move_tail(&obj->engine_list[engine->id], &engine->active_list);
-	i915_gem_request_assign(&obj->last_read_req[engine->id], req);
+	i915_gem_active_set(&obj->last_read[engine->id], req);
 
 	list_move_tail(&vma->vm_link, &vma->vm->active_list);
 }
@@ -2117,10 +2117,10 @@ void i915_vma_move_to_active(struct i915_vma *vma,
 static void
 i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
 {
-	GEM_BUG_ON(obj->last_write_req == NULL);
-	GEM_BUG_ON(!(obj->active & intel_engine_flag(obj->last_write_req->engine)));
+	GEM_BUG_ON(obj->last_write.request == NULL);
+	GEM_BUG_ON(!(obj->active & intel_engine_flag(obj->last_write.request->engine)));
 
-	i915_gem_request_assign(&obj->last_write_req, NULL);
+	i915_gem_request_assign(&obj->last_write.request, NULL);
 	intel_fb_obj_flush(obj, true, ORIGIN_CS);
 }
 
@@ -2129,13 +2129,13 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
 {
 	struct i915_vma *vma;
 
-	GEM_BUG_ON(obj->last_read_req[ring] == NULL);
+	GEM_BUG_ON(obj->last_read[ring].request == NULL);
 	GEM_BUG_ON(!(obj->active & (1 << ring)));
 
 	list_del_init(&obj->engine_list[ring]);
-	i915_gem_request_assign(&obj->last_read_req[ring], NULL);
+	i915_gem_request_assign(&obj->last_read[ring].request, NULL);
 
-	if (obj->last_write_req && obj->last_write_req->engine->id == ring)
+	if (obj->last_write.request && obj->last_write.request->engine->id == ring)
 		i915_gem_object_retire__write(obj);
 
 	obj->active &= ~(1 << ring);
@@ -2154,7 +2154,7 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
 			list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
 	}
 
-	i915_gem_request_assign(&obj->last_fenced_req, NULL);
+	i915_gem_request_assign(&obj->last_fence.request, NULL);
 	i915_gem_object_put(obj);
 }
 
@@ -2347,7 +2347,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
 				       struct drm_i915_gem_object,
 				       engine_list[engine->id]);
 
-		if (!list_empty(&obj->last_read_req[engine->id]->list))
+		if (!list_empty(&obj->last_read[engine->id].request->list))
 			break;
 
 		i915_gem_object_retire__read(obj, engine->id);
@@ -2453,7 +2453,7 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		struct drm_i915_gem_request *req;
 
-		req = obj->last_read_req[i];
+		req = obj->last_read[i].request;
 		if (req == NULL)
 			continue;
 
@@ -2527,10 +2527,10 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	i915_gem_object_put(obj);
 
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
-		if (obj->last_read_req[i] == NULL)
+		if (obj->last_read[i].request == NULL)
 			continue;
 
-		req[n++] = i915_gem_request_get(obj->last_read_req[i]);
+		req[n++] = i915_gem_request_get(obj->last_read[i].request);
 	}
 
 	mutex_unlock(&dev->struct_mutex);
@@ -2621,12 +2621,12 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
 
 	n = 0;
 	if (readonly) {
-		if (obj->last_write_req)
-			req[n++] = obj->last_write_req;
+		if (obj->last_write.request)
+			req[n++] = obj->last_write.request;
 	} else {
 		for (i = 0; i < I915_NUM_ENGINES; i++)
-			if (obj->last_read_req[i])
-				req[n++] = obj->last_read_req[i];
+			if (obj->last_read[i].request)
+				req[n++] = obj->last_read[i].request;
 	}
 	for (i = 0; i < n; i++) {
 		ret = __i915_gem_object_sync(obj, to, req[i]);
@@ -3707,12 +3707,12 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
 			struct drm_i915_gem_request *req;
 
-			req = obj->last_read_req[i];
+			req = obj->last_read[i].request;
 			if (req)
 				args->busy |= 1 << (16 + req->engine->exec_id);
 		}
-		if (obj->last_write_req)
-			args->busy |= obj->last_write_req->engine->exec_id;
+		if (obj->last_write.request)
+			args->busy |= obj->last_write.request->engine->exec_id;
 	}
 
 unref:
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index c2d703323fc2..5c7eb3c93a86 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1117,7 +1117,7 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
 
 		i915_vma_move_to_active(vma, req);
 		if (obj->base.write_domain) {
-			i915_gem_request_assign(&obj->last_write_req, req);
+			i915_gem_active_set(&obj->last_write, req);
 
 			intel_fb_obj_invalidate(obj, ORIGIN_CS);
 
@@ -1125,7 +1125,7 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
 			obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
 		}
 		if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
-			i915_gem_request_assign(&obj->last_fenced_req, req);
+			i915_gem_active_set(&obj->last_fence, req);
 			if (entry->flags & __EXEC_OBJECT_HAS_FENCE) {
 				struct drm_i915_private *dev_priv = engine->i915;
 				list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
diff --git a/drivers/gpu/drm/i915/i915_gem_fence.c b/drivers/gpu/drm/i915/i915_gem_fence.c
index 2b6bdc267fb5..9f8ce13d2f77 100644
--- a/drivers/gpu/drm/i915/i915_gem_fence.c
+++ b/drivers/gpu/drm/i915/i915_gem_fence.c
@@ -261,12 +261,12 @@ static inline void i915_gem_object_fence_lost(struct drm_i915_gem_object *obj)
 static int
 i915_gem_object_wait_fence(struct drm_i915_gem_object *obj)
 {
-	if (obj->last_fenced_req) {
-		int ret = i915_wait_request(obj->last_fenced_req);
+	if (obj->last_fence.request) {
+		int ret = i915_wait_request(obj->last_fence.request);
 		if (ret)
 			return ret;
 
-		i915_gem_request_assign(&obj->last_fenced_req, NULL);
+		i915_gem_request_assign(&obj->last_fence.request, NULL);
 	}
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 500ae6066864..89d7bb651f67 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -243,4 +243,45 @@ static inline bool i915_spin_request(const struct drm_i915_gem_request *request,
 		__i915_spin_request(request, state, timeout_us));
 }
 
+/* We treat requests as fences. This is not to be confused with our
+ * "fence registers" but pipeline synchronisation objects ala GL_ARB_sync.
+ * We use the fences to synchronize access from the CPU with activity on the
+ * GPU, for example, we should not rewrite an object's PTE whilst the GPU
+ * is reading them. We also track fences at a higher level to provide
+ * implicit synchronisation around GEM objects, e.g. set-domain will wait
+ * for outstanding GPU rendering before marking the object ready for CPU
+ * access, or a pageflip will wait until the GPU is complete before showing
+ * the frame on the scanout.
+ *
+ * In order to use a fence, the object must track the fence it needs to
+ * serialise with. For example, GEM objects want to track both read and
+ * write access so that we can perform concurrent read operations between
+ * the CPU and GPU engines, as well as waiting for all rendering to
+ * complete, or waiting for the last GPU user of a "fence register". The
+ * object then embeds a @i915_gem_active to track the most recent (in
+ * retirement order) request relevant for the desired mode of access.
+ * The @i915_gem_active is updated with i915_gem_active_set() to
+ * track the most recent fence request, typically this is done as part of
+ * i915_vma_move_to_active().
+ *
+ * When the @i915_gem_active completes (is retired), it will
+ * signal its completion to the owner through a callback as well as mark
+ * itself as idle (i915_gem_active.request == NULL). The owner
+ * can then perform any action, such as delayed freeing of an active
+ * resource including itself.
+ */
+struct i915_gem_active {
+	struct drm_i915_gem_request *request;
+};
+
+static inline void
+i915_gem_active_set(struct i915_gem_active *active,
+		    struct drm_i915_gem_request *request)
+{
+	i915_gem_request_assign(&active->request, request);
+}
+
+#define for_each_active(mask, idx) \
+	for (; mask ? idx = ffs(mask) - 1, 1 : 0; mask &= ~(1 << idx))
+
 #endif /* I915_GEM_REQUEST_H */
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index adeb0621e1f1..fc78f49c5815 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -242,7 +242,7 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
 			}
 
 			obj->fence_dirty =
-				obj->last_fenced_req ||
+				obj->last_fence.request ||
 				obj->fence_reg != I915_FENCE_REG_NONE;
 
 			obj->tiling_mode = args->tiling_mode;
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index ca8b82ab93d6..93c2dea90a89 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -74,7 +74,7 @@ static void wait_rendering(struct drm_i915_gem_object *obj)
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		struct drm_i915_gem_request *req;
 
-		req = obj->last_read_req[i];
+		req = obj->last_read[i].request;
 		if (req == NULL)
 			continue;
 
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 934663166b28..e68718265619 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -749,8 +749,8 @@ static void capture_bo(struct drm_i915_error_buffer *err,
 	err->size = obj->base.size;
 	err->name = obj->base.name;
 	for (i = 0; i < I915_NUM_ENGINES; i++)
-		err->rseqno[i] = i915_gem_request_get_seqno(obj->last_read_req[i]);
-	err->wseqno = i915_gem_request_get_seqno(obj->last_write_req);
+		err->rseqno[i] = i915_gem_request_get_seqno(obj->last_read[i].request);
+	err->wseqno = i915_gem_request_get_seqno(obj->last_write.request);
 	err->gtt_offset = vma->node.start;
 	err->read_domains = obj->base.read_domains;
 	err->write_domain = obj->base.write_domain;
@@ -762,7 +762,7 @@ static void capture_bo(struct drm_i915_error_buffer *err,
 	err->dirty = obj->dirty;
 	err->purgeable = obj->madv != I915_MADV_WILLNEED;
 	err->userptr = obj->userptr.mm != NULL;
-	err->ring = obj->last_write_req ? obj->last_write_req->engine->id : -1;
+	err->ring = obj->last_write.request ? obj->last_write.request->engine->id : -1;
 	err->cache_level = obj->cache_level;
 }
 
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 175a553dc6c8..92e7bc76cce0 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11427,7 +11427,7 @@ static bool use_mmio_flip(struct intel_engine_cs *engine,
 						       false))
 		return true;
 	else
-		return engine != i915_gem_request_get_engine(obj->last_write_req);
+		return engine != i915_gem_request_get_engine(obj->last_write.request);
 }
 
 static void skl_do_mmio_flip(struct intel_crtc *intel_crtc,
@@ -11727,7 +11727,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 	} else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
 		engine = &dev_priv->engine[BCS];
 	} else if (INTEL_INFO(dev)->gen >= 7) {
-		engine = i915_gem_request_get_engine(obj->last_write_req);
+		engine = i915_gem_request_get_engine(obj->last_write.request);
 		if (engine == NULL || engine->id != RCS)
 			engine = &dev_priv->engine[BCS];
 	} else {
@@ -11749,7 +11749,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 		INIT_WORK(&work->mmio_work, intel_mmio_flip_work_func);
 
 		i915_gem_request_assign(&work->flip_queued_req,
-					obj->last_write_req);
+					obj->last_write.request);
 
 		schedule_work(&work->mmio_work);
 	} else {
@@ -13972,7 +13972,7 @@ intel_prepare_plane_fb(struct drm_plane *plane,
 				to_intel_plane_state(new_state);
 
 			i915_gem_request_assign(&plane_state->wait_req,
-						obj->last_write_req);
+						obj->last_write.request);
 		}
 
 		i915_gem_track_fb(old_obj, obj, intel_plane->frontbuffer_bit);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 44/62] drm/i915: Prepare i915_gem_active for annotations
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (42 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 43/62] drm/i915: Introduce i915_gem_active for request tracking Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 45/62] drm/i915: Mark up i915_gem_active for locking annotation Chris Wilson
                   ` (19 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

In the future, we will want to add annotations to the i915_gem_active
struct. The API is thus expanded to hide direct access to the contents
of i915_gem_active, mediating access instead through a number of helpers.
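
For example, a caller that used to dereference obj->last_write.request
directly would now go through the accessors (a sketch using the helpers
added below; example_flush_write() is a hypothetical name):

	static int example_flush_write(struct drm_i915_gem_object *obj)
	{
		struct drm_i915_gem_request *req;

		/* peek does not take a reference; struct_mutex must be held */
		req = i915_gem_active_peek(&obj->last_write);
		if (!req)
			return 0;

		return i915_wait_request(req);
	}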

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c     |  13 ++--
 drivers/gpu/drm/i915/i915_gem.c         |  91 +++++++++++++----------
 drivers/gpu/drm/i915/i915_gem_fence.c   |  11 ++-
 drivers/gpu/drm/i915/i915_gem_request.h | 128 +++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem_tiling.c  |   2 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c |   8 +-
 drivers/gpu/drm/i915/i915_gpu_error.c   |   9 ++-
 drivers/gpu/drm/i915/intel_display.c    |  12 ++-
 8 files changed, 206 insertions(+), 68 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 2edbf9e95e7f..fefb35c4becc 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -155,10 +155,10 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		   obj->base.write_domain);
 	for_each_engine_id(engine, dev_priv, id)
 		seq_printf(m, "%x ",
-			   i915_gem_request_get_seqno(obj->last_read[id].request));
+			   i915_gem_active_get_seqno(&obj->last_read[id]));
 	seq_printf(m, "] %x %x%s%s%s",
-		   i915_gem_request_get_seqno(obj->last_write.request),
-		   i915_gem_request_get_seqno(obj->last_fence.request),
+		   i915_gem_active_get_seqno(&obj->last_write),
+		   i915_gem_active_get_seqno(&obj->last_fence),
 		   i915_cache_level_str(to_i915(obj->base.dev), obj->cache_level),
 		   obj->dirty ? " dirty" : "",
 		   obj->madv == I915_MADV_DONTNEED ? " purgeable" : "");
@@ -192,8 +192,11 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		*t = '\0';
 		seq_printf(m, " (%s mappable)", s);
 	}
-	if (obj->last_write.request != NULL)
-		seq_printf(m, " (%s)", obj->last_write.request->engine->name);
+
+	engine = i915_gem_active_get_engine(&obj->last_write);
+	if (engine)
+		seq_printf(m, " (%s)", engine->name);
+
 	if (obj->frontbuffer_bits)
 		seq_printf(m, " (frontbuffer: 0x%03x)", obj->frontbuffer_bits);
 }
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 8c3b39a8e974..99e3b269b4b9 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1113,29 +1113,32 @@ int
 i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
 			       bool readonly)
 {
+	struct drm_i915_gem_request *request;
 	int ret, i;
 
 	if (!obj->active)
 		return 0;
 
 	if (readonly) {
-		if (obj->last_write.request != NULL) {
-			ret = i915_wait_request(obj->last_write.request);
+		request = i915_gem_active_peek(&obj->last_write);
+		if (request) {
+			ret = i915_wait_request(request);
 			if (ret)
 				return ret;
 
-			i = obj->last_write.request->engine->id;
-			if (obj->last_read[i].request == obj->last_write.request)
+			i = request->engine->id;
+			if (i915_gem_active_peek(&obj->last_read[i]) == request)
 				i915_gem_object_retire__read(obj, i);
 			else
 				i915_gem_object_retire__write(obj);
 		}
 	} else {
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
-			if (obj->last_read[i].request == NULL)
+			request = i915_gem_active_peek(&obj->last_read[i]);
+			if (!request)
 				continue;
 
-			ret = i915_wait_request(obj->last_read[i].request);
+			ret = i915_wait_request(request);
 			if (ret)
 				return ret;
 
@@ -1153,9 +1156,9 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
 {
 	int ring = req->engine->id;
 
-	if (obj->last_read[ring].request == req)
+	if (i915_gem_active_peek(&obj->last_read[ring]) == req)
 		i915_gem_object_retire__read(obj, ring);
-	else if (obj->last_write.request == req)
+	else if (i915_gem_active_peek(&obj->last_write) == req)
 		i915_gem_object_retire__write(obj);
 
 	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
@@ -1184,20 +1187,20 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 	if (readonly) {
 		struct drm_i915_gem_request *req;
 
-		req = obj->last_write.request;
+		req = i915_gem_active_peek(&obj->last_write);
 		if (req == NULL)
 			return 0;
 
-		requests[n++] = i915_gem_request_get(req);
+		requests[n++] = req;
 	} else {
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
 			struct drm_i915_gem_request *req;
 
-			req = obj->last_read[i].request;
+			req = i915_gem_active_peek(&obj->last_read[i]);
 			if (req == NULL)
 				continue;
 
-			requests[n++] = i915_gem_request_get(req);
+			requests[n++] = req;
 		}
 	}
 
@@ -2117,25 +2120,27 @@ void i915_vma_move_to_active(struct i915_vma *vma,
 static void
 i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
 {
-	GEM_BUG_ON(obj->last_write.request == NULL);
-	GEM_BUG_ON(!(obj->active & intel_engine_flag(obj->last_write.request->engine)));
+	GEM_BUG_ON(!__i915_gem_active_is_busy(&obj->last_write));
+	GEM_BUG_ON(!(obj->active & intel_engine_flag(i915_gem_active_get_engine(&obj->last_write))));
 
-	i915_gem_request_assign(&obj->last_write.request, NULL);
+	i915_gem_active_set(&obj->last_write, NULL);
 	intel_fb_obj_flush(obj, true, ORIGIN_CS);
 }
 
 static void
 i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
 {
+	struct intel_engine_cs *engine;
 	struct i915_vma *vma;
 
-	GEM_BUG_ON(obj->last_read[ring].request == NULL);
+	GEM_BUG_ON(!__i915_gem_active_is_busy(&obj->last_read[ring]));
 	GEM_BUG_ON(!(obj->active & (1 << ring)));
 
 	list_del_init(&obj->engine_list[ring]);
-	i915_gem_request_assign(&obj->last_read[ring].request, NULL);
+	i915_gem_active_set(&obj->last_read[ring], NULL);
 
-	if (obj->last_write.request && obj->last_write.request->engine->id == ring)
+	engine = i915_gem_active_get_engine(&obj->last_write);
+	if (engine && engine->id == ring)
 		i915_gem_object_retire__write(obj);
 
 	obj->active &= ~(1 << ring);
@@ -2154,7 +2159,7 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
 			list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
 	}
 
-	i915_gem_request_assign(&obj->last_fence.request, NULL);
+	i915_gem_active_set(&obj->last_fence, NULL);
 	i915_gem_object_put(obj);
 }
 
@@ -2347,7 +2352,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
 				       struct drm_i915_gem_object,
 				       engine_list[engine->id]);
 
-		if (!list_empty(&obj->last_read[engine->id].request->list))
+		if (!list_empty(&i915_gem_active_peek(&obj->last_read[engine->id])->list))
 			break;
 
 		i915_gem_object_retire__read(obj, engine->id);
@@ -2453,7 +2458,7 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		struct drm_i915_gem_request *req;
 
-		req = obj->last_read[i].request;
+		req = i915_gem_active_peek(&obj->last_read[i]);
 		if (req == NULL)
 			continue;
 
@@ -2491,7 +2496,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 {
 	struct drm_i915_gem_wait *args = data;
 	struct drm_i915_gem_object *obj;
-	struct drm_i915_gem_request *req[I915_NUM_ENGINES];
+	struct drm_i915_gem_request *requests[I915_NUM_ENGINES];
 	int i, n = 0;
 	int ret;
 
@@ -2527,20 +2532,21 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	i915_gem_object_put(obj);
 
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
-		if (obj->last_read[i].request == NULL)
-			continue;
+		struct drm_i915_gem_request *req;
 
-		req[n++] = i915_gem_request_get(obj->last_read[i].request);
+		req = i915_gem_active_get(&obj->last_read[i]);
+		if (req)
+			requests[n++] = req;
 	}
 
 	mutex_unlock(&dev->struct_mutex);
 
 	for (i = 0; i < n; i++) {
 		if (ret == 0)
-			ret = __i915_wait_request(req[i], true,
+			ret = __i915_wait_request(requests[i], true,
 						  args->timeout_ns > 0 ? &args->timeout_ns : NULL,
 						  to_rps_client(file));
-		i915_gem_request_put(req[i]);
+		i915_gem_request_put(requests[i]);
 	}
 	return ret;
 
@@ -2613,7 +2619,7 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
 		     struct drm_i915_gem_request *to)
 {
 	const bool readonly = obj->base.pending_write_domain == 0;
-	struct drm_i915_gem_request *req[I915_NUM_ENGINES];
+	struct drm_i915_gem_request *requests[I915_NUM_ENGINES];
 	int ret, i, n;
 
 	if (!obj->active)
@@ -2621,15 +2627,22 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
 
 	n = 0;
 	if (readonly) {
-		if (obj->last_write.request)
-			req[n++] = obj->last_write.request;
+		struct drm_i915_gem_request *req;
+
+		req = i915_gem_active_peek(&obj->last_write);
+		if (req)
+			requests[n++] = req;
 	} else {
-		for (i = 0; i < I915_NUM_ENGINES; i++)
-			if (obj->last_read[i].request)
-				req[n++] = obj->last_read[i].request;
+		for (i = 0; i < I915_NUM_ENGINES; i++) {
+			struct drm_i915_gem_request *req;
+
+			req = i915_gem_active_peek(&obj->last_read[i]);
+			if (req)
+				requests[n++] = req;
+		}
 	}
 	for (i = 0; i < n; i++) {
-		ret = __i915_gem_object_sync(obj, to, req[i]);
+		ret = __i915_gem_object_sync(obj, to, requests[i]);
 		if (ret)
 			return ret;
 	}
@@ -3702,17 +3715,17 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 
 	args->busy = 0;
 	if (obj->active) {
+		struct drm_i915_gem_request *req;
 		int i;
 
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
-			struct drm_i915_gem_request *req;
-
-			req = obj->last_read[i].request;
+			req = i915_gem_active_peek(&obj->last_read[i]);
 			if (req)
 				args->busy |= 1 << (16 + req->engine->exec_id);
 		}
-		if (obj->last_write.request)
-			args->busy |= obj->last_write.request->engine->exec_id;
+		req = i915_gem_active_peek(&obj->last_write);
+		if (req)
+			args->busy |= req->engine->exec_id;
 	}
 
 unref:
diff --git a/drivers/gpu/drm/i915/i915_gem_fence.c b/drivers/gpu/drm/i915/i915_gem_fence.c
index 9f8ce13d2f77..301344252b18 100644
--- a/drivers/gpu/drm/i915/i915_gem_fence.c
+++ b/drivers/gpu/drm/i915/i915_gem_fence.c
@@ -261,14 +261,13 @@ static inline void i915_gem_object_fence_lost(struct drm_i915_gem_object *obj)
 static int
 i915_gem_object_wait_fence(struct drm_i915_gem_object *obj)
 {
-	if (obj->last_fence.request) {
-		int ret = i915_wait_request(obj->last_fence.request);
-		if (ret)
-			return ret;
+	int ret;
 
-		i915_gem_request_assign(&obj->last_fence.request, NULL);
-	}
+	ret = i915_gem_active_wait(&obj->last_fence);
+	if (ret)
+		return ret;
 
+	i915_gem_active_set(&obj->last_fence, NULL);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 89d7bb651f67..56e312b95407 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -271,14 +271,138 @@ static inline bool i915_spin_request(const struct drm_i915_gem_request *request,
  * resource including itself.
  */
 struct i915_gem_active {
-	struct drm_i915_gem_request *request;
+	struct drm_i915_gem_request *__request;
 };
 
+/**
+ * i915_gem_active_set - updates the tracker to watch the current request
+ * @active - the active tracker
+ * @request - the request to watch
+ *
+ * i915_gem_active_set() watches the given @request for completion. Whilst
+ * that @request is busy, the @active reports busy. When that @request is
+ * retired, the @active tracker is updated to report idle.
+ */
 static inline void
 i915_gem_active_set(struct i915_gem_active *active,
 		    struct drm_i915_gem_request *request)
 {
-	i915_gem_request_assign(&active->request, request);
+	i915_gem_request_assign(&active->__request, request);
+}
+
+/**
+ * i915_gem_active_peek - report the request being monitored
+ * @active - the active tracker
+ *
+ * i915_gem_active_peek() returns the current request being tracked, or NULL.
+ * It does not obtain a reference on the request for the caller, so the
+ * caller must hold struct_mutex.
+ */
+static inline struct drm_i915_gem_request *
+i915_gem_active_peek(const struct i915_gem_active *active)
+{
+	return active->__request;
+}
+
+/**
+ * i915_gem_active_get - return a reference to the active request
+ * @active - the active tracker
+ *
+ * i915_gem_active_get() returns a reference to the active request, or NULL
+ * if the active tracker is idle. The caller must hold struct_mutex.
+ */
+static inline struct drm_i915_gem_request *
+i915_gem_active_get(const struct i915_gem_active *active)
+{
+	struct drm_i915_gem_request *request;
+
+	request = i915_gem_active_peek(active);
+	if (!request || i915_gem_request_completed(request))
+		return NULL;
+
+	return i915_gem_request_get(request);
+}
+
+/**
+ * __i915_gem_active_is_busy - report whether the active tracker is assigned
+ * @active - the active tracker
+ *
+ * __i915_gem_active_is_busy() returns true if the active tracker is currently
+ * assigned to a request. Due to the lazy retiring, that request may be idle
+ * and this may report stale information.
+ */
+static inline bool
+__i915_gem_active_is_busy(const struct i915_gem_active *active)
+{
+	return i915_gem_active_peek(active);
+}
+
+/**
+ * i915_gem_active_is_idle - report whether the active tracker is idle
+ * @active - the active tracker
+ *
+ * i915_gem_active_is_idle() returns true if the active tracker is currently
+ * unassigned or if the request is complete (but not yet retired). Requires
+ * the caller to hold struct_mutex (but that can be relaxed if desired).
+ */
+static inline bool
+i915_gem_active_is_idle(const struct i915_gem_active *active)
+{
+	struct drm_i915_gem_request *request;
+
+	request = i915_gem_active_peek(active);
+	if (!request || i915_gem_request_completed(request))
+		return true;
+
+	return false;
+}
+
+/**
+ * i915_gem_active_wait - waits until the request is completed
+ * @active - the active request on which to wait
+ *
+ * i915_gem_active_wait() waits until the request is completed before
+ * returning.
+ */
+static inline int __must_check
+i915_gem_active_wait(const struct i915_gem_active *active)
+{
+	struct drm_i915_gem_request *request;
+
+	request = i915_gem_active_peek(active);
+	if (!request)
+		return 0;
+
+	return i915_wait_request(request);
+}
+
+/**
+ * i915_gem_active_retire - waits until the request is retired
+ * @active - the active request on which to wait
+ *
+ * Unlike i915_gem_active_wait(), i915_gem_active_retire() will
+ * make sure the request is retired before returning.
+ */
+static inline int __must_check
+i915_gem_active_retire(const struct i915_gem_active *active)
+{
+	return i915_gem_active_wait(active);
+}
+
+/* Convenience functions for peeking at state inside active's request whilst
+ * guarded by the struct_mutex.
+ */
+
+static inline uint32_t
+i915_gem_active_get_seqno(const struct i915_gem_active *active)
+{
+	return i915_gem_request_get_seqno(i915_gem_active_peek(active));
+}
+
+static inline struct intel_engine_cs *
+i915_gem_active_get_engine(const struct i915_gem_active *active)
+{
+	return i915_gem_request_get_engine(i915_gem_active_peek(active));
 }
 
 #define for_each_active(mask, idx) \
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index fc78f49c5815..9bc824421b66 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -242,7 +242,7 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
 			}
 
 			obj->fence_dirty =
-				obj->last_fence.request ||
+				!i915_gem_active_is_idle(&obj->last_fence) ||
 				obj->fence_reg != I915_FENCE_REG_NONE;
 
 			obj->tiling_mode = args->tiling_mode;
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 93c2dea90a89..d688558606f9 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -74,11 +74,9 @@ static void wait_rendering(struct drm_i915_gem_object *obj)
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		struct drm_i915_gem_request *req;
 
-		req = obj->last_read[i].request;
-		if (req == NULL)
-			continue;
-
-		requests[n++] = i915_gem_request_get(req);
+		req = i915_gem_active_get(&obj->last_read[i]);
+		if (req)
+			requests[n++] = req;
 	}
 
 	mutex_unlock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index e68718265619..1abcf316a825 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -744,13 +744,14 @@ static void capture_bo(struct drm_i915_error_buffer *err,
 		       struct i915_vma *vma)
 {
 	struct drm_i915_gem_object *obj = vma->obj;
+	struct intel_engine_cs *engine;
 	int i;
 
 	err->size = obj->base.size;
 	err->name = obj->base.name;
 	for (i = 0; i < I915_NUM_ENGINES; i++)
-		err->rseqno[i] = i915_gem_request_get_seqno(obj->last_read[i].request);
-	err->wseqno = i915_gem_request_get_seqno(obj->last_write.request);
+		err->rseqno[i] = i915_gem_active_get_seqno(&obj->last_read[i]);
+	err->wseqno = i915_gem_active_get_seqno(&obj->last_write);
 	err->gtt_offset = vma->node.start;
 	err->read_domains = obj->base.read_domains;
 	err->write_domain = obj->base.write_domain;
@@ -762,8 +763,10 @@ static void capture_bo(struct drm_i915_error_buffer *err,
 	err->dirty = obj->dirty;
 	err->purgeable = obj->madv != I915_MADV_WILLNEED;
 	err->userptr = obj->userptr.mm != NULL;
-	err->ring = obj->last_write.request ? obj->last_write.request->engine->id : -1;
 	err->cache_level = obj->cache_level;
+
+	engine = i915_gem_active_get_engine(&obj->last_write);
+	err->ring = engine ? engine->id : -1;
 }
 
 static u32 capture_active_bo(struct drm_i915_error_buffer *err,
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 92e7bc76cce0..839c46a007b5 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11427,7 +11427,7 @@ static bool use_mmio_flip(struct intel_engine_cs *engine,
 						       false))
 		return true;
 	else
-		return engine != i915_gem_request_get_engine(obj->last_write.request);
+		return engine != i915_gem_active_get_engine(&obj->last_write);
 }
 
 static void skl_do_mmio_flip(struct intel_crtc *intel_crtc,
@@ -11727,7 +11727,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 	} else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
 		engine = &dev_priv->engine[BCS];
 	} else if (INTEL_INFO(dev)->gen >= 7) {
-		engine = i915_gem_request_get_engine(obj->last_write.request);
+		engine = i915_gem_active_get_engine(&obj->last_write);
 		if (engine == NULL || engine->id != RCS)
 			engine = &dev_priv->engine[BCS];
 	} else {
@@ -11748,9 +11748,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 	if (mmio_flip) {
 		INIT_WORK(&work->mmio_work, intel_mmio_flip_work_func);
 
-		i915_gem_request_assign(&work->flip_queued_req,
-					obj->last_write.request);
-
+		work->flip_queued_req = i915_gem_active_get(&obj->last_write);
 		schedule_work(&work->mmio_work);
 	} else {
 		request = i915_gem_request_alloc(engine, engine->last_context);
@@ -13971,8 +13969,8 @@ intel_prepare_plane_fb(struct drm_plane *plane,
 			struct intel_plane_state *plane_state =
 				to_intel_plane_state(new_state);
 
-			i915_gem_request_assign(&plane_state->wait_req,
-						obj->last_write.request);
+			plane_state->wait_req =
+				i915_gem_active_get(&obj->last_write);
 		}
 
 		i915_gem_track_fb(old_obj, obj, intel_plane->frontbuffer_bit);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 45/62] drm/i915: Mark up i915_gem_active for locking annotation
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (43 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 44/62] drm/i915: Prepare i915_gem_active for annotations Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 46/62] drm/i915: Refactor blocking waits Chris Wilson
                   ` (18 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

The future annotations will track the locking used for access to ensure
that it is always sufficient. We make the preparations now to present
the API ahead of time and to make sure that GCC can eliminate the
unused parameter.

Before:	6298417 3619610  696320 10614347         a1f64b vmlinux
After:	6298417 3619610  696320 10614347         a1f64b vmlinux
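
A sketch of the resulting calling convention (the mutex argument is
ignored for now; it exists so the future annotations have a lock to
check against):

	/* callers now name the lock that guards the access */
	req = i915_gem_active_peek(&obj->last_write,
				   &obj->base.dev->struct_mutex);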

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c     | 12 +++++---
 drivers/gpu/drm/i915/i915_gem.c         | 49 ++++++++++++++++++++++-----------
 drivers/gpu/drm/i915/i915_gem_fence.c   |  3 +-
 drivers/gpu/drm/i915/i915_gem_request.h | 38 +++++++++++++++----------
 drivers/gpu/drm/i915/i915_gem_tiling.c  |  3 +-
 drivers/gpu/drm/i915/i915_gem_userptr.c |  3 +-
 drivers/gpu/drm/i915/i915_gpu_error.c   | 29 +++++++++++++++----
 drivers/gpu/drm/i915/intel_display.c    | 12 +++++---
 8 files changed, 102 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index fefb35c4becc..d35454d5683e 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -155,10 +155,13 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		   obj->base.write_domain);
 	for_each_engine_id(engine, dev_priv, id)
 		seq_printf(m, "%x ",
-			   i915_gem_active_get_seqno(&obj->last_read[id]));
+			   i915_gem_active_get_seqno(&obj->last_read[id],
+						     &obj->base.dev->struct_mutex));
 	seq_printf(m, "] %x %x%s%s%s",
-		   i915_gem_active_get_seqno(&obj->last_write),
-		   i915_gem_active_get_seqno(&obj->last_fence),
+		   i915_gem_active_get_seqno(&obj->last_write,
+					     &obj->base.dev->struct_mutex),
+		   i915_gem_active_get_seqno(&obj->last_fence,
+					     &obj->base.dev->struct_mutex),
 		   i915_cache_level_str(to_i915(obj->base.dev), obj->cache_level),
 		   obj->dirty ? " dirty" : "",
 		   obj->madv == I915_MADV_DONTNEED ? " purgeable" : "");
@@ -193,7 +196,8 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 		seq_printf(m, " (%s mappable)", s);
 	}
 
-	engine = i915_gem_active_get_engine(&obj->last_write);
+	engine = i915_gem_active_get_engine(&obj->last_write,
+					    &obj->base.dev->struct_mutex);
 	if (engine)
 		seq_printf(m, " (%s)", engine->name);
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 99e3b269b4b9..610378bd1be4 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1120,21 +1120,24 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
 		return 0;
 
 	if (readonly) {
-		request = i915_gem_active_peek(&obj->last_write);
+		request = i915_gem_active_peek(&obj->last_write,
+					       &obj->base.dev->struct_mutex);
 		if (request) {
 			ret = i915_wait_request(request);
 			if (ret)
 				return ret;
 
 			i = request->engine->id;
-			if (i915_gem_active_peek(&obj->last_read[i]) == request)
+			if (i915_gem_active_peek(&obj->last_read[i],
+						 &obj->base.dev->struct_mutex) == request)
 				i915_gem_object_retire__read(obj, i);
 			else
 				i915_gem_object_retire__write(obj);
 		}
 	} else {
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
-			request = i915_gem_active_peek(&obj->last_read[i]);
+			request = i915_gem_active_peek(&obj->last_read[i],
+						       &obj->base.dev->struct_mutex);
 			if (!request)
 				continue;
 
@@ -1156,9 +1159,11 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
 {
 	int ring = req->engine->id;
 
-	if (i915_gem_active_peek(&obj->last_read[ring]) == req)
+	if (i915_gem_active_peek(&obj->last_read[ring],
+				 &obj->base.dev->struct_mutex) == req)
 		i915_gem_object_retire__read(obj, ring);
-	else if (i915_gem_active_peek(&obj->last_write) == req)
+	else if (i915_gem_active_peek(&obj->last_write,
+				      &obj->base.dev->struct_mutex) == req)
 		i915_gem_object_retire__write(obj);
 
 	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
@@ -1187,7 +1192,8 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 	if (readonly) {
 		struct drm_i915_gem_request *req;
 
-		req = i915_gem_active_peek(&obj->last_write);
+		req = i915_gem_active_peek(&obj->last_write,
+					   &obj->base.dev->struct_mutex);
 		if (req == NULL)
 			return 0;
 
@@ -1196,7 +1202,8 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
 			struct drm_i915_gem_request *req;
 
-			req = i915_gem_active_peek(&obj->last_read[i]);
+			req = i915_gem_active_peek(&obj->last_read[i],
+						   &obj->base.dev->struct_mutex);
 			if (req == NULL)
 				continue;
 
@@ -2121,7 +2128,9 @@ static void
 i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
 {
 	GEM_BUG_ON(!__i915_gem_active_is_busy(&obj->last_write));
-	GEM_BUG_ON(!(obj->active & intel_engine_flag(i915_gem_active_get_engine(&obj->last_write))));
+	GEM_BUG_ON(!(obj->active &
+		     intel_engine_flag(i915_gem_active_get_engine(&obj->last_write,
+								  &obj->base.dev->struct_mutex))));
 
 	i915_gem_active_set(&obj->last_write, NULL);
 	intel_fb_obj_flush(obj, true, ORIGIN_CS);
@@ -2139,7 +2148,8 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
 	list_del_init(&obj->engine_list[ring]);
 	i915_gem_active_set(&obj->last_read[ring], NULL);
 
-	engine = i915_gem_active_get_engine(&obj->last_write);
+	engine = i915_gem_active_get_engine(&obj->last_write,
+					    &obj->base.dev->struct_mutex);
 	if (engine && engine->id == ring)
 		i915_gem_object_retire__write(obj);
 
@@ -2352,7 +2362,8 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
 				       struct drm_i915_gem_object,
 				       engine_list[engine->id]);
 
-		if (!list_empty(&i915_gem_active_peek(&obj->last_read[engine->id])->list))
+		if (!list_empty(&i915_gem_active_peek(&obj->last_read[engine->id],
+						      &obj->base.dev->struct_mutex)->list))
 			break;
 
 		i915_gem_object_retire__read(obj, engine->id);
@@ -2458,7 +2469,8 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		struct drm_i915_gem_request *req;
 
-		req = i915_gem_active_peek(&obj->last_read[i]);
+		req = i915_gem_active_peek(&obj->last_read[i],
+					   &obj->base.dev->struct_mutex);
 		if (req == NULL)
 			continue;
 
@@ -2534,7 +2546,8 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		struct drm_i915_gem_request *req;
 
-		req = i915_gem_active_get(&obj->last_read[i]);
+		req = i915_gem_active_get(&obj->last_read[i],
+					  &obj->base.dev->struct_mutex);
 		if (req)
 			requests[n++] = req;
 	}
@@ -2629,14 +2642,16 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
 	if (readonly) {
 		struct drm_i915_gem_request *req;
 
-		req = i915_gem_active_peek(&obj->last_write);
+		req = i915_gem_active_peek(&obj->last_write,
+					   &obj->base.dev->struct_mutex);
 		if (req)
 			requests[n++] = req;
 	} else {
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
 			struct drm_i915_gem_request *req;
 
-			req = i915_gem_active_peek(&obj->last_read[i]);
+			req = i915_gem_active_peek(&obj->last_read[i],
+						   &obj->base.dev->struct_mutex);
 			if (req)
 				requests[n++] = req;
 		}
@@ -3719,11 +3734,13 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 		int i;
 
 		for (i = 0; i < I915_NUM_ENGINES; i++) {
-			req = i915_gem_active_peek(&obj->last_read[i]);
+			req = i915_gem_active_peek(&obj->last_read[i],
+						   &obj->base.dev->struct_mutex);
 			if (req)
 				args->busy |= 1 << (16 + req->engine->exec_id);
 		}
-		req = i915_gem_active_peek(&obj->last_write);
+		req = i915_gem_active_peek(&obj->last_write,
+					   &obj->base.dev->struct_mutex);
 		if (req)
 			args->busy |= req->engine->exec_id;
 	}
diff --git a/drivers/gpu/drm/i915/i915_gem_fence.c b/drivers/gpu/drm/i915/i915_gem_fence.c
index 301344252b18..6c39da8dd6ea 100644
--- a/drivers/gpu/drm/i915/i915_gem_fence.c
+++ b/drivers/gpu/drm/i915/i915_gem_fence.c
@@ -263,7 +263,8 @@ i915_gem_object_wait_fence(struct drm_i915_gem_object *obj)
 {
 	int ret;
 
-	ret = i915_gem_active_wait(&obj->last_fence);
+	ret = i915_gem_active_wait(&obj->last_fence,
+				   &obj->base.dev->struct_mutex);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 56e312b95407..d6b8e801bb93 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -290,6 +290,12 @@ i915_gem_active_set(struct i915_gem_active *active,
 	i915_gem_request_assign(&active->__request, request);
 }
 
+static inline struct drm_i915_gem_request *
+__i915_gem_active_peek(const struct i915_gem_active *active)
+{
+	return active->__request;
+}
+
 /**
  * i915_gem_active_peek - report the request being monitored
  * @active - the active tracker
@@ -299,7 +305,7 @@ i915_gem_active_set(struct i915_gem_active *active,
  * caller must hold struct_mutex.
  */
 static inline struct drm_i915_gem_request *
-i915_gem_active_peek(const struct i915_gem_active *active)
+i915_gem_active_peek(const struct i915_gem_active *active, struct mutex *mutex)
 {
 	return active->__request;
 }
@@ -312,11 +318,11 @@ i915_gem_active_peek(const struct i915_gem_active *active)
  * if the active tracker is idle. The caller must hold struct_mutex.
  */
 static inline struct drm_i915_gem_request *
-i915_gem_active_get(const struct i915_gem_active *active)
+i915_gem_active_get(const struct i915_gem_active *active, struct mutex *mutex)
 {
 	struct drm_i915_gem_request *request;
 
-	request = i915_gem_active_peek(active);
+	request = i915_gem_active_peek(active, mutex);
 	if (!request || i915_gem_request_completed(request))
 		return NULL;
 
@@ -334,7 +340,7 @@ i915_gem_active_get(const struct i915_gem_active *active)
 static inline bool
 __i915_gem_active_is_busy(const struct i915_gem_active *active)
 {
-	return i915_gem_active_peek(active);
+	return __i915_gem_active_peek(active);
 }
 
 /**
@@ -346,11 +352,12 @@ __i915_gem_active_is_busy(const struct i915_gem_active *active)
  * the caller to hold struct_mutex (but that can be relaxed if desired).
  */
 static inline bool
-i915_gem_active_is_idle(const struct i915_gem_active *active)
+i915_gem_active_is_idle(const struct i915_gem_active *active,
+			struct mutex *mutex)
 {
 	struct drm_i915_gem_request *request;
 
-	request = i915_gem_active_peek(active);
+	request = i915_gem_active_peek(active, mutex);
 	if (!request || i915_gem_request_completed(request))
 		return true;
 
@@ -365,11 +372,11 @@ i915_gem_active_is_idle(const struct i915_gem_active *active)
  * returning.
  */
 static inline int __must_check
-i915_gem_active_wait(const struct i915_gem_active *active)
+i915_gem_active_wait(const struct i915_gem_active *active, struct mutex *mutex)
 {
 	struct drm_i915_gem_request *request;
 
-	request = i915_gem_active_peek(active);
+	request = i915_gem_active_peek(active, mutex);
 	if (!request)
 		return 0;
 
@@ -384,9 +391,10 @@ i915_gem_active_wait(const struct i915_gem_active *active)
  * make sure the request is retired before returning.
  */
 static inline int __must_check
-i915_gem_active_retire(const struct i915_gem_active *active)
+i915_gem_active_retire(const struct i915_gem_active *active,
+		       struct mutex *mutex)
 {
-	return i915_gem_active_wait(active);
+	return i915_gem_active_wait(active, mutex);
 }
 
 /* Convenience functions for peeking at state inside active's request whilst
@@ -394,15 +402,17 @@ i915_gem_active_retire(const struct i915_gem_active *active)
  */
 
 static inline uint32_t
-i915_gem_active_get_seqno(const struct i915_gem_active *active)
+i915_gem_active_get_seqno(const struct i915_gem_active *active,
+			  struct mutex *mutex)
 {
-	return i915_gem_request_get_seqno(i915_gem_active_peek(active));
+	return i915_gem_request_get_seqno(i915_gem_active_peek(active, mutex));
 }
 
 static inline struct intel_engine_cs *
-i915_gem_active_get_engine(const struct i915_gem_active *active)
+i915_gem_active_get_engine(const struct i915_gem_active *active,
+			   struct mutex *mutex)
 {
-	return i915_gem_request_get_engine(i915_gem_active_peek(active));
+	return i915_gem_request_get_engine(i915_gem_active_peek(active, mutex));
 }
 
 #define for_each_active(mask, idx) \
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index 9bc824421b66..326de7eae101 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -242,7 +242,8 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
 			}
 
 			obj->fence_dirty =
-				!i915_gem_active_is_idle(&obj->last_fence) ||
+				!i915_gem_active_is_idle(&obj->last_fence,
+							 &dev->struct_mutex) ||
 				obj->fence_reg != I915_FENCE_REG_NONE;
 
 			obj->tiling_mode = args->tiling_mode;
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index d688558606f9..dd6d823ac3e2 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -74,7 +74,8 @@ static void wait_rendering(struct drm_i915_gem_object *obj)
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		struct drm_i915_gem_request *req;
 
-		req = i915_gem_active_get(&obj->last_read[i]);
+		req = i915_gem_active_get(&obj->last_read[i],
+					  &obj->base.dev->struct_mutex);
 		if (req)
 			requests[n++] = req;
 	}
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 1abcf316a825..1bcdda9680d4 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -740,18 +740,38 @@ unwind:
 #define i915_error_ggtt_object_create(dev_priv, src) \
 	i915_error_object_create((dev_priv), (src), &(dev_priv)->ggtt.base)
 
+/* The error capture is special as it tries to run underneath the normal
+ * locking rules - so we use the raw version of the i915_gem_active lookup.
+ */
+static inline uint32_t
+__active_get_seqno(struct i915_gem_active *active)
+{
+	return i915_gem_request_get_seqno(__i915_gem_active_peek(active));
+}
+
+static inline int
+__active_get_engine_id(struct i915_gem_active *active)
+{
+	struct intel_engine_cs *engine;
+
+	engine = i915_gem_request_get_engine(__i915_gem_active_peek(active));
+	return engine ? engine->id : -1;
+}
+
 static void capture_bo(struct drm_i915_error_buffer *err,
 		       struct i915_vma *vma)
 {
 	struct drm_i915_gem_object *obj = vma->obj;
-	struct intel_engine_cs *engine;
 	int i;
 
 	err->size = obj->base.size;
 	err->name = obj->base.name;
+
 	for (i = 0; i < I915_NUM_ENGINES; i++)
-		err->rseqno[i] = i915_gem_active_get_seqno(&obj->last_read[i]);
-	err->wseqno = i915_gem_active_get_seqno(&obj->last_write);
+		err->rseqno[i] = __active_get_seqno(&obj->last_read[i]);
+	err->wseqno = __active_get_seqno(&obj->last_write);
+	err->ring = __active_get_engine_id(&obj->last_write);
+
 	err->gtt_offset = vma->node.start;
 	err->read_domains = obj->base.read_domains;
 	err->write_domain = obj->base.write_domain;
@@ -764,9 +784,6 @@ static void capture_bo(struct drm_i915_error_buffer *err,
 	err->purgeable = obj->madv != I915_MADV_WILLNEED;
 	err->userptr = obj->userptr.mm != NULL;
 	err->cache_level = obj->cache_level;
-
-	engine = i915_gem_active_get_engine(&obj->last_write);
-	err->ring = engine ? engine->id : -1;
 }
 
 static u32 capture_active_bo(struct drm_i915_error_buffer *err,
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 839c46a007b5..82533f1da54c 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11427,7 +11427,8 @@ static bool use_mmio_flip(struct intel_engine_cs *engine,
 						       false))
 		return true;
 	else
-		return engine != i915_gem_active_get_engine(&obj->last_write);
+		return engine != i915_gem_active_get_engine(&obj->last_write,
+							    &obj->base.dev->struct_mutex);
 }
 
 static void skl_do_mmio_flip(struct intel_crtc *intel_crtc,
@@ -11727,7 +11728,8 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 	} else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
 		engine = &dev_priv->engine[BCS];
 	} else if (INTEL_INFO(dev)->gen >= 7) {
-		engine = i915_gem_active_get_engine(&obj->last_write);
+		engine = i915_gem_active_get_engine(&obj->last_write,
+						    &obj->base.dev->struct_mutex);
 		if (engine == NULL || engine->id != RCS)
 			engine = &dev_priv->engine[BCS];
 	} else {
@@ -11748,7 +11750,8 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 	if (mmio_flip) {
 		INIT_WORK(&work->mmio_work, intel_mmio_flip_work_func);
 
-		work->flip_queued_req = i915_gem_active_get(&obj->last_write);
+		work->flip_queued_req = i915_gem_active_get(&obj->last_write,
+							    &obj->base.dev->struct_mutex);
 		schedule_work(&work->mmio_work);
 	} else {
 		request = i915_gem_request_alloc(engine, engine->last_context);
@@ -13970,7 +13973,8 @@ intel_prepare_plane_fb(struct drm_plane *plane,
 				to_intel_plane_state(new_state);
 
 			plane_state->wait_req =
-				i915_gem_active_get(&obj->last_write);
+				i915_gem_active_get(&obj->last_write,
+						    &obj->base.dev->struct_mutex);
 		}
 
 		i915_gem_track_fb(old_obj, obj, intel_plane->frontbuffer_bit);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 46/62] drm/i915: Refactor blocking waits
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (44 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 45/62] drm/i915: Mark up i915_gem_active for locking annotation Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 47/62] drm/i915: Rename request->list to link for consistency Chris Wilson
                   ` (17 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Tidy up the for loops that handle waiting for read/write vs read-only
access.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
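A minimal sketch of the bit-mask walk this relies on, for illustration
only: for_each_active_sketch() is a stand-in name, and this is not the
kernel's exact for_each_active() definition.

    #include <stdio.h>

    /* Visit the index of each set bit in mask, lowest bit first,
     * consuming the mask as we go.
     */
    #define for_each_active_sketch(mask, idx) \
            for (; (mask) ? ((idx) = __builtin_ctzl(mask)), 1 : 0; \
                 (mask) &= (mask) - 1) /* clear the lowest set bit */

    int main(void)
    {
            unsigned long active_mask = 0x0b; /* engines 0, 1, 3 busy */
            int idx;

            for_each_active_sketch(active_mask, idx)
                    printf("wait on last_read[%d]\n", idx); /* 0, 1, 3 */

            return 0;
    }

The wait paths below then point a single pointer at either the
obj->last_read[] array or at obj->last_write (with a mask of 1), so one
loop serves both the read/write and read-only cases.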
 drivers/gpu/drm/i915/i915_gem.c | 163 +++++++++++++++++++---------------------
 1 file changed, 78 insertions(+), 85 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 610378bd1be4..ad3330adfa41 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1105,6 +1105,23 @@ put_rpm:
 	return ret;
 }
 
+static void
+i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
+			       struct drm_i915_gem_request *req)
+{
+	int ring = req->engine->id;
+
+	if (i915_gem_active_peek(&obj->last_read[ring],
+				 &obj->base.dev->struct_mutex) == req)
+		i915_gem_object_retire__read(obj, ring);
+	else if (i915_gem_active_peek(&obj->last_write,
+				      &obj->base.dev->struct_mutex) == req)
+		i915_gem_object_retire__write(obj);
+
+	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
+		i915_gem_request_retire_upto(req);
+}
+
 /**
  * Ensures that all rendering to the object has completed and the object is
  * safe to unbind from the GTT or access from the CPU.
@@ -1113,61 +1130,40 @@ int
 i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
 			       bool readonly)
 {
-	struct drm_i915_gem_request *request;
-	int ret, i;
+	struct i915_gem_active *active;
+	unsigned long active_mask;
+	int idx;
 
-	if (!obj->active)
-		return 0;
+	lockdep_assert_held(&obj->base.dev->struct_mutex);
 
-	if (readonly) {
-		request = i915_gem_active_peek(&obj->last_write,
-					       &obj->base.dev->struct_mutex);
-		if (request) {
-			ret = i915_wait_request(request);
-			if (ret)
-				return ret;
+	active_mask = obj->active;
+	if (!active_mask)
+		return 0;
 
-			i = request->engine->id;
-			if (i915_gem_active_peek(&obj->last_read[i],
-						 &obj->base.dev->struct_mutex) == request)
-				i915_gem_object_retire__read(obj, i);
-			else
-				i915_gem_object_retire__write(obj);
-		}
+	if (!readonly) {
+		active = obj->last_read;
 	} else {
-		for (i = 0; i < I915_NUM_ENGINES; i++) {
-			request = i915_gem_active_peek(&obj->last_read[i],
-						       &obj->base.dev->struct_mutex);
-			if (!request)
-				continue;
-
-			ret = i915_wait_request(request);
-			if (ret)
-				return ret;
-
-			i915_gem_object_retire__read(obj, i);
-		}
-		GEM_BUG_ON(obj->active);
+		active_mask = 1;
+		active = &obj->last_write;
 	}
 
-	return 0;
-}
+	for_each_active(active_mask, idx) {
+		struct drm_i915_gem_request *request;
+		int ret;
 
-static void
-i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
-			       struct drm_i915_gem_request *req)
-{
-	int ring = req->engine->id;
+		request = i915_gem_active_peek(&active[idx],
+					       &obj->base.dev->struct_mutex);
+		if (!request)
+			continue;
 
-	if (i915_gem_active_peek(&obj->last_read[ring],
-				 &obj->base.dev->struct_mutex) == req)
-		i915_gem_object_retire__read(obj, ring);
-	else if (i915_gem_active_peek(&obj->last_write,
-				      &obj->base.dev->struct_mutex) == req)
-		i915_gem_object_retire__write(obj);
+		ret = i915_wait_request(request);
+		if (ret)
+			return ret;
 
-	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
-		i915_gem_request_retire_upto(req);
+		i915_gem_object_retire_request(obj, request);
+	}
+
+	return 0;
 }
 
 /* A nonblocking variant of the above wait. This is a highly dangerous routine
@@ -1181,34 +1177,31 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 	struct drm_device *dev = obj->base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_i915_gem_request *requests[I915_NUM_ENGINES];
+	struct i915_gem_active *active;
+	unsigned long active_mask;
 	int ret, i, n = 0;
 
 	BUG_ON(!mutex_is_locked(&dev->struct_mutex));
 	BUG_ON(!dev_priv->mm.interruptible);
 
-	if (!obj->active)
+	active_mask = obj->active;
+	if (!active_mask)
 		return 0;
 
-	if (readonly) {
-		struct drm_i915_gem_request *req;
-
-		req = i915_gem_active_peek(&obj->last_write,
-					   &obj->base.dev->struct_mutex);
-		if (req == NULL)
-			return 0;
-
-		requests[n++] = req;
+	if (!readonly) {
+		active = obj->last_read;
 	} else {
-		for (i = 0; i < I915_NUM_ENGINES; i++) {
-			struct drm_i915_gem_request *req;
+		active_mask = 1;
+		active = &obj->last_write;
+	}
 
-			req = i915_gem_active_peek(&obj->last_read[i],
-						   &obj->base.dev->struct_mutex);
-			if (req == NULL)
-				continue;
+	for_each_active(active_mask, i) {
+		struct drm_i915_gem_request *req;
 
+		req = i915_gem_active_get(&active[i],
+					  &obj->base.dev->struct_mutex);
+		if (req)
 			requests[n++] = req;
-		}
 	}
 
 	mutex_unlock(&dev->struct_mutex);
@@ -2631,33 +2624,33 @@ int
 i915_gem_object_sync(struct drm_i915_gem_object *obj,
 		     struct drm_i915_gem_request *to)
 {
-	const bool readonly = obj->base.pending_write_domain == 0;
-	struct drm_i915_gem_request *requests[I915_NUM_ENGINES];
-	int ret, i, n;
+	struct i915_gem_active *active;
+	unsigned long active_mask;
+	int idx;
 
-	if (!obj->active)
-		return 0;
+	lockdep_assert_held(&obj->base.dev->struct_mutex);
 
-	n = 0;
-	if (readonly) {
-		struct drm_i915_gem_request *req;
+	active_mask = obj->active;
+	if (!active_mask)
+		return 0;
 
-		req = i915_gem_active_peek(&obj->last_write,
-					   &obj->base.dev->struct_mutex);
-		if (req)
-			requests[n++] = req;
+	if (obj->base.pending_write_domain) {
+		active = obj->last_read;
 	} else {
-		for (i = 0; i < I915_NUM_ENGINES; i++) {
-			struct drm_i915_gem_request *req;
-
-			req = i915_gem_active_peek(&obj->last_read[i],
-						   &obj->base.dev->struct_mutex);
-			if (req)
-				requests[n++] = req;
-		}
+		active_mask = 1;
+		active = &obj->last_write;
 	}
-	for (i = 0; i < n; i++) {
-		ret = __i915_gem_object_sync(obj, to, requests[i]);
+
+	for_each_active(active_mask, idx) {
+		struct drm_i915_gem_request *request;
+		int ret;
+
+		request = i915_gem_active_peek(&active[idx],
+					       &obj->base.dev->struct_mutex);
+		if (!request)
+			continue;
+
+		ret = __i915_gem_object_sync(obj, to, request);
 		if (ret)
 			return ret;
 	}
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 47/62] drm/i915: Rename request->list to link for consistency
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (45 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 46/62] drm/i915: Refactor blocking waits Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 48/62] drm/i915: Remove obsolete i915_gem_object_flush_active() Chris Wilson
                   ` (16 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

We use "list" to denote the list and "link" to denote an element on that
list. Rename request->list to match this idiom.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
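The idiom in miniature (a sketch with simplified, hypothetical types
built on <linux/list.h>; not the driver's real structures):

    #include <linux/list.h>

    struct sketch_engine {
            struct list_head request_list;  /* the list itself */
    };

    struct sketch_request {
            struct list_head link;          /* our entry on that list */
    };

    /* Iteration then reads naturally: each request via its link. */
    static int count_requests(struct sketch_engine *engine)
    {
            struct sketch_request *req;
            int count = 0;

            list_for_each_entry(req, &engine->request_list, link)
                    count++;

            return count;
    }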
 drivers/gpu/drm/i915/i915_debugfs.c     |  4 ++--
 drivers/gpu/drm/i915/i915_gem.c         | 10 +++++-----
 drivers/gpu/drm/i915/i915_gem_request.c | 10 +++++-----
 drivers/gpu/drm/i915/i915_gem_request.h |  4 ++--
 drivers/gpu/drm/i915/i915_gpu_error.c   |  4 ++--
 drivers/gpu/drm/i915/intel_ringbuffer.c |  6 +++---
 6 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index d35454d5683e..345caf2e1841 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -760,13 +760,13 @@ static int i915_gem_request_info(struct seq_file *m, void *data)
 		int count;
 
 		count = 0;
-		list_for_each_entry(req, &engine->request_list, list)
+		list_for_each_entry(req, &engine->request_list, link)
 			count++;
 		if (count == 0)
 			continue;
 
 		seq_printf(m, "%s requests: %d\n", engine->name, count);
-		list_for_each_entry(req, &engine->request_list, list) {
+		list_for_each_entry(req, &engine->request_list, link) {
 			struct task_struct *task;
 
 			rcu_read_lock();
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ad3330adfa41..2bddd1386788 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2210,7 +2210,7 @@ i915_gem_find_active_request(struct intel_engine_cs *engine)
 	 * extra delay for a recent interrupt is pointless. Hence, we do
 	 * not need an engine->irq_seqno_barrier() before the seqno reads.
 	 */
-	list_for_each_entry(request, &engine->request_list, list) {
+	list_for_each_entry(request, &engine->request_list, link) {
 		if (i915_gem_request_completed(request))
 			continue;
 
@@ -2232,7 +2232,7 @@ static void i915_gem_reset_engine_status(struct intel_engine_cs *engine)
 	ring_hung = engine->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG;
 
 	i915_set_reset_status(request->ctx, ring_hung);
-	list_for_each_entry_continue(request, &engine->request_list, list)
+	list_for_each_entry_continue(request, &engine->request_list, link)
 		i915_set_reset_status(request->ctx, false);
 }
 
@@ -2275,7 +2275,7 @@ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
 
 		request = list_last_entry(&engine->request_list,
 					  struct drm_i915_gem_request,
-					  list);
+					  link);
 
 		i915_gem_request_retire_upto(request);
 	}
@@ -2336,7 +2336,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
 
 		request = list_first_entry(&engine->request_list,
 					   struct drm_i915_gem_request,
-					   list);
+					   link);
 
 		if (!i915_gem_request_completed(request))
 			break;
@@ -2356,7 +2356,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
 				       engine_list[engine->id]);
 
 		if (!list_empty(&i915_gem_active_peek(&obj->last_read[engine->id],
-						      &obj->base.dev->struct_mutex)->list))
+						      &obj->base.dev->struct_mutex)->link))
 			break;
 
 		i915_gem_object_retire__read(obj, engine->id);
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 1e9515cfb506..20ad95d9a65f 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -314,7 +314,7 @@ i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
 static void i915_gem_request_retire(struct drm_i915_gem_request *request)
 {
 	trace_i915_gem_request_retire(request);
-	list_del_init(&request->list);
+	list_del_init(&request->link);
 
 	/* We know the GPU must have read the request to have
 	 * sent us the seqno + interrupt, so use the position
@@ -345,12 +345,12 @@ void i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
 
 	lockdep_assert_held(&req->i915->dev->struct_mutex);
 
-	if (list_empty(&req->list))
+	if (list_empty(&req->link))
 		return;
 
 	do {
 		tmp = list_first_entry(&engine->request_list,
-				       typeof(*tmp), list);
+				       typeof(*tmp), link);
 
 		i915_gem_request_retire(tmp);
 	} while (tmp != req);
@@ -443,7 +443,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
 	request->emitted_jiffies = jiffies;
 	request->previous_seqno = engine->last_submitted_seqno;
 	smp_store_mb(engine->last_submitted_seqno, request->fence.seqno);
-	list_add_tail(&request->list, &engine->request_list);
+	list_add_tail(&request->link, &engine->request_list);
 
 	/* Record the position of the start of the request so that
 	 * should we detect the updated seqno part-way through the
@@ -563,7 +563,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
 
 	might_sleep();
 
-	if (list_empty(&req->list))
+	if (list_empty(&req->link))
 		return 0;
 
 	if (i915_gem_request_completed(req))
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index d6b8e801bb93..1599e7bc3e48 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -105,8 +105,8 @@ struct drm_i915_gem_request {
 	/** Time at which this request was emitted, in jiffies. */
 	unsigned long emitted_jiffies;
 
-	/** global list entry for this request */
-	struct list_head list;
+	/** engine->request_list entry for this request */
+	struct list_head link;
 
 	struct drm_i915_file_private *file_priv;
 	/** file_priv list entry for this request */
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 1bcdda9680d4..70f2911cd78f 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1162,7 +1162,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 		i915_gem_record_active_context(engine, error, &error->ring[i]);
 
 		count = 0;
-		list_for_each_entry(request, &engine->request_list, list)
+		list_for_each_entry(request, &engine->request_list, link)
 			count++;
 
 		error->ring[i].num_requests = count;
@@ -1175,7 +1175,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
 		}
 
 		count = 0;
-		list_for_each_entry(request, &engine->request_list, list) {
+		list_for_each_entry(request, &engine->request_list, link) {
 			struct drm_i915_error_request *erq;
 
 			if (count >= error->ring[i].num_requests) {
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 8d6249701137..f6a6306a598d 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2162,7 +2162,7 @@ int intel_engine_idle(struct intel_engine_cs *engine)
 
 	req = list_entry(engine->request_list.prev,
 			 struct drm_i915_gem_request,
-			 list);
+			 link);
 
 	/* Make sure we do not trigger any retires */
 	return __i915_wait_request(req,
@@ -2211,7 +2211,7 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
 	 */
 	GEM_BUG_ON(!req->reserved_space);
 
-	list_for_each_entry(target, &engine->request_list, list) {
+	list_for_each_entry(target, &engine->request_list, link) {
 		unsigned space;
 
 		/*
@@ -2229,7 +2229,7 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
 			break;
 	}
 
-	if (WARN_ON(&target->list == &engine->request_list))
+	if (WARN_ON(&target->link == &engine->request_list))
 		return -ENOSPC;
 
 	return i915_wait_request(target);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 48/62] drm/i915: Remove obsolete i915_gem_object_flush_active()
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (46 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 47/62] drm/i915: Rename request->list to link for consistency Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 49/62] drm/i915: Refactor activity tracking for requests Chris Wilson
                   ` (15 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Since we track requests, and requests are always added to the GPU fully
formed, we never have to flush an incomplete request, and we know that
any given request will eventually complete without further action on our
part.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
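For reference, the polling pattern that remains (condensed from the
i915_gem_wait_ioctl() hunks below; declarations and error handling are
elided, so read it as a fragment rather than a drop-in):

    /* Snapshot the active requests under the lock, taking references
     * so they survive the unlock...
     */
    for (i = 0; i < I915_NUM_ENGINES; i++) {
            struct drm_i915_gem_request *req;

            req = i915_gem_active_get(&obj->last_read[i],
                                      &dev->struct_mutex);
            if (req)
                    requests[n++] = req;
    }
    mutex_unlock(&dev->struct_mutex);

    /* ...then wait outside the lock. No flush is needed: every
     * submitted request completes and retires on its own.
     */
    for (i = 0; i < n; i++) {
            if (ret == 0)
                    ret = __i915_wait_request(requests[i], true,
                                              &args->timeout_ns, rps);
            i915_gem_request_put(requests[i]);
    }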
 drivers/gpu/drm/i915/i915_gem.c | 58 +++--------------------------------------
 1 file changed, 3 insertions(+), 55 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 2bddd1386788..f517bc151af1 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2447,34 +2447,6 @@ out:
 }
 
 /**
- * Ensures that an object will eventually get non-busy by flushing any required
- * write domains, emitting any outstanding lazy request and retiring and
- * completed requests.
- */
-static int
-i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
-{
-	int i;
-
-	if (!obj->active)
-		return 0;
-
-	for (i = 0; i < I915_NUM_ENGINES; i++) {
-		struct drm_i915_gem_request *req;
-
-		req = i915_gem_active_peek(&obj->last_read[i],
-					   &obj->base.dev->struct_mutex);
-		if (req == NULL)
-			continue;
-
-		if (i915_gem_request_completed(req))
-			i915_gem_object_retire__read(obj, i);
-	}
-
-	return 0;
-}
-
-/**
  * i915_gem_wait_ioctl - implements DRM_IOCTL_I915_GEM_WAIT
  * @DRM_IOCTL_ARGS: standard ioctl arguments
  *
@@ -2518,24 +2490,9 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		return -ENOENT;
 	}
 
-	/* Need to make sure the object gets inactive eventually. */
-	ret = i915_gem_object_flush_active(obj);
-	if (ret)
-		goto out;
-
 	if (!obj->active)
 		goto out;
 
-	/* Do this after OLR check to make sure we make forward progress polling
-	 * on this IOCTL with a timeout == 0 (like busy ioctl)
-	 */
-	if (args->timeout_ns == 0) {
-		ret = -ETIME;
-		goto out;
-	}
-
-	i915_gem_object_put(obj);
-
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
 		struct drm_i915_gem_request *req;
 
@@ -2545,6 +2502,8 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 			requests[n++] = req;
 	}
 
+out:
+	i915_gem_object_put(obj);
 	mutex_unlock(&dev->struct_mutex);
 
 	for (i = 0; i < n; i++) {
@@ -2555,11 +2514,6 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		i915_gem_request_put(requests[i]);
 	}
 	return ret;
-
-out:
-	i915_gem_object_put(obj);
-	mutex_unlock(&dev->struct_mutex);
-	return ret;
 }
 
 static int
@@ -3714,13 +3668,8 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 
 	/* Count all active objects as busy, even if they are currently not used
 	 * by the gpu. Users of this interface expect objects to eventually
-	 * become non-busy without any further actions, therefore emit any
-	 * necessary flushes here.
+	 * become non-busy without any further actions.
 	 */
-	ret = i915_gem_object_flush_active(obj);
-	if (ret)
-		goto unref;
-
 	args->busy = 0;
 	if (obj->active) {
 		struct drm_i915_gem_request *req;
@@ -3738,7 +3687,6 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 			args->busy |= req->engine->exec_id;
 	}
 
-unref:
 	i915_gem_object_put(obj);
 unlock:
 	mutex_unlock(&dev->struct_mutex);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 49/62] drm/i915: Refactor activity tracking for requests
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (47 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 48/62] drm/i915: Remove obsolete i915_gem_object_flush_active() Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 50/62] drm/i915: Double check activity before relocations Chris Wilson
                   ` (14 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

With the introduction of requests, we amplified the number of atomic
refcounted objects we use and update on every execbuffer; from none to
several references, and a set of references that need to be changed. We
also introduced interesting side-effects in the order of retiring
requests and objects.

Instead of independently tracking the last request for an object, track
the active objects for each request. The object will reside in the
buffer list of its most recent active request and so we reduce the kref
interchange to a list_move. Now retirements are entirely driven by the
request, dramatically simplifying activity tracking on the objects
themselves, and removing the ambiguity between retiring objects and
retiring requests.

Furthermore, with the activity tracking managed centrally, we can look
forward to using RCU to enable lockless lookup of
the current active requests for an object. In the future, we will be
able to query the status or wait upon rendering to an object without
even touching the struct_mutex BKL.

All told, less code, simpler and faster, and more extensible.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
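The scheme reduced to a sketch (names simplified and hypothetical; the
real implementation is the i915_gem_active changes below):

    #include <linux/list.h>

    struct sketch_request {
            struct list_head active_list; /* trackers watching this rq */
    };

    struct sketch_active {
            struct sketch_request *request;
            struct list_head link;  /* initialised with INIT_LIST_HEAD() */
            void (*retire)(struct sketch_active *, struct sketch_request *);
    };

    /* Retargeting a tracker is a list_move, not a kref interchange. */
    static void sketch_active_set(struct sketch_active *active,
                                  struct sketch_request *rq)
    {
            list_move(&active->link, &rq->active_list);
            active->request = rq;
    }

    /* Retirement is driven by the request: decouple each node first,
     * as the callback may free it.
     */
    static void sketch_request_retire(struct sketch_request *rq)
    {
            struct sketch_active *active, *next;

            list_for_each_entry_safe(active, next, &rq->active_list, link) {
                    list_del_init(&active->link);
                    active->request = NULL;
                    active->retire(active, rq);
            }
    }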
 drivers/gpu/drm/i915/Makefile           |   1 -
 drivers/gpu/drm/i915/i915_drv.h         |  10 ---
 drivers/gpu/drm/i915/i915_gem.c         | 135 +++++++-------------------------
 drivers/gpu/drm/i915/i915_gem_debug.c   |  70 -----------------
 drivers/gpu/drm/i915/i915_gem_fence.c   |   9 +--
 drivers/gpu/drm/i915/i915_gem_request.c |  39 ++++++---
 drivers/gpu/drm/i915/i915_gem_request.h |  73 +++++++++++------
 drivers/gpu/drm/i915/intel_lrc.c        |   1 -
 drivers/gpu/drm/i915/intel_ringbuffer.c |   1 -
 drivers/gpu/drm/i915/intel_ringbuffer.h |  12 ---
 10 files changed, 105 insertions(+), 246 deletions(-)
 delete mode 100644 drivers/gpu/drm/i915/i915_gem_debug.c

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 14cef1d2343c..99347343ac59 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -24,7 +24,6 @@ i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o
 i915-y += i915_cmd_parser.o \
 	  i915_gem_batch_pool.o \
 	  i915_gem_context.o \
-	  i915_gem_debug.o \
 	  i915_gem_dmabuf.o \
 	  i915_gem_evict.o \
 	  i915_gem_execbuffer.o \
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b8df48e0e32b..089415f51a0b 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -421,8 +421,6 @@ void intel_link_compute_m_n(int bpp, int nlanes,
 #define DRIVER_MINOR		6
 #define DRIVER_PATCHLEVEL	0
 
-#define WATCH_LISTS	0
-
 struct opregion_header;
 struct opregion_acpi;
 struct opregion_swsci;
@@ -2134,7 +2132,6 @@ struct drm_i915_gem_object {
 	struct drm_mm_node *stolen;
 	struct list_head global_list;
 
-	struct list_head engine_list[I915_NUM_ENGINES];
 	/** Used in execbuf to temporarily hold a ref */
 	struct list_head obj_exec_link;
 
@@ -3354,13 +3351,6 @@ static inline bool i915_gem_object_needs_bit17_swizzle(struct drm_i915_gem_objec
 		obj->tiling_mode != I915_TILING_NONE;
 }
 
-/* i915_gem_debug.c */
-#if WATCH_LISTS
-int i915_verify_lists(struct drm_device *dev);
-#else
-#define i915_verify_lists(dev) 0
-#endif
-
 /* i915_debugfs.c */
 #ifdef CONFIG_DEBUG_FS
 int i915_debugfs_register(struct drm_i915_private *dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f517bc151af1..3b3a3b834e80 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -41,10 +41,6 @@
 
 static void i915_gem_object_flush_gtt_write_domain(struct drm_i915_gem_object *obj);
 static void i915_gem_object_flush_cpu_write_domain(struct drm_i915_gem_object *obj);
-static void
-i915_gem_object_retire__write(struct drm_i915_gem_object *obj);
-static void
-i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring);
 
 static bool cpu_cache_is_coherent(struct drm_device *dev,
 				  enum i915_cache_level level)
@@ -118,7 +114,6 @@ int i915_mutex_lock_interruptible(struct drm_device *dev)
 	if (ret)
 		return ret;
 
-	WARN_ON(i915_verify_lists(dev));
 	return 0;
 }
 
@@ -1105,23 +1100,6 @@ put_rpm:
 	return ret;
 }
 
-static void
-i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
-			       struct drm_i915_gem_request *req)
-{
-	int ring = req->engine->id;
-
-	if (i915_gem_active_peek(&obj->last_read[ring],
-				 &obj->base.dev->struct_mutex) == req)
-		i915_gem_object_retire__read(obj, ring);
-	else if (i915_gem_active_peek(&obj->last_write,
-				      &obj->base.dev->struct_mutex) == req)
-		i915_gem_object_retire__write(obj);
-
-	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
-		i915_gem_request_retire_upto(req);
-}
-
 /**
  * Ensures that all rendering to the object has completed and the object is
  * safe to unbind from the GTT or access from the CPU.
@@ -1148,19 +1126,10 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
 	}
 
 	for_each_active(active_mask, idx) {
-		struct drm_i915_gem_request *request;
-		int ret;
-
-		request = i915_gem_active_peek(&active[idx],
-					       &obj->base.dev->struct_mutex);
-		if (!request)
-			continue;
-
-		ret = i915_wait_request(request);
+		int ret = i915_gem_active_retire(&active[idx],
+						 &obj->base.dev->struct_mutex);
 		if (ret)
 			return ret;
-
-		i915_gem_object_retire_request(obj, request);
 	}
 
 	return 0;
@@ -1210,11 +1179,8 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
 		ret = __i915_wait_request(requests[i], true, NULL, rps);
 	mutex_lock(&dev->struct_mutex);
 
-	for (i = 0; i < n; i++) {
-		if (ret == 0)
-			i915_gem_object_retire_request(obj, requests[i]);
+	for (i = 0; i < n; i++)
 		i915_gem_request_put(requests[i]);
-	}
 
 	return ret;
 }
@@ -2111,40 +2077,38 @@ void i915_vma_move_to_active(struct i915_vma *vma,
 		i915_gem_object_get(obj);
 	obj->active |= intel_engine_flag(engine);
 
-	list_move_tail(&obj->engine_list[engine->id], &engine->active_list);
 	i915_gem_active_set(&obj->last_read[engine->id], req);
 
 	list_move_tail(&vma->vm_link, &vma->vm->active_list);
 }
 
 static void
-i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
+i915_gem_object_retire__fence(struct i915_gem_active *active,
+			      struct drm_i915_gem_request *req)
 {
-	GEM_BUG_ON(!__i915_gem_active_is_busy(&obj->last_write));
-	GEM_BUG_ON(!(obj->active &
-		     intel_engine_flag(i915_gem_active_get_engine(&obj->last_write,
-								  &obj->base.dev->struct_mutex))));
+}
 
-	i915_gem_active_set(&obj->last_write, NULL);
-	intel_fb_obj_flush(obj, true, ORIGIN_CS);
+static void
+i915_gem_object_retire__write(struct i915_gem_active *active,
+			      struct drm_i915_gem_request *request)
+{
+	intel_fb_obj_flush(container_of(active,
+					struct drm_i915_gem_object,
+					last_write),
+			   true,
+			   ORIGIN_CS);
 }
 
 static void
-i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
+i915_gem_object_retire__read(struct i915_gem_active *active,
+			     struct drm_i915_gem_request *request)
 {
-	struct intel_engine_cs *engine;
+	int ring = request->engine->id;
+	struct drm_i915_gem_object *obj =
+		container_of(active, struct drm_i915_gem_object, last_read[ring]);
 	struct i915_vma *vma;
 
-	GEM_BUG_ON(!__i915_gem_active_is_busy(&obj->last_read[ring]));
-	GEM_BUG_ON(!(obj->active & (1 << ring)));
-
-	list_del_init(&obj->engine_list[ring]);
-	i915_gem_active_set(&obj->last_read[ring], NULL);
-
-	engine = i915_gem_active_get_engine(&obj->last_write,
-					    &obj->base.dev->struct_mutex);
-	if (engine && engine->id == ring)
-		i915_gem_object_retire__write(obj);
+	GEM_BUG_ON((obj->active & (1 << ring)) == 0);
 
 	obj->active &= ~(1 << ring);
 	if (obj->active)
@@ -2154,15 +2118,13 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
 	 * so that we don't steal from recently used but inactive objects
 	 * (unless we are forced to ofc!)
 	 */
-	list_move_tail(&obj->global_list,
-		       &to_i915(obj->base.dev)->mm.bound_list);
+	list_move_tail(&obj->global_list, &request->i915->mm.bound_list);
 
 	list_for_each_entry(vma, &obj->vma_list, obj_link) {
 		if (!list_empty(&vma->vm_link))
 			list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
 	}
 
-	i915_gem_active_set(&obj->last_fence, NULL);
 	i915_gem_object_put(obj);
 }
 
@@ -2240,16 +2202,6 @@ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
 {
 	struct intel_ring *ring;
 
-	while (!list_empty(&engine->active_list)) {
-		struct drm_i915_gem_object *obj;
-
-		obj = list_first_entry(&engine->active_list,
-				       struct drm_i915_gem_object,
-				       engine_list[engine->id]);
-
-		i915_gem_object_retire__read(obj, engine->id);
-	}
-
 	/*
 	 * Clear the execlists queue up before freeing the requests, as those
 	 * are the ones that keep the context and ringbuffer backing objects
@@ -2314,8 +2266,6 @@ void i915_gem_reset(struct drm_device *dev)
 	i915_gem_context_reset(dev);
 
 	i915_gem_restore_fences(dev);
-
-	WARN_ON(i915_verify_lists(dev));
 }
 
 /**
@@ -2324,13 +2274,6 @@ void i915_gem_reset(struct drm_device *dev)
 void
 i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
 {
-	WARN_ON(i915_verify_lists(engine->dev));
-
-	/* Retire requests first as we use it above for the early return.
-	 * If we retire requests last, we may use a later seqno and so clear
-	 * the requests lists without clearing the active list, leading to
-	 * confusion.
-	 */
 	while (!list_empty(&engine->request_list)) {
 		struct drm_i915_gem_request *request;
 
@@ -2343,26 +2286,6 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
 
 		i915_gem_request_retire_upto(request);
 	}
-
-	/* Move any buffers on the active list that are no longer referenced
-	 * by the ringbuffer to the flushing/inactive lists as appropriate,
-	 * before we free the context associated with the requests.
-	 */
-	while (!list_empty(&engine->active_list)) {
-		struct drm_i915_gem_object *obj;
-
-		obj = list_first_entry(&engine->active_list,
-				       struct drm_i915_gem_object,
-				       engine_list[engine->id]);
-
-		if (!list_empty(&i915_gem_active_peek(&obj->last_read[engine->id],
-						      &obj->base.dev->struct_mutex)->link))
-			break;
-
-		i915_gem_object_retire__read(obj, engine->id);
-	}
-
-	WARN_ON(i915_verify_lists(engine->dev));
 }
 
 void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
@@ -2526,9 +2449,6 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
 	if (to->engine == from->engine)
 		return 0;
 
-	if (i915_gem_request_completed(from))
-		return 0;
-
 	if (!i915.semaphores) {
 		ret = __i915_wait_request(from,
 					  from->i915->mm.interruptible,
@@ -2536,8 +2456,6 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
 					  NO_WAITBOOST);
 		if (ret)
 			return ret;
-
-		i915_gem_object_retire_request(obj, from);
 	} else {
 		int idx = intel_engine_sync_index(from->engine, to->engine);
 		if (from->fence.seqno <= from->engine->semaphore.sync_seqno[idx])
@@ -2739,7 +2657,6 @@ int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv)
 			return ret;
 	}
 
-	WARN_ON(i915_verify_lists(dev));
 	return 0;
 }
 
@@ -3764,7 +3681,12 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 
 	INIT_LIST_HEAD(&obj->global_list);
 	for (i = 0; i < I915_NUM_ENGINES; i++)
-		INIT_LIST_HEAD(&obj->engine_list[i]);
+		init_request_active(&obj->last_read[i],
+				    i915_gem_object_retire__read);
+	init_request_active(&obj->last_write,
+			    i915_gem_object_retire__write);
+	init_request_active(&obj->last_fence,
+			    i915_gem_object_retire__fence);
 	INIT_LIST_HEAD(&obj->obj_exec_link);
 	INIT_LIST_HEAD(&obj->vma_list);
 	INIT_LIST_HEAD(&obj->batch_pool_link);
@@ -4272,7 +4194,6 @@ i915_gem_cleanup_engines(struct drm_device *dev)
 static void
 init_engine_lists(struct intel_engine_cs *engine)
 {
-	INIT_LIST_HEAD(&engine->active_list);
 	INIT_LIST_HEAD(&engine->request_list);
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c
deleted file mode 100644
index a56516482394..000000000000
--- a/drivers/gpu/drm/i915/i915_gem_debug.c
+++ /dev/null
@@ -1,70 +0,0 @@
-/*
- * Copyright © 2008 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Keith Packard <keithp@keithp.com>
- *
- */
-
-#include <drm/drmP.h>
-#include <drm/i915_drm.h>
-#include "i915_drv.h"
-
-#if WATCH_LISTS
-int
-i915_verify_lists(struct drm_device *dev)
-{
-	static int warned;
-	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct drm_i915_gem_object *obj;
-	struct intel_engine_cs *engine;
-	int err = 0;
-
-	if (warned)
-		return 0;
-
-	for_each_engine(engine, dev_priv) {
-		list_for_each_entry(obj, &engine->active_list,
-				    engine_list[engine->id]) {
-			if (obj->base.dev != dev ||
-			    !atomic_read(&obj->base.refcount.refcount)) {
-				DRM_ERROR("%s: freed active obj %p\n",
-					  engine->name, obj);
-				err++;
-				break;
-			} else if (!obj->active ||
-				   obj->last_read_req[engine->id] == NULL) {
-				DRM_ERROR("%s: invalid active obj %p\n",
-					  engine->name, obj);
-				err++;
-			} else if (obj->base.write_domain) {
-				DRM_ERROR("%s: invalid write obj %p (w %x)\n",
-					  engine->name,
-					  obj, obj->base.write_domain);
-				err++;
-			}
-		}
-	}
-
-	return warned = err;
-}
-#endif /* WATCH_LIST */
diff --git a/drivers/gpu/drm/i915/i915_gem_fence.c b/drivers/gpu/drm/i915/i915_gem_fence.c
index 6c39da8dd6ea..ee91705734bc 100644
--- a/drivers/gpu/drm/i915/i915_gem_fence.c
+++ b/drivers/gpu/drm/i915/i915_gem_fence.c
@@ -261,15 +261,8 @@ static inline void i915_gem_object_fence_lost(struct drm_i915_gem_object *obj)
 static int
 i915_gem_object_wait_fence(struct drm_i915_gem_object *obj)
 {
-	int ret;
-
-	ret = i915_gem_active_wait(&obj->last_fence,
+	return i915_gem_active_wait(&obj->last_fence,
 				   &obj->base.dev->struct_mutex);
-	if (ret)
-		return ret;
-
-	i915_gem_active_set(&obj->last_fence, NULL);
-	return 0;
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 20ad95d9a65f..2e13934041f3 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -22,6 +22,8 @@
  *
  */
 
+#include <linux/prefetch.h>
+
 #include "i915_drv.h"
 
 static inline struct drm_i915_gem_request *
@@ -239,6 +241,7 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
 		   engine->fence_context,
 		   seqno);
 
+	INIT_LIST_HEAD(&req->active_list);
 	req->i915 = dev_priv;
 	req->engine = engine;
 	req->reset_counter = reset_counter;
@@ -313,6 +316,8 @@ i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
 
 static void i915_gem_request_retire(struct drm_i915_gem_request *request)
 {
+	struct i915_gem_active *active, *next;
+
 	trace_i915_gem_request_retire(request);
 	list_del_init(&request->link);
 
@@ -326,6 +331,24 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
 	 */
 	request->ring->last_retired_head = request->postfix;
 
+	/* Walk through the active list, calling retire on each. This allows
+	 * objects to track their GPU activity and mark themselves as idle
+	 * when their *last* active request is completed (updating state
+	 * tracking lists for eviction, active references for GEM, etc).
+	 *
+	 * As the ->retire() may free the node, we decouple it first and
+	 * pass along the auxiliary information (to avoid dereferencing
+	 * the node after the callback).
+	 */
+	list_for_each_entry_safe(active, next, &request->active_list, link) {
+		prefetchw(next);
+
+		INIT_LIST_HEAD(&active->link);
+		active->__request = NULL;
+
+		active->retire(active, request);
+	}
+
 	i915_gem_request_remove_from_client(request);
 
 	if (request->previous_context) {
@@ -344,7 +367,6 @@ void i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
 	struct drm_i915_gem_request *tmp;
 
 	lockdep_assert_held(&req->i915->dev->struct_mutex);
-
 	if (list_empty(&req->link))
 		return;
 
@@ -354,8 +376,6 @@ void i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
 
 		i915_gem_request_retire(tmp);
 	} while (tmp != req);
-
-	WARN_ON(i915_verify_lists(engine->dev));
 }
 
 static void i915_gem_mark_busy(struct drm_i915_private *dev_priv,
@@ -563,9 +583,6 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
 
 	might_sleep();
 
-	if (list_empty(&req->link))
-		return 0;
-
 	if (i915_gem_request_completed(req))
 		return 0;
 
@@ -700,11 +717,13 @@ int i915_wait_request(struct drm_i915_gem_request *req)
 {
 	int ret;
 
-	BUG_ON(req == NULL);
-	BUG_ON(!mutex_is_locked(&req->i915->dev->struct_mutex));
+	lockdep_assert_held(&req->i915->dev->struct_mutex);
+	GEM_BUG_ON(list_empty(&req->link));
 
-	ret = __i915_wait_request(req, req->i915->mm.interruptible,
-				  NULL, NULL);
+	ret = __i915_wait_request(req,
+				  req->i915->mm.interruptible,
+				  NULL,
+				  NULL);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 1599e7bc3e48..e794801baf07 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -101,6 +101,7 @@ struct drm_i915_gem_request {
 	/** Batch buffer related to this request if any (used for
 	 * error state dump only) */
 	struct drm_i915_gem_object *batch_obj;
+	struct list_head active_list;
 
 	/** Time at which this request was emitted, in jiffies. */
 	unsigned long emitted_jiffies;
@@ -209,8 +210,12 @@ struct intel_rps_client;
 int __i915_wait_request(struct drm_i915_gem_request *req,
 			bool interruptible,
 			s64 *timeout,
-			struct intel_rps_client *rps);
-int __must_check i915_wait_request(struct drm_i915_gem_request *req);
+			struct intel_rps_client *rps)
+	__attribute__((nonnull(1)));
+
+int __must_check
+i915_wait_request(struct drm_i915_gem_request *req)
+	__attribute__((nonnull));
 
 static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine);
 
@@ -272,6 +277,9 @@ static inline bool i915_spin_request(const struct drm_i915_gem_request *request,
  */
 struct i915_gem_active {
 	struct drm_i915_gem_request *__request;
+	struct list_head link;
+	void (*retire)(struct i915_gem_active *,
+		       struct drm_i915_gem_request *);
 };
 
 /**
@@ -284,10 +292,20 @@ struct i915_gem_active {
  * retired, the @active tracker is updated to report idle.
  */
 static inline void
+init_request_active(struct i915_gem_active *active,
+		    void (*func)(struct i915_gem_active *,
+				 struct drm_i915_gem_request *))
+{
+	INIT_LIST_HEAD(&active->link);
+	active->retire = func;
+}
+
+static inline void
 i915_gem_active_set(struct i915_gem_active *active,
 		    struct drm_i915_gem_request *request)
 {
-	i915_gem_request_assign(&active->__request, request);
+	list_move(&active->link, &request->active_list);
+	active->__request = request;
 }
 
 static inline struct drm_i915_gem_request *
@@ -297,17 +315,23 @@ __i915_gem_active_peek(const struct i915_gem_active *active)
 }
 
 /**
- * i915_gem_active_peek - report the request being monitored
+ * i915_gem_active_peek - report the active request being monitored
  * @active - the active tracker
  *
- * i915_gem_active_peek() returns the current request being tracked, or NULL.
- * It does not obtain a reference on the request for the caller, so the
- * caller must hold struct_mutex.
+ * i915_gem_active_peek() returns the current request being tracked if
+ * still active, or NULL. It does not obtain a reference on the request
+ * for the caller, so the caller must hold struct_mutex.
  */
 static inline struct drm_i915_gem_request *
 i915_gem_active_peek(const struct i915_gem_active *active, struct mutex *mutex)
 {
-	return active->__request;
+	struct drm_i915_gem_request *request;
+
+	request = active->__request;
+	if (!request || i915_gem_request_completed(request))
+		return NULL;
+
+	return request;
 }
 
 /**
@@ -320,13 +344,7 @@ i915_gem_active_peek(const struct i915_gem_active *active, struct mutex *mutex)
 static inline struct drm_i915_gem_request *
 i915_gem_active_get(const struct i915_gem_active *active, struct mutex *mutex)
 {
-	struct drm_i915_gem_request *request;
-
-	request = i915_gem_active_peek(active, mutex);
-	if (!request || i915_gem_request_completed(request))
-		return NULL;
-
-	return i915_gem_request_get(request);
+	return i915_gem_request_get(i915_gem_active_peek(active, mutex));
 }
 
 /**
@@ -355,13 +373,7 @@ static inline bool
 i915_gem_active_is_idle(const struct i915_gem_active *active,
 			struct mutex *mutex)
 {
-	struct drm_i915_gem_request *request;
-
-	request = i915_gem_active_peek(active, mutex);
-	if (!request || i915_gem_request_completed(request))
-		return true;
-
-	return false;
+	return !i915_gem_active_peek(active, mutex);
 }
 
 /**
@@ -369,7 +381,9 @@ i915_gem_active_is_idle(const struct i915_gem_active *active,
  * @active - the active request on which to wait
  *
  * i915_gem_active_wait() waits until the request is completed before
- * returning.
+ * returning. i915_gem_active_wait() returns immediately if the active
+ * request is already complete, that is it will not run the retirement
+ * callbacks unless it has to wait for a busy request.
  */
 static inline int __must_check
 i915_gem_active_wait(const struct i915_gem_active *active, struct mutex *mutex)
@@ -387,14 +401,21 @@ i915_gem_active_wait(const struct i915_gem_active *active, struct mutex *mutex)
  * i915_gem_active_retire - waits until the request is retired
  * @active - the active request on which to wait
  *
- * Unlike i915_gem_active_eait(), this i915_gem_active_retire() will
- * make sure the request is retired before returning.
+ * Unlike i915_gem_active_wait(), i915_gem_active_retire() will
+ * make sure the request is retired (i.e. has completed and run all the
+ * retirement callbacks) before returning.
  */
 static inline int __must_check
 i915_gem_active_retire(const struct i915_gem_active *active,
 		       struct mutex *mutex)
 {
-	return i915_gem_active_wait(active, mutex);
+	struct drm_i915_gem_request *request;
+
+	request = active->__request;
+	if (!request)
+		return 0;
+
+	return i915_wait_request(request);
 }
 
 /* Convenience functions for peeking at state inside active's request whilst
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 2fffba8c3acf..69fca2f27f8b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1785,7 +1785,6 @@ logical_ring_setup(struct drm_device *dev, enum intel_engine_id id)
 
 	engine->fw_domains = fw_domains;
 
-	INIT_LIST_HEAD(&engine->active_list);
 	INIT_LIST_HEAD(&engine->request_list);
 	INIT_LIST_HEAD(&engine->buffers);
 	INIT_LIST_HEAD(&engine->execlist_queue);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index f6a6306a598d..33d2c019576e 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2057,7 +2057,6 @@ static int intel_init_engine(struct drm_device *dev,
 
 	engine->i915 = dev_priv;
 	engine->fence_context = fence_context_alloc(1);
-	INIT_LIST_HEAD(&engine->active_list);
 	INIT_LIST_HEAD(&engine->request_list);
 	INIT_LIST_HEAD(&engine->execlist_queue);
 	INIT_LIST_HEAD(&engine->buffers);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index b6a5f48c016f..0976e155edc0 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -299,18 +299,6 @@ struct intel_engine_cs {
 	u32 ctx_desc_template;
 
 	/**
-	 * List of objects currently involved in rendering from the
-	 * ringbuffer.
-	 *
-	 * Includes buffers having the contents of their GPU caches
-	 * flushed, not necessarily primitives.  last_read_req
-	 * represents when the rendering involved will be completed.
-	 *
-	 * A reference is held on the buffer while on this list.
-	 */
-	struct list_head active_list;
-
-	/**
 	 * List of breadcrumbs associated with GPU requests currently
 	 * outstanding.
 	 */
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 50/62] drm/i915: Double check activity before relocations
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (48 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 49/62] drm/i915: Refactor activity tracking for requests Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 51/62] drm/i915: Move request list retirement to i915_gem_request.c Chris Wilson
                   ` (13 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

If the object is active and we need to perform a relocation upon it, we
need to take the slow relocation path. Before we do, double-check the
active requests to see if they have completed.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 5c7eb3c93a86..6fa13c618a6b 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -411,6 +411,20 @@ relocate_entry_clflush(struct drm_i915_gem_object *obj,
 	return 0;
 }
 
+static bool object_is_idle(struct drm_i915_gem_object *obj)
+{
+	unsigned long active = obj->active;
+	int idx;
+
+	for_each_active(active, idx) {
+		if (!i915_gem_active_is_idle(&obj->last_read[idx],
+					     &obj->base.dev->struct_mutex))
+			return false;
+	}
+
+	return true;
+}
+
 static int
 i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
 				   struct eb_vmas *eb,
@@ -494,7 +508,7 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
 	}
 
 	/* We can't wait for rendering with pagefaults disabled */
-	if (obj->active && pagefault_disabled())
+	if (pagefault_disabled() && !object_is_idle(obj))
 		return -EFAULT;
 
 	if (use_cpu_reloc(obj))
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 51/62] drm/i915: Move request list retirement to i915_gem_request.c
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (49 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 50/62] drm/i915: Double check activity before relocations Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 52/62] drm/i915: Amalgamate GGTT/ppGTT vma debug list walkers Chris Wilson
                   ` (12 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

As the list retirement is now free of implementation details, we can
move it closer to the rest of the request management code.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c         | 41 ---------------------------------
 drivers/gpu/drm/i915/i915_gem_request.c | 33 ++++++++++++++++++++++++++
 2 files changed, 33 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3b3a3b834e80..20e174f7fc9e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2268,47 +2268,6 @@ void i915_gem_reset(struct drm_device *dev)
 	i915_gem_restore_fences(dev);
 }
 
-/**
- * This function clears the request list as sequence numbers are passed.
- */
-void
-i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
-{
-	while (!list_empty(&engine->request_list)) {
-		struct drm_i915_gem_request *request;
-
-		request = list_first_entry(&engine->request_list,
-					   struct drm_i915_gem_request,
-					   link);
-
-		if (!i915_gem_request_completed(request))
-			break;
-
-		i915_gem_request_retire_upto(request);
-	}
-}
-
-void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
-{
-	struct intel_engine_cs *engine;
-
-	if (dev_priv->gt.active_engines == 0)
-		return;
-
-	GEM_BUG_ON(!dev_priv->gt.awake);
-
-	for_each_engine(engine, dev_priv) {
-		i915_gem_retire_requests_ring(engine);
-		if (list_empty(&engine->request_list))
-			dev_priv->gt.active_engines &= ~intel_engine_flag(engine);
-	}
-
-	if (dev_priv->gt.active_engines == 0)
-		queue_delayed_work(dev_priv->wq,
-				   &dev_priv->gt.idle_work,
-				   msecs_to_jiffies(100));
-}
-
 static void
 i915_gem_retire_work_handler(struct work_struct *work)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 2e13934041f3..38e5daecd8f5 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -733,3 +733,36 @@ int i915_wait_request(struct drm_i915_gem_request *req)
 
 	return 0;
 }
+
+void i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
+{
+	struct drm_i915_gem_request *request, *next;
+
+	list_for_each_entry_safe(request, next, &engine->request_list, link) {
+		if (!i915_gem_request_completed(request))
+			break;
+
+		i915_gem_request_retire(request);
+	}
+}
+
+void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
+{
+	struct intel_engine_cs *engine;
+
+	if (dev_priv->gt.active_engines == 0)
+		return;
+
+	GEM_BUG_ON(!dev_priv->gt.awake);
+
+	for_each_engine(engine, dev_priv) {
+		i915_gem_retire_requests_ring(engine);
+		if (list_empty(&engine->request_list))
+			dev_priv->gt.active_engines &= ~intel_engine_flag(engine);
+	}
+
+	if (dev_priv->gt.active_engines == 0)
+		queue_delayed_work(dev_priv->wq,
+				   &dev_priv->gt.idle_work,
+				   msecs_to_jiffies(100));
+}
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 52/62] drm/i915: Amalgamate GGTT/ppGTT vma debug list walkers
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (50 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 51/62] drm/i915: Move request list retirement to i915_gem_request.c Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 53/62] drm/i915: Split early global GTT initialisation Chris Wilson
                   ` (11 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

As we can now have multiple VMAs inside the global GTT (with partial
mappings, rotations, etc.), it is no longer true that an object has at
most a single GGTT entry, and so we should walk the full vma_list to
count up the actual usage. In addition to unifying the two walkers,
switch from charging the whole object size for each vma to summing the
bound vma sizes.
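
The heart of the change, in sketch form:

	/* before: each bound vma charged the whole object */
	stats->active += obj->base.size;

	/* after: charge only what each vma actually binds */
	stats->active += vma->node.size;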

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c | 46 +++++++++++++++----------------------
 1 file changed, 18 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 345caf2e1841..338c85a5ab27 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -341,6 +341,7 @@ static int per_file_stats(int id, void *ptr, void *data)
 	struct drm_i915_gem_object *obj = ptr;
 	struct file_stats *stats = data;
 	struct i915_vma *vma;
+	int bound = 0;
 
 	stats->count++;
 	stats->total += obj->base.size;
@@ -348,41 +349,30 @@ static int per_file_stats(int id, void *ptr, void *data)
 	if (obj->base.name || obj->base.dma_buf)
 		stats->shared += obj->base.size;
 
-	if (USES_FULL_PPGTT(obj->base.dev)) {
-		list_for_each_entry(vma, &obj->vma_list, obj_link) {
-			struct i915_hw_ppgtt *ppgtt;
+	list_for_each_entry(vma, &obj->vma_list, obj_link) {
+		if (!drm_mm_node_allocated(&vma->node))
+			continue;
 
-			if (!drm_mm_node_allocated(&vma->node))
-				continue;
+		bound++;
 
-			if (vma->is_ggtt) {
-				stats->global += obj->base.size;
-				continue;
-			}
-
-			ppgtt = container_of(vma->vm, struct i915_hw_ppgtt, base);
+		if (vma->is_ggtt) {
+			stats->global += vma->node.size;
+		} else {
+			struct i915_hw_ppgtt *ppgtt
+				= container_of(vma->vm,
+					       struct i915_hw_ppgtt,
+					       base);
 			if (ppgtt->file_priv != stats->file_priv)
 				continue;
-
-			if (obj->active) /* XXX per-vma statistic */
-				stats->active += obj->base.size;
-			else
-				stats->inactive += obj->base.size;
-
-			return 0;
-		}
-	} else {
-		if (i915_gem_obj_ggtt_bound(obj)) {
-			stats->global += obj->base.size;
-			if (obj->active)
-				stats->active += obj->base.size;
-			else
-				stats->inactive += obj->base.size;
-			return 0;
 		}
+
+		if (obj->active) /* XXX per-vma statistic */
+			stats->active += vma->node.size;
+		else
+			stats->inactive += vma->node.size;
 	}
 
-	if (!list_empty(&obj->global_list))
+	if (!bound)
 		stats->unbound += obj->base.size;
 
 	return 0;
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 53/62] drm/i915: Split early global GTT initialisation
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (51 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 52/62] drm/i915: Amalgamate GGTT/ppGTT vma debug list walkers Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 54/62] drm/i915: Store owning file on the i915_address_space Chris Wilson
                   ` (10 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Initialising the global GTT is tricky as we wish to use the drm_mm range
manager during the modesetting initialisation (to capture stolen
allocations from the BIOS) before we actually enable GEM. To overcome
this, we currently set up the drm_mm first and then carefully rebind
the preallocated (stolen) objects afterwards.
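
After the split, the order inside i915_ggtt_init_hw() becomes roughly
(a sketch abridged from the patch, eliding error handling):

	init_global_gtt(dev_priv);	/* drm_mm ready for reservations */
	ggtt->mappable = io_mapping_create_wc(ggtt->mappable_base,
					      ggtt->mappable_end);
	ggtt->mtrr = arch_phys_wc_add(ggtt->mappable_base,
				      ggtt->mappable_end);
	ret = i915_gem_init_stolen(dev);	/* BIOS preallocations */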

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.c        | 19 -------
 drivers/gpu/drm/i915/i915_gem.c        |  6 ++-
 drivers/gpu/drm/i915/i915_gem_gtt.c    | 98 +++++++++++++---------------------
 drivers/gpu/drm/i915/i915_gem_gtt.h    |  2 +-
 drivers/gpu/drm/i915/i915_gem_stolen.c | 17 +++---
 5 files changed, 49 insertions(+), 93 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index babeee1a6127..4483f9e75aa5 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1340,8 +1340,6 @@ static void i915_driver_cleanup_mmio(struct drm_i915_private *dev_priv)
 static int i915_driver_init_hw(struct drm_i915_private *dev_priv)
 {
 	struct drm_device *dev = dev_priv->dev;
-	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	uint32_t aperture_size;
 	int ret;
 
 	if (i915_inject_load_failure())
@@ -1385,7 +1383,6 @@ static int i915_driver_init_hw(struct drm_i915_private *dev_priv)
 		}
 	}
 
-
 	/* 965GM sometimes incorrectly writes to hardware status page (HWS)
 	 * using 32bit addressing, overwriting memory if HWS is located
 	 * above 4GB.
@@ -1404,19 +1401,6 @@ static int i915_driver_init_hw(struct drm_i915_private *dev_priv)
 		}
 	}
 
-	aperture_size = ggtt->mappable_end;
-
-	ggtt->mappable =
-		io_mapping_create_wc(ggtt->mappable_base,
-				     aperture_size);
-	if (!ggtt->mappable) {
-		ret = -EIO;
-		goto out_ggtt;
-	}
-
-	ggtt->mtrr = arch_phys_wc_add(ggtt->mappable_base,
-					      aperture_size);
-
 	pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY,
 			   PM_QOS_DEFAULT_VALUE);
 
@@ -1457,14 +1441,11 @@ out_ggtt:
 static void i915_driver_cleanup_hw(struct drm_i915_private *dev_priv)
 {
 	struct drm_device *dev = dev_priv->dev;
-	struct i915_ggtt *ggtt = &dev_priv->ggtt;
 
 	if (dev->pdev->msi_enabled)
 		pci_disable_msi(dev->pdev);
 
 	pm_qos_remove_request(&dev_priv->pm_qos);
-	arch_phys_wc_del(ggtt->mtrr);
-	io_mapping_free(ggtt->mappable);
 	i915_ggtt_cleanup_hw(dev);
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 20e174f7fc9e..b51d20a4f1ea 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4112,7 +4112,10 @@ int i915_gem_init(struct drm_device *dev)
 	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
 
 	i915_gem_init_userptr(dev_priv);
-	i915_gem_init_ggtt(dev);
+
+	ret = i915_gem_init_ggtt(dev);
+	if (ret)
+		goto out_unlock;
 
 	ret = i915_gem_context_init(dev);
 	if (ret)
@@ -4202,7 +4205,6 @@ i915_gem_load_init(struct drm_device *dev)
 				  SLAB_HWCACHE_ALIGN,
 				  NULL);
 
-	INIT_LIST_HEAD(&dev_priv->vm_list);
 	INIT_LIST_HEAD(&dev_priv->context_list);
 	INIT_LIST_HEAD(&dev_priv->mm.unbound_list);
 	INIT_LIST_HEAD(&dev_priv->mm.bound_list);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 5d718c488f23..1cdd26ea94ed 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2706,10 +2706,7 @@ static void i915_gtt_color_adjust(struct drm_mm_node *node,
 	}
 }
 
-static int i915_gem_setup_global_gtt(struct drm_device *dev,
-				     u64 start,
-				     u64 mappable_end,
-				     u64 end)
+int i915_gem_init_ggtt(struct drm_device *dev)
 {
 	/* Let GEM Manage all of the aperture.
 	 *
@@ -2722,48 +2719,16 @@ static int i915_gem_setup_global_gtt(struct drm_device *dev,
 	 */
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	struct drm_mm_node *entry;
-	struct drm_i915_gem_object *obj;
 	unsigned long hole_start, hole_end;
+	struct drm_mm_node *entry;
 	int ret;
 
-	BUG_ON(mappable_end > end);
-
-	ggtt->base.start = start;
-
-	/* Subtract the guard page before address space initialization to
-	 * shrink the range used by drm_mm */
-	ggtt->base.total = end - start - PAGE_SIZE;
-	i915_address_space_init(&ggtt->base, dev_priv);
-	ggtt->base.total += PAGE_SIZE;
-
 	if (intel_vgpu_active(dev_priv)) {
 		ret = intel_vgt_balloon(dev);
 		if (ret)
 			return ret;
 	}
 
-	if (!HAS_LLC(dev))
-		ggtt->base.mm.color_adjust = i915_gtt_color_adjust;
-
-	/* Mark any preallocated objects as occupied */
-	list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
-		struct i915_vma *vma = i915_gem_obj_to_vma(obj, &ggtt->base);
-
-		DRM_DEBUG_KMS("reserving preallocated space: %llx + %zx\n",
-			      i915_gem_obj_ggtt_offset(obj), obj->base.size);
-
-		WARN_ON(i915_gem_obj_ggtt_bound(obj));
-		ret = drm_mm_reserve_node(&ggtt->base.mm, &vma->node);
-		if (ret) {
-			DRM_DEBUG_KMS("Reservation failed: %i\n", ret);
-			return ret;
-		}
-		vma->bound |= GLOBAL_BIND;
-		__i915_vma_set_map_and_fenceable(vma);
-		list_add_tail(&vma->vm_link, &ggtt->base.inactive_list);
-	}
-
 	/* Clear any non-preallocated blocks */
 	drm_mm_for_each_hole(entry, &ggtt->base.mm, hole_start, hole_end) {
 		DRM_DEBUG_KMS("clearing unused GTT space: [%lx, %lx]\n",
@@ -2773,9 +2738,11 @@ static int i915_gem_setup_global_gtt(struct drm_device *dev,
 	}
 
 	/* And finally clear the reserved guard page */
-	ggtt->base.clear_range(&ggtt->base, end - PAGE_SIZE, PAGE_SIZE, true);
+	ggtt->base.clear_range(&ggtt->base,
+			       ggtt->base.total - PAGE_SIZE, PAGE_SIZE,
+			       true);
 
-	if (USES_PPGTT(dev) && !USES_FULL_PPGTT(dev)) {
+	if (USES_PPGTT(dev_priv) && !USES_FULL_PPGTT(dev_priv)) {
 		struct i915_hw_ppgtt *ppgtt;
 
 		ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
@@ -2811,16 +2778,20 @@ static int i915_gem_setup_global_gtt(struct drm_device *dev,
 	return 0;
 }
 
-/**
- * i915_gem_init_ggtt - Initialize GEM for Global GTT
- * @dev: DRM device
- */
-void i915_gem_init_ggtt(struct drm_device *dev)
+static void init_global_gtt(struct drm_i915_private *dev_priv)
 {
-	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct i915_ggtt *ggtt = &dev_priv->ggtt;
+	struct i915_address_space *ggtt = &dev_priv->ggtt.base;
+
+	INIT_LIST_HEAD(&dev_priv->vm_list);
 
-	i915_gem_setup_global_gtt(dev, 0, ggtt->mappable_end, ggtt->base.total);
+	/* Subtract the guard page before address space initialization to
+	 * shrink the range used by drm_mm */
+	ggtt->total -= PAGE_SIZE;
+	i915_address_space_init(ggtt, dev_priv);
+	ggtt->total += PAGE_SIZE;
+
+	if (!HAS_LLC(dev_priv))
+		ggtt->mm.color_adjust = i915_gtt_color_adjust;
 }
 
 /**
@@ -2849,6 +2820,9 @@ void i915_ggtt_cleanup_hw(struct drm_device *dev)
 	}
 
 	ggtt->base.cleanup(&ggtt->base);
+
+	arch_phys_wc_del(ggtt->mtrr);
+	io_mapping_free(ggtt->mappable);
 }
 
 static unsigned int gen6_get_total_gtt_size(u16 snb_gmch_ctl)
@@ -3207,21 +3181,14 @@ int i915_ggtt_init_hw(struct drm_device *dev)
 	if (ret)
 		return ret;
 
-	if ((ggtt->base.total - 1) >> 32) {
-		DRM_ERROR("We never expected a Global GTT with more than 32bits"
-			  "of address space! Found %lldM!\n",
-			  ggtt->base.total >> 20);
-		ggtt->base.total = 1ULL << 32;
-		ggtt->mappable_end = min(ggtt->mappable_end, ggtt->base.total);
-	}
+	init_global_gtt(dev_priv);
 
-	/*
-	 * Initialise stolen early so that we may reserve preallocated
-	 * objects for the BIOS to KMS transition.
-	 */
-	ret = i915_gem_init_stolen(dev);
-	if (ret)
-		goto out_gtt_cleanup;
+	ggtt->mappable =
+		io_mapping_create_wc(ggtt->mappable_base, ggtt->mappable_end);
+	if (ggtt->mappable == NULL)
+		return -EIO;
+
+	ggtt->mtrr = arch_phys_wc_add(ggtt->mappable_base, ggtt->mappable_end);
 
 	/* GMADR is the PCI mmio aperture into the global GTT. */
 	DRM_INFO("Memory usable by graphics device = %lluM\n",
@@ -3233,11 +3200,18 @@ int i915_ggtt_init_hw(struct drm_device *dev)
 		DRM_INFO("VT-d active for gfx access\n");
 #endif
 
+	/*
+	 * Initialise stolen early so that we may reserve preallocated
+	 * objects for the BIOS to KMS transition.
+	 */
+	ret = i915_gem_init_stolen(dev);
+	if (ret)
+		goto out_gtt_cleanup;
+
 	return 0;
 
 out_gtt_cleanup:
 	ggtt->base.cleanup(&ggtt->base);
-
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 62be77cac5cd..2a7221b6c9c5 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -518,7 +518,7 @@ i915_page_dir_dma_addr(const struct i915_hw_ppgtt *ppgtt, const unsigned n)
 
 int i915_ggtt_init_hw(struct drm_device *dev);
 int i915_ggtt_enable_hw(struct drm_device *dev);
-void i915_gem_init_ggtt(struct drm_device *dev);
+int i915_gem_init_ggtt(struct drm_device *dev);
 void i915_ggtt_cleanup_hw(struct drm_device *dev);
 
 int i915_ppgtt_init_hw(struct drm_device *dev);
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index ecf920b1f986..4bd71d6956e2 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -696,18 +696,17 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	 */
 	vma->node.start = gtt_offset;
 	vma->node.size = size;
-	if (drm_mm_initialized(&ggtt->base.mm)) {
-		ret = drm_mm_reserve_node(&ggtt->base.mm, &vma->node);
-		if (ret) {
-			DRM_DEBUG_KMS("failed to allocate stolen GTT space\n");
-			goto err;
-		}
 
-		vma->bound |= GLOBAL_BIND;
-		__i915_vma_set_map_and_fenceable(vma);
-		list_add_tail(&vma->vm_link, &ggtt->base.inactive_list);
+	ret = drm_mm_reserve_node(&ggtt->base.mm, &vma->node);
+	if (ret) {
+		DRM_DEBUG_KMS("failed to allocate stolen GTT space\n");
+		goto err;
 	}
 
+	vma->bound |= GLOBAL_BIND;
+	__i915_vma_set_map_and_fenceable(vma);
+	list_add_tail(&vma->vm_link, &ggtt->base.inactive_list);
+
 	list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
 	i915_gem_object_pin_pages(obj);
 
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 54/62] drm/i915: Store owning file on the i915_address_space
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (52 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 53/62] drm/i915: Split early global GTT initialisation Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 55/62] drm/i915: i915_vma_move_to_active prep patch Chris Wilson
                   ` (9 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

For the global GTT (and aliasing GTT), the address space is owned by the
device (it is a global resource) and so the per-file owner field is
NULL. For per-process GTT (where we create an address space per
context), each is owned by the opening file. We can use this ownership
information both to distinguish GGTT from ppGTT address spaces and to
occasionally inspect the owner.

v2: Whitespace, tells us who owns i915_address_space
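
With the owner stored on the address space itself, the global GTT
check reduces to a NULL test (as in the patch):

	#define i915_is_ggtt(V) ((V)->file == NULL)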

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c     |  2 +-
 drivers/gpu/drm/i915/i915_drv.h         |  1 -
 drivers/gpu/drm/i915/i915_gem_context.c |  3 ++-
 drivers/gpu/drm/i915/i915_gem_gtt.c     | 29 +++++++++++++++--------------
 drivers/gpu/drm/i915/i915_gem_gtt.h     | 17 +++++++++++------
 5 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 338c85a5ab27..2e0eb8f5cf35 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -362,7 +362,7 @@ static int per_file_stats(int id, void *ptr, void *data)
 				= container_of(vma->vm,
 					       struct i915_hw_ppgtt,
 					       base);
-			if (ppgtt->file_priv != stats->file_priv)
+			if (ppgtt->base.file != stats->file_priv)
 				continue;
 		}
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 089415f51a0b..92cd3744783c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3192,7 +3192,6 @@ i915_vm_to_ppgtt(struct i915_address_space *vm)
 	return container_of(vm, struct i915_hw_ppgtt, base);
 }
 
-
 static inline bool i915_gem_obj_ggtt_bound(struct drm_i915_gem_object *obj)
 {
 	return i915_gem_obj_ggtt_bound_view(obj, &i915_ggtt_view_normal);
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 8641783618dc..a649f6eabf98 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -323,7 +323,8 @@ i915_gem_create_context(struct drm_device *dev,
 		return ctx;
 
 	if (USES_FULL_PPGTT(dev)) {
-		struct i915_hw_ppgtt *ppgtt = i915_ppgtt_create(dev, file_priv);
+		struct i915_hw_ppgtt *ppgtt =
+			i915_ppgtt_create(to_i915(dev), file_priv);
 
 		if (IS_ERR(ppgtt)) {
 			DRM_DEBUG_DRIVER("PPGTT setup failed (%ld)\n",
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 1cdd26ea94ed..57fc84b9b633 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2124,11 +2124,12 @@ static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
 	return 0;
 }
 
-static int __hw_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt)
+static int __hw_ppgtt_init(struct i915_hw_ppgtt *ppgtt,
+			   struct drm_i915_private *dev_priv)
 {
-	ppgtt->base.dev = dev;
+	ppgtt->base.dev = dev_priv->dev;
 
-	if (INTEL_INFO(dev)->gen < 8)
+	if (INTEL_INFO(dev_priv)->gen < 8)
 		return gen6_ppgtt_init(ppgtt);
 	else
 		return gen8_ppgtt_init(ppgtt);
@@ -2163,15 +2164,17 @@ static void gtt_write_workarounds(struct drm_device *dev)
 		I915_WRITE(GEN8_L3_LRA_1_GPGPU, GEN9_L3_LRA_1_GPGPU_DEFAULT_VALUE_BXT);
 }
 
-static int i915_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt)
+static int i915_ppgtt_init(struct i915_hw_ppgtt *ppgtt,
+			   struct drm_i915_private *dev_priv,
+			   struct drm_i915_file_private *file_priv)
 {
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	int ret = 0;
+	int ret;
 
-	ret = __hw_ppgtt_init(dev, ppgtt);
+	ret = __hw_ppgtt_init(ppgtt, dev_priv);
 	if (ret == 0) {
 		kref_init(&ppgtt->ref);
 		i915_address_space_init(&ppgtt->base, dev_priv);
+		ppgtt->base.file = file_priv;
 	}
 
 	return ret;
@@ -2203,7 +2206,8 @@ int i915_ppgtt_init_hw(struct drm_device *dev)
 }
 
 struct i915_hw_ppgtt *
-i915_ppgtt_create(struct drm_device *dev, struct drm_i915_file_private *fpriv)
+i915_ppgtt_create(struct drm_i915_private *dev_priv,
+		  struct drm_i915_file_private *fpriv)
 {
 	struct i915_hw_ppgtt *ppgtt;
 	int ret;
@@ -2212,14 +2216,12 @@ i915_ppgtt_create(struct drm_device *dev, struct drm_i915_file_private *fpriv)
 	if (!ppgtt)
 		return ERR_PTR(-ENOMEM);
 
-	ret = i915_ppgtt_init(dev, ppgtt);
+	ret = i915_ppgtt_init(ppgtt, dev_priv, fpriv);
 	if (ret) {
 		kfree(ppgtt);
 		return ERR_PTR(ret);
 	}
 
-	ppgtt->file_priv = fpriv;
-
 	trace_i915_ppgtt_create(&ppgtt->base);
 
 	return ppgtt;
@@ -2749,7 +2751,7 @@ int i915_gem_init_ggtt(struct drm_device *dev)
 		if (!ppgtt)
 			return -ENOMEM;
 
-		ret = __hw_ppgtt_init(dev, ppgtt);
+		ret = __hw_ppgtt_init(ppgtt, dev_priv);
 		if (ret) {
 			ppgtt->base.cleanup(&ppgtt->base);
 			kfree(ppgtt);
@@ -3175,7 +3177,6 @@ int i915_ggtt_init_hw(struct drm_device *dev)
 	}
 
 	ggtt->base.dev = dev;
-	ggtt->base.is_ggtt = true;
 
 	ret = ggtt->probe(ggtt);
 	if (ret)
@@ -3267,7 +3268,7 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
 
 			struct i915_hw_ppgtt *ppgtt;
 
-			if (vm->is_ggtt)
+			if (i915_is_ggtt(vm))
 				ppgtt = dev_priv->mm.aliasing_ppgtt;
 			else
 				ppgtt = i915_vm_to_ppgtt(vm);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 2a7221b6c9c5..4cabf891fd1d 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -272,12 +272,19 @@ struct i915_pml4 {
 struct i915_address_space {
 	struct drm_mm mm;
 	struct drm_device *dev;
+	/* Every address space belongs to a struct file - except for the global
+	 * GTT that is owned by the driver (and so @file is set to NULL). In
+	 * principle, no information should leak from one context to another
+	 * (or between files/processes etc) unless explicitly shared by the
+	 * owner. Tracking the owner is important in order to free up per-file
+	 * objects along with the file, to aid resource tracking, and to
+	 * assign blame.
+	 */
+	struct drm_i915_file_private *file;
 	struct list_head global_link;
 	u64 start;		/* Start offset always 0 for dri2 */
 	u64 total;		/* size addr space maps (ex. 2GB for ggtt) */
 
-	bool is_ggtt;
-
 	struct i915_page_scratch *scratch_page;
 	struct i915_page_table *scratch_pt;
 	struct i915_page_directory *scratch_pd;
@@ -333,7 +340,7 @@ struct i915_address_space {
 			u32 flags);
 };
 
-#define i915_is_ggtt(V) ((V)->is_ggtt)
+#define i915_is_ggtt(V) ((V)->file == NULL)
 
 /* The Graphics Translation Table is the way in which GEN hardware translates a
  * Graphics Virtual Address into a Physical Address. In addition to the normal
@@ -375,8 +382,6 @@ struct i915_hw_ppgtt {
 		struct i915_page_directory pd;		/* GEN6-7 */
 	};
 
-	struct drm_i915_file_private *file_priv;
-
 	gen6_pte_t __iomem *pd_addr;
 
 	int (*enable)(struct i915_hw_ppgtt *ppgtt);
@@ -523,7 +528,7 @@ void i915_ggtt_cleanup_hw(struct drm_device *dev);
 
 int i915_ppgtt_init_hw(struct drm_device *dev);
 void i915_ppgtt_release(struct kref *kref);
-struct i915_hw_ppgtt *i915_ppgtt_create(struct drm_device *dev,
+struct i915_hw_ppgtt *i915_ppgtt_create(struct drm_i915_private *dev_priv,
 					struct drm_i915_file_private *fpriv);
 static inline void i915_ppgtt_get(struct i915_hw_ppgtt *ppgtt)
 {
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 55/62] drm/i915: i915_vma_move_to_active prep patch
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (53 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 54/62] drm/i915: Store owning file on the i915_address_space Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 56/62] drm/i915: Count how many VMA are bound for an object Chris Wilson
                   ` (8 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

This patch is broken out of the next just to remove the code motion from
that patch and make it more readable. What we do here is move the
i915_vma_move_to_active() to i915_gem_execbuffer.c and put the three
stages (read, write, fenced) together so that future modifications to
active handling are all located in the same spot. The importance of this
is that we can more simply control the order in which the requests are
placed in the retirement list (i.e. control the order in which we
retire and so control the lifetimes, avoiding having to hold onto
references).
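
Callers now pass the execbuffer flags in a single call, e.g. (taken
from the patch):

	i915_vma_move_to_active(vma, req, vma->exec_entry->flags);

where EXEC_OBJECT_WRITE and EXEC_OBJECT_NEEDS_FENCE select the write
and fence tracking stages respectively.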

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h              |  3 +-
 drivers/gpu/drm/i915/i915_gem.c              | 16 -------
 drivers/gpu/drm/i915/i915_gem_context.c      |  9 ++--
 drivers/gpu/drm/i915/i915_gem_execbuffer.c   | 64 ++++++++++++++++++----------
 drivers/gpu/drm/i915/i915_gem_render_state.c |  2 +-
 5 files changed, 50 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 92cd3744783c..912d54b6998a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3045,7 +3045,8 @@ int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
 int i915_gem_object_sync(struct drm_i915_gem_object *obj,
 			 struct drm_i915_gem_request *to);
 void i915_vma_move_to_active(struct i915_vma *vma,
-			     struct drm_i915_gem_request *req);
+			     struct drm_i915_gem_request *req,
+			     unsigned flags);
 int i915_gem_dumb_create(struct drm_file *file_priv,
 			 struct drm_device *dev,
 			 struct drm_mode_create_dumb *args);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b51d20a4f1ea..ca6b55f52f8b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2066,22 +2066,6 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj)
 	return obj->mapping;
 }
 
-void i915_vma_move_to_active(struct i915_vma *vma,
-			     struct drm_i915_gem_request *req)
-{
-	struct drm_i915_gem_object *obj = vma->obj;
-	struct intel_engine_cs *engine = req->engine;
-
-	/* Add a reference if we're newly entering the active list. */
-	if (obj->active == 0)
-		i915_gem_object_get(obj);
-	obj->active |= intel_engine_flag(engine);
-
-	i915_gem_active_set(&obj->last_read[engine->id], req);
-
-	list_move_tail(&vma->vm_link, &vma->vm->active_list);
-}
-
 static void
 i915_gem_object_retire__fence(struct i915_gem_active *active,
 			      struct drm_i915_gem_request *req)
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index a649f6eabf98..cace85998204 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -775,8 +775,8 @@ static int do_rcs_switch(struct drm_i915_gem_request *req)
 	 * MI_SET_CONTEXT instead of when the next seqno has completed.
 	 */
 	if (from != NULL) {
-		from->engine[RCS].state->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
-		i915_vma_move_to_active(i915_gem_obj_to_ggtt(from->engine[RCS].state), req);
+		struct drm_i915_gem_object *obj = from->engine[RCS].state;
+
 		/* As long as MI_SET_CONTEXT is serializing, ie. it flushes the
 		 * whole damn pipeline, we don't need to explicitly mark the
 		 * object dirty. The only exception is that the context must be
@@ -784,10 +784,11 @@ static int do_rcs_switch(struct drm_i915_gem_request *req)
 		 * able to defer doing this until we know the object would be
 		 * swapped, but there is no way to do that yet.
 		 */
-		from->engine[RCS].state->dirty = 1;
+		obj->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
+		i915_vma_move_to_active(i915_gem_obj_to_ggtt(obj), req, 0);
 
 		/* obj is kept alive until the next request by its active ref */
-		i915_gem_object_ggtt_unpin(from->engine[RCS].state);
+		i915_gem_object_ggtt_unpin(obj);
 		i915_gem_context_put(from);
 	}
 	engine->last_context = i915_gem_context_get(to);
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 6fa13c618a6b..e099080b3b5b 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1110,43 +1110,63 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
 	return ctx;
 }
 
+void i915_vma_move_to_active(struct i915_vma *vma,
+			     struct drm_i915_gem_request *req,
+			     unsigned flags)
+{
+	struct drm_i915_gem_object *obj = vma->obj;
+	const unsigned idx = req->engine->id;
+
+	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+
+	obj->dirty = 1; /* be paranoid  */
+
+	/* Add a reference if we're newly entering the active list. */
+	if (obj->active == 0)
+		i915_gem_object_get(obj);
+	obj->active |= 1 << idx;
+	i915_gem_active_set(&obj->last_read[idx], req);
+
+	if (flags & EXEC_OBJECT_WRITE) {
+		i915_gem_active_set(&obj->last_write, req);
+
+		intel_fb_obj_invalidate(obj, ORIGIN_CS);
+
+		/* update for the implicit flush after a batch */
+		obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
+	}
+
+	if (flags & EXEC_OBJECT_NEEDS_FENCE) {
+		i915_gem_active_set(&obj->last_fence, req);
+		if (flags & __EXEC_OBJECT_HAS_FENCE) {
+			struct drm_i915_private *dev_priv = req->i915;
+			list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
+				       &dev_priv->mm.fence_list);
+		}
+	}
+
+	list_move_tail(&vma->vm_link, &vma->vm->active_list);
+}
+
 static void
 i915_gem_execbuffer_move_to_active(struct list_head *vmas,
 				   struct drm_i915_gem_request *req)
 {
-	struct intel_engine_cs *engine = i915_gem_request_get_engine(req);
 	struct i915_vma *vma;
 
 	list_for_each_entry(vma, vmas, exec_list) {
-		struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
 		struct drm_i915_gem_object *obj = vma->obj;
 		u32 old_read = obj->base.read_domains;
 		u32 old_write = obj->base.write_domain;
 
-		obj->dirty = 1; /* be paranoid  */
 		obj->base.write_domain = obj->base.pending_write_domain;
-		if (obj->base.write_domain == 0)
+		if (obj->base.write_domain)
+			vma->exec_entry->flags |= EXEC_OBJECT_WRITE;
+		else
 			obj->base.pending_read_domains |= obj->base.read_domains;
 		obj->base.read_domains = obj->base.pending_read_domains;
 
-		i915_vma_move_to_active(vma, req);
-		if (obj->base.write_domain) {
-			i915_gem_active_set(&obj->last_write, req);
-
-			intel_fb_obj_invalidate(obj, ORIGIN_CS);
-
-			/* update for the implicit flush after a batch */
-			obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
-		}
-		if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
-			i915_gem_active_set(&obj->last_fence, req);
-			if (entry->flags & __EXEC_OBJECT_HAS_FENCE) {
-				struct drm_i915_private *dev_priv = engine->i915;
-				list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
-					       &dev_priv->mm.fence_list);
-			}
-		}
-
+		i915_vma_move_to_active(vma, req, vma->exec_entry->flags);
 		trace_i915_gem_object_change_domain(obj, old_read, old_write);
 	}
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index 8587dbc302e0..c0abe9a2210f 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -232,7 +232,7 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
 			goto out;
 	}
 
-	i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req);
+	i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req, 0);
 out:
 	render_state_fini(&so);
 	return ret;
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 56/62] drm/i915: Count how many VMA are bound for an object
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (54 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 55/62] drm/i915: i915_vma_move_to_active prep patch Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 57/62] drm/i915: Be more careful when unbinding vma Chris Wilson
                   ` (7 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Since we may have VMAs allocated for an object whose binding we
interrupted, there is a disparity between having elements on the
obj->vma_list and being bound. i915_gem_obj_bound_any() does this
check, but it is not rigorously observed - add an explicit count to
make it easier.
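
The counter simply mirrors the bind/unbind transitions (a sketch of
the invariant, from the patch):

	/* on successful bind */
	list_add_tail(&vma->vm_link, &vm->inactive_list);
	obj->bind_count++;

	/* on unbind: last vma gone, back to the unbound list */
	if (--obj->bind_count == 0)
		list_move_tail(&obj->global_list,
			       &to_i915(obj->base.dev)->mm.unbound_list);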

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c      | 12 +++++------
 drivers/gpu/drm/i915/i915_drv.h          |  3 ++-
 drivers/gpu/drm/i915/i915_gem.c          | 34 +++++++++++++-------------------
 drivers/gpu/drm/i915/i915_gem_shrinker.c | 17 +---------------
 drivers/gpu/drm/i915/i915_gem_stolen.c   |  1 +
 5 files changed, 23 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 2e0eb8f5cf35..51f84dd37675 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -177,6 +177,9 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 	if (obj->fence_reg != I915_FENCE_REG_NONE)
 		seq_printf(m, " (fence: %d)", obj->fence_reg);
 	list_for_each_entry(vma, &obj->vma_list, obj_link) {
+		if (!drm_mm_node_allocated(&vma->node))
+			continue;
+
 		seq_printf(m, " (%sgtt offset: %08llx, size: %08llx",
 			   vma->is_ggtt ? "g" : "pp",
 			   vma->node.start, vma->node.size);
@@ -341,11 +344,11 @@ static int per_file_stats(int id, void *ptr, void *data)
 	struct drm_i915_gem_object *obj = ptr;
 	struct file_stats *stats = data;
 	struct i915_vma *vma;
-	int bound = 0;
 
 	stats->count++;
 	stats->total += obj->base.size;
-
+	if (!obj->bind_count)
+		stats->unbound += obj->base.size;
 	if (obj->base.name || obj->base.dma_buf)
 		stats->shared += obj->base.size;
 
@@ -353,8 +356,6 @@ static int per_file_stats(int id, void *ptr, void *data)
 		if (!drm_mm_node_allocated(&vma->node))
 			continue;
 
-		bound++;
-
 		if (vma->is_ggtt) {
 			stats->global += vma->node.size;
 		} else {
@@ -372,9 +373,6 @@ static int per_file_stats(int id, void *ptr, void *data)
 			stats->inactive += vma->node.size;
 	}
 
-	if (!bound)
-		stats->unbound += obj->base.size;
-
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 912d54b6998a..dd3f7afdf423 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2198,6 +2198,8 @@ struct drm_i915_gem_object {
 
 	unsigned int frontbuffer_bits:INTEL_FRONTBUFFER_BITS;
 
+	/** Count of VMA actually bound by this object */
+	unsigned int bind_count;
 	unsigned int pin_display;
 
 	struct sg_table *pages;
@@ -3159,7 +3161,6 @@ i915_gem_obj_ggtt_offset(struct drm_i915_gem_object *o)
 	return i915_gem_obj_ggtt_offset_view(o, &i915_ggtt_view_normal);
 }
 
-bool i915_gem_obj_bound_any(struct drm_i915_gem_object *o);
 bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
 				  const struct i915_ggtt_view *view);
 bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ca6b55f52f8b..2ba467c0b0b7 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1822,7 +1822,7 @@ i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 	if (obj->pages_pin_count)
 		return -EBUSY;
 
-	BUG_ON(i915_gem_obj_bound_any(obj));
+	BUG_ON(obj->bind_count);
 
 	/* ->put_pages might need to allocate memory for the bit17 swizzle
 	 * array, hence protect them from being reaped by removing them from gtt
@@ -2508,7 +2508,6 @@ static void __i915_vma_iounmap(struct i915_vma *vma)
 static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 {
 	struct drm_i915_gem_object *obj = vma->obj;
-	struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
 	int ret;
 
 	if (list_empty(&vma->obj_link))
@@ -2522,7 +2521,8 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 	if (vma->pin_count)
 		return -EBUSY;
 
-	BUG_ON(obj->pages == NULL);
+	GEM_BUG_ON(obj->bind_count == 0);
+	GEM_BUG_ON(obj->pages == NULL);
 
 	if (wait) {
 		ret = i915_gem_object_wait_rendering(obj, false);
@@ -2562,8 +2562,9 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 
 	/* Since the unbound list is global, only move to that list if
 	 * no more VMAs exist. */
-	if (list_empty(&obj->vma_list))
-		list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
+	if (--obj->bind_count == 0)
+		list_move_tail(&obj->global_list,
+			       &to_i915(obj->base.dev)->mm.unbound_list);
 
 	/* And finally now the object is completely decoupled from this vma,
 	 * we can drop its hold on the backing storage and allow it to be
@@ -2792,6 +2793,7 @@ search_free:
 
 	list_move_tail(&obj->global_list, &dev_priv->mm.bound_list);
 	list_add_tail(&vma->vm_link, &vm->inactive_list);
+	obj->bind_count++;
 
 	return vma;
 
@@ -2983,7 +2985,6 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 {
 	struct drm_device *dev = obj->base.dev;
 	struct i915_vma *vma, *next;
-	bool bound = false;
 	int ret = 0;
 
 	if (obj->cache_level == cache_level)
@@ -3007,8 +3008,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 			ret = i915_vma_unbind(vma);
 			if (ret)
 				return ret;
-		} else
-			bound = true;
+		}
 	}
 
 	/* We can reuse the existing drm_mm nodes but need to change the
@@ -3018,7 +3018,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 	 * rewrite the PTE in the belief that doing so tramples upon less
 	 * state and so involves less work.
 	 */
-	if (bound) {
+	if (obj->bind_count) {
 		/* Before we change the PTE, the GPU must not be accessing it.
 		 * If we wait upon the object, we know that all the bound
 		 * VMA are no longer active.
@@ -3227,6 +3227,9 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 					    old_read_domains,
 					    old_write_domain);
 
+	/* Increment the pages_pin_count to guard against the shrinker */
+	obj->pages_pin_count++;
+
 	return 0;
 
 err_unpin_display:
@@ -3243,6 +3246,7 @@ i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj,
 
 	i915_gem_object_ggtt_unpin_view(obj, view);
 
+	obj->pages_pin_count--;
 	obj->pin_display--;
 }
 
@@ -3757,6 +3761,7 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
 			dev_priv->mm.interruptible = was_interruptible;
 		}
 	}
+	GEM_BUG_ON(obj->bind_count);
 
 	/* Stolen objects don't hold a ref, but do hold pin count. Fix that up
 	 * before progressing. */
@@ -4398,17 +4403,6 @@ bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
 	return false;
 }
 
-bool i915_gem_obj_bound_any(struct drm_i915_gem_object *o)
-{
-	struct i915_vma *vma;
-
-	list_for_each_entry(vma, &o->vma_list, obj_link)
-		if (drm_mm_node_allocated(&vma->node))
-			return true;
-
-	return false;
-}
-
 unsigned long i915_gem_obj_ggtt_size(struct drm_i915_gem_object *o)
 {
 	struct i915_vma *vma;
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index c4858c12f69e..a02903007f9a 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -48,21 +48,6 @@ static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
 #endif
 }
 
-static int num_vma_bound(struct drm_i915_gem_object *obj)
-{
-	struct i915_vma *vma;
-	int count = 0;
-
-	list_for_each_entry(vma, &obj->vma_list, obj_link) {
-		if (drm_mm_node_allocated(&vma->node))
-			count++;
-		if (vma->pin_count)
-			count++;
-	}
-
-	return count;
-}
-
 static bool swap_available(void)
 {
 	return get_nr_swap_pages() > 0;
@@ -82,7 +67,7 @@ static bool can_release_pages(struct drm_i915_gem_object *obj)
 	 * to the GPU, simply unbinding from the GPU is not going to succeed
 	 * in releasing our pin count on the pages themselves.
 	 */
-	if (obj->pages_pin_count != num_vma_bound(obj))
+	if (obj->pages_pin_count != obj->bind_count)
 		return false;
 
 	/* We can only return physical pages to the system if we can either
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 4bd71d6956e2..21584e86908c 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -706,6 +706,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	vma->bound |= GLOBAL_BIND;
 	__i915_vma_set_map_and_fenceable(vma);
 	list_add_tail(&vma->vm_link, &ggtt->base.inactive_list);
+	obj->bind_count++;
 
 	list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
 	i915_gem_object_pin_pages(obj);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 57/62] drm/i915: Be more careful when unbinding vma
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (55 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 56/62] drm/i915: Count how many VMA are bound for an object Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 58/62] drm/i915: Kill drop_pages() Chris Wilson
                   ` (6 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

When we call i915_vma_unbind(), we will wait upon outstanding rendering.
This will also trigger a retirement phase, which may update the object
lists. If we extend request tracking to the VMA itself (rather than
keeping it at the encompassing object), then there is a potential for
the obj->vma_list to be modified for other elements upon
i915_vma_unbind(). As
a result, if we walk over the object list and call i915_vma_unbind(), we
need to be prepared for that list to change.
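
The pattern used by the new i915_gem_object_unbind() decouples the
walk from the list (abridged):

	while ((vma = list_first_entry_or_null(&obj->vma_list,
					       struct i915_vma, obj_link))) {
		list_move_tail(&vma->obj_link, &still_in_list);
		ret = i915_vma_unbind(vma); /* may retire, mutating the list */
		if (ret)
			break;
	}
	list_splice(&still_in_list, &obj->vma_list);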

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h          |  2 ++
 drivers/gpu/drm/i915/i915_gem.c          | 57 +++++++++++++++++++++++---------
 drivers/gpu/drm/i915/i915_gem_shrinker.c |  7 +---
 drivers/gpu/drm/i915/i915_gem_userptr.c  |  4 +--
 4 files changed, 46 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index dd3f7afdf423..83c8dcc744fb 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2961,6 +2961,8 @@ int __must_check i915_vma_unbind(struct i915_vma *vma);
  * _guarantee_ VMA in question is _not in use_ anywhere.
  */
 int __must_check __i915_vma_unbind_no_wait(struct i915_vma *vma);
+
+int i915_gem_object_unbind(struct drm_i915_gem_object *obj);
 int i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
 void i915_gem_release_all_mmaps(struct drm_i915_private *dev_priv);
 void i915_gem_release_mmap(struct drm_i915_gem_object *obj);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 2ba467c0b0b7..e5189155e729 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -255,18 +255,38 @@ static const struct drm_i915_gem_object_ops i915_gem_phys_ops = {
 	.release = i915_gem_object_release_phys,
 };
 
+int
+i915_gem_object_unbind(struct drm_i915_gem_object *obj)
+{
+	struct i915_vma *vma;
+	LIST_HEAD(still_in_list);
+	int ret = 0;
+
+	/* The vma will only be freed if it is marked as closed, and if we wait
+	 * upon rendering to the vma, we may unbind anything in the list.
+	 */
+	while ((vma = list_first_entry_or_null(&obj->vma_list,
+					       struct i915_vma,
+					       obj_link))) {
+		list_move_tail(&vma->obj_link, &still_in_list);
+		ret = i915_vma_unbind(vma);
+		if (ret)
+			break;
+	}
+	list_splice(&still_in_list, &obj->vma_list);
+
+	return ret;
+}
+
 static int
 drop_pages(struct drm_i915_gem_object *obj)
 {
-	struct i915_vma *vma, *next;
 	int ret;
 
 	i915_gem_object_get(obj);
-	list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link)
-		if (i915_vma_unbind(vma))
-			break;
-
-	ret = i915_gem_object_put_pages(obj);
+	ret = i915_gem_object_unbind(obj);
+	if (ret == 0)
+		ret = i915_gem_object_put_pages(obj);
 	i915_gem_object_put(obj);
 
 	return ret;
@@ -2983,8 +3003,7 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)
 int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 				    enum i915_cache_level cache_level)
 {
-	struct drm_device *dev = obj->base.dev;
-	struct i915_vma *vma, *next;
+	struct i915_vma *vma;
 	int ret = 0;
 
 	if (obj->cache_level == cache_level)
@@ -2995,7 +3014,8 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 	 * catch the issue of the CS prefetch crossing page boundaries and
 	 * reading an invalid PTE on older architectures.
 	 */
-	list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link) {
+restart:
+	list_for_each_entry(vma, &obj->vma_list, obj_link) {
 		if (!drm_mm_node_allocated(&vma->node))
 			continue;
 
@@ -3004,11 +3024,18 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 			return -EBUSY;
 		}
 
-		if (!i915_gem_valid_gtt_space(vma, cache_level)) {
-			ret = i915_vma_unbind(vma);
-			if (ret)
-				return ret;
-		}
+		if (i915_gem_valid_gtt_space(vma, cache_level))
+			continue;
+
+		ret = i915_vma_unbind(vma);
+		if (ret)
+			return ret;
+
+		/* As unbinding may affect other elements in the
+		 * obj->vma_list (due to side-effects from retiring
+		 * an active vma), play safe and restart the iterator.
+		 */
+		goto restart;
 	}
 
 	/* We can reuse the existing drm_mm nodes but need to change the
@@ -3027,7 +3054,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
 		if (ret)
 			return ret;
 
-		if (!HAS_LLC(dev) && cache_level != I915_CACHE_NONE) {
+		if (!HAS_LLC(obj->base.dev) && cache_level != I915_CACHE_NONE) {
 			/* Access to snoopable pages through the GTT is
 			 * incoherent and on some machines causes a hard
 			 * lockup. Relinquish the CPU mmaping to force
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index a02903007f9a..71ad58836f48 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -155,7 +155,6 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
 		INIT_LIST_HEAD(&still_in_list);
 		while (count < target && !list_empty(phase->list)) {
 			struct drm_i915_gem_object *obj;
-			struct i915_vma *vma, *v;
 
 			obj = list_first_entry(phase->list,
 					       typeof(*obj), global_list);
@@ -178,11 +177,7 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
 			i915_gem_object_get(obj);
 
 			/* For the unbound phase, this should be a no-op! */
-			list_for_each_entry_safe(vma, v,
-						 &obj->vma_list, obj_link)
-				if (i915_vma_unbind(vma))
-					break;
-
+			i915_gem_object_unbind(obj);
 			if (i915_gem_object_put_pages(obj) == 0)
 				count += obj->base.size >> PAGE_SHIFT;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index dd6d823ac3e2..e57521dbddc6 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -103,7 +103,6 @@ static void cancel_userptr(struct work_struct *work)
 
 	if (obj->pages != NULL) {
 		struct drm_i915_private *dev_priv = to_i915(dev);
-		struct i915_vma *vma, *tmp;
 		bool was_interruptible;
 
 		wait_rendering(obj);
@@ -111,8 +110,7 @@ static void cancel_userptr(struct work_struct *work)
 		was_interruptible = dev_priv->mm.interruptible;
 		dev_priv->mm.interruptible = false;
 
-		list_for_each_entry_safe(vma, tmp, &obj->vma_list, obj_link)
-			WARN_ON(i915_vma_unbind(vma));
+		WARN_ON(i915_gem_object_unbind(obj));
 		WARN_ON(i915_gem_object_put_pages(obj));
 
 		dev_priv->mm.interruptible = was_interruptible;
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 58/62] drm/i915: Kill drop_pages()
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (56 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 57/62] drm/i915: Be more careful when unbinding vma Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 59/62] drm/i915: Track active vma requests Chris Wilson
                   ` (5 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

The drop_pages() function is a dangerous trap in that it can release the
passed-in object pointer, and so unless the caller is aware, it can
easily trick us into using a stale object afterwards. Move it into its
solitary callsite where we know it is safe.
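
The hazard, shown with a hypothetical caller (illustrative only):

	/* obj kept alive only by its active reference */
	ret = drop_pages(obj);	/* waits and retires; the internal put
				 * may then free obj */
	obj->madv = I915_MADV_DONTNEED;	/* potential use-after-free */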

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c | 20 +++++---------------
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e5189155e729..a39d767d8137 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -278,20 +278,6 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj)
 	return ret;
 }
 
-static int
-drop_pages(struct drm_i915_gem_object *obj)
-{
-	int ret;
-
-	i915_gem_object_get(obj);
-	ret = i915_gem_object_unbind(obj);
-	if (ret == 0)
-		ret = i915_gem_object_put_pages(obj);
-	i915_gem_object_put(obj);
-
-	return ret;
-}
-
 int
 i915_gem_object_attach_phys(struct drm_i915_gem_object *obj,
 			    int align)
@@ -312,7 +298,11 @@ i915_gem_object_attach_phys(struct drm_i915_gem_object *obj,
 	if (obj->base.filp == NULL)
 		return -EINVAL;
 
-	ret = drop_pages(obj);
+	ret = i915_gem_object_unbind(obj);
+	if (ret)
+		return ret;
+
+	ret = i915_gem_object_put_pages(obj);
 	if (ret)
 		return ret;
 
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 59/62] drm/i915: Track active vma requests
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (57 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 58/62] drm/i915: Kill drop_pages() Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 60/62] drm/i915: Release vma when the handle is closed Chris Wilson
                   ` (4 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

Hook the vma itself into i915_gem_request_retire() so that we can
accurately track when a solitary vma is inactive (as opposed to having
to wait for the entire object to be idle). This improves the interaction
when using multiple contexts (with full-ppgtt) and eliminates some
frequent list walking when retiring objects after a completed request.

A side-effect is that we get an active vma reference for free. The
consequence of this is shown in the next patch...
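
The hook itself is a retirement callback on the vma (abridged from the
patch):

	static void
	i915_vma_retire(struct i915_gem_active *active,
			struct drm_i915_gem_request *rq)
	{
		const unsigned idx = rq->engine->id;
		struct i915_vma *vma =
			container_of(active, struct i915_vma, last_read[idx]);

		i915_vma_unset_active(vma, idx);
		if (!i915_vma_is_active(vma))
			list_move_tail(&vma->vm_link,
				       &vma->vm->inactive_list);
	}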

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_debugfs.c        |  2 +-
 drivers/gpu/drm/i915/i915_gem.c            | 20 +++++++-------------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 10 +++++++++-
 drivers/gpu/drm/i915/i915_gem_gtt.c        | 20 ++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_gtt.h        | 26 ++++++++++++++++++++++++++
 5 files changed, 63 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 51f84dd37675..99857ee0bb8b 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -367,7 +367,7 @@ static int per_file_stats(int id, void *ptr, void *data)
 				continue;
 		}
 
-		if (obj->active) /* XXX per-vma statistic */
+		if (i915_vma_is_active(vma))
 			stats->active += vma->node.size;
 		else
 			stats->inactive += vma->node.size;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a39d767d8137..ef68a9183d7d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2100,7 +2100,6 @@ i915_gem_object_retire__read(struct i915_gem_active *active,
 	int ring = request->engine->id;
 	struct drm_i915_gem_object *obj =
 		container_of(active, struct drm_i915_gem_object, last_read[ring]);
-	struct i915_vma *vma;
 
 	GEM_BUG_ON((obj->active & (1 << ring)) == 0);
 
@@ -2112,12 +2111,9 @@ i915_gem_object_retire__read(struct i915_gem_active *active,
 	 * so that we don't steal from recently used but inactive objects
 	 * (unless we are forced to ofc!)
 	 */
-	list_move_tail(&obj->global_list, &request->i915->mm.bound_list);
-
-	list_for_each_entry(vma, &obj->vma_list, obj_link) {
-		if (!list_empty(&vma->vm_link))
-			list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
-	}
+	if (obj->bind_count)
+		list_move_tail(&obj->global_list,
+			       &request->i915->mm.bound_list);
 
 	i915_gem_object_put(obj);
 }
@@ -2915,9 +2911,6 @@ i915_gem_object_flush_cpu_write_domain(struct drm_i915_gem_object *obj)
 int
 i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)
 {
-	struct drm_device *dev = obj->base.dev;
-	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct i915_ggtt *ggtt = &dev_priv->ggtt;
 	uint32_t old_write_domain, old_read_domains;
 	struct i915_vma *vma;
 	int ret;
@@ -2970,9 +2963,10 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)
 
 	/* And bump the LRU for this access */
 	vma = i915_gem_obj_to_ggtt(obj);
-	if (vma && drm_mm_node_allocated(&vma->node) && !obj->active)
-		list_move_tail(&vma->vm_link,
-			       &ggtt->base.inactive_list);
+	if (vma &&
+	    drm_mm_node_allocated(&vma->node) &&
+	    !i915_vma_is_active(vma))
+		list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index e099080b3b5b..7b381358512e 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1121,7 +1121,13 @@ void i915_vma_move_to_active(struct i915_vma *vma,
 
 	obj->dirty = 1; /* be paranoid  */
 
-	/* Add a reference if we're newly entering the active list. */
+	/* Add a reference if we're newly entering the active list.
+	 * The order in which we add operations to the retirement queue is
+	 * vital here: mark_active adds to the start of the callback list,
+	 * such that subsequent callbacks are called first. Therefore we
+	 * add the active reference first and queue for it to be dropped
+	 * *last*.
+	 */
 	if (obj->active == 0)
 		i915_gem_object_get(obj);
 	obj->active |= 1 << idx;
@@ -1145,6 +1151,8 @@ void i915_vma_move_to_active(struct i915_vma *vma,
 		}
 	}
 
+	i915_vma_set_active(vma, idx);
+	i915_gem_active_set(&vma->last_read[idx], req);
 	list_move_tail(&vma->vm_link, &vma->vm->active_list);
 }
 
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 57fc84b9b633..4d3179e15b94 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3281,12 +3281,30 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
 	i915_ggtt_flush(dev_priv);
 }
 
+static void
+i915_vma_retire(struct i915_gem_active *active,
+		struct drm_i915_gem_request *rq)
+{
+	const unsigned idx = rq->engine->id;
+	struct i915_vma *vma =
+		container_of(active, struct i915_vma, last_read[idx]);
+
+	GEM_BUG_ON(!i915_vma_has_active_engine(vma, idx));
+
+	i915_vma_unset_active(vma, idx);
+	if (i915_vma_is_active(vma))
+		return;
+
+	list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
+}
+
 static struct i915_vma *
 __i915_gem_vma_create(struct drm_i915_gem_object *obj,
 		      struct i915_address_space *vm,
 		      const struct i915_ggtt_view *ggtt_view)
 {
 	struct i915_vma *vma;
+	int i;
 
 	if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
 		return ERR_PTR(-EINVAL);
@@ -3298,6 +3316,8 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
 	INIT_LIST_HEAD(&vma->vm_link);
 	INIT_LIST_HEAD(&vma->obj_link);
 	INIT_LIST_HEAD(&vma->exec_list);
+	for (i = 0; i < ARRAY_SIZE(vma->last_read); i++)
+		init_request_active(&vma->last_read[i], i915_vma_retire);
 	vma->vm = vm;
 	vma->obj = obj;
 	vma->is_ggtt = i915_is_ggtt(vm);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 4cabf891fd1d..d86b3e4777a7 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -36,6 +36,8 @@
 
 #include <linux/io-mapping.h>
 
+#include "i915_gem_request.h"
+
 struct drm_i915_file_private;
 
 typedef uint32_t gen6_pte_t;
@@ -179,10 +181,13 @@ struct i915_vma {
 	struct i915_address_space *vm;
 	void __iomem *iomap;
 
+	struct i915_gem_active last_read[I915_NUM_ENGINES];
+
 	/** Flags and address space this VMA is bound to */
 #define GLOBAL_BIND	(1<<0)
 #define LOCAL_BIND	(1<<1)
 	unsigned int bound : 4;
+	unsigned int active : I915_NUM_ENGINES;
 	bool is_ggtt : 1;
 
 	/**
@@ -222,6 +227,27 @@ struct i915_vma {
 #define DRM_I915_GEM_OBJECT_MAX_PIN_COUNT 0xf
 };
 
+static inline bool i915_vma_is_active(const struct i915_vma *vma)
+{
+	return vma->active;
+}
+
+static inline void i915_vma_set_active(struct i915_vma *vma, unsigned engine)
+{
+	vma->active |= 1 << engine;
+}
+
+static inline void i915_vma_unset_active(struct i915_vma *vma, unsigned engine)
+{
+	vma->active &= ~(1 << engine);
+}
+
+static inline bool i915_vma_has_active_engine(const struct i915_vma *vma,
+					      unsigned engine)
+{
+	return vma->active & (1 << engine);
+}
+
 struct i915_page_dma {
 	struct page *page;
 	union {
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 60/62] drm/i915: Release vma when the handle is closed
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (58 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 59/62] drm/i915: Track active vma requests Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 61/62] drm/i915: Mark the context and address space as closed Chris Wilson
                   ` (3 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

In order to prevent a leak of the vma on shared objects, we need to
hook into the object_close callback to destroy the vma on the object for
this file. However, if we destroyed that vma immediately we might cause
unexpected application stalls as we try to unbind a busy vma - hence we
defer the unbind until the vma is retired.

v2: Keep vma allocated until closed. This is useful for a later
optimisation, but it is required now in order to handle potential
recursion of i915_vma_unbind() by retiring itself.
v3: Comments are important.
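
For illustration, a minimal self-contained sketch of the deferred-unbind
lifecycle described above (toy types and plain asserts stand in for the
driver's i915_vma and GEM_BUG_ON; treat it as a model of the ordering only,
not the actual code, which follows in the diff):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct vma {
	bool closed;   /* handle gone; destroy on the last unbind */
	int active;    /* outstanding requests still using the vma */
	int pin_count; /* guards against recursive unbind */
};

static void vma_destroy(struct vma *vma)
{
	assert(vma->closed && !vma->active);
	free(vma);
}

static void vma_unbind(struct vma *vma)
{
	if (vma->pin_count)
		return; /* -EBUSY in the real driver */
	/* ... release the GTT node, mappings, etc ... */
	if (vma->closed)
		vma_destroy(vma);
}

/* Called when the last request using the vma is retired. */
static void vma_retire(struct vma *vma)
{
	if (--vma->active)
		return;
	if (vma->closed && !vma->pin_count)
		vma_unbind(vma); /* the deferred unbind */
}

/* Called from the object_close callback; never stalls the client. */
static void vma_close(struct vma *vma)
{
	vma->closed = true;
	if (!vma->active && !vma->pin_count)
		vma_unbind(vma); /* idle, so reclaim immediately */
	/* otherwise vma_retire() unbinds and destroys it later */
}

int main(void)
{
	struct vma *vma = calloc(1, sizeof(*vma));

	vma->active = 1; /* a request is still in flight */
	vma_close(vma);  /* returns at once, no stall */
	vma_retire(vma); /* request completes: unbound and freed */
	puts("closed vma reclaimed on retire");
	return 0;
}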

Testcase: igt/gem_ppggtt/flink-and-close-vma-leak
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c       |   1 +
 drivers/gpu/drm/i915/i915_drv.h       |   4 +-
 drivers/gpu/drm/i915/i915_gem.c       | 110 +++++++++++++++++++---------------
 drivers/gpu/drm/i915/i915_gem_evict.c |   8 +--
 drivers/gpu/drm/i915/i915_gem_gtt.c   |  25 ++++++++
 drivers/gpu/drm/i915/i915_gem_gtt.h   |   1 +
 6 files changed, 94 insertions(+), 55 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 4483f9e75aa5..652d9f89ef7a 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -2926,6 +2926,7 @@ static struct drm_driver driver = {
 	.postclose = i915_driver_postclose,
 	.set_busid = drm_pci_set_busid,
 
+	.gem_close_object = i915_gem_close_object,
 	.gem_free_object = i915_gem_free_object,
 	.gem_vm_ops = &i915_gem_vm_ops,
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 83c8dcc744fb..e494e692fef0 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2927,8 +2927,8 @@ struct drm_i915_gem_object *i915_gem_object_create(struct drm_device *dev,
 						  size_t size);
 struct drm_i915_gem_object *i915_gem_object_create_from_data(
 		struct drm_device *dev, const void *data, size_t size);
+void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file);
 void i915_gem_free_object(struct drm_gem_object *obj);
-void i915_gem_vma_destroy(struct i915_vma *vma);
 
 /* Flags used by pin/bind&friends. */
 #define PIN_MAPPABLE	(1<<0)
@@ -2961,6 +2961,8 @@ int __must_check i915_vma_unbind(struct i915_vma *vma);
  * _guarantee_ VMA in question is _not in use_ anywhere.
  */
 int __must_check __i915_vma_unbind_no_wait(struct i915_vma *vma);
+void i915_vma_close(struct i915_vma *vma);
+void i915_vma_destroy(struct i915_vma *vma);
 
 int i915_gem_object_unbind(struct drm_i915_gem_object *obj);
 int i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ef68a9183d7d..e7595ab02255 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1136,8 +1136,8 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
 	}
 
 	for_each_active(active_mask, idx) {
-		int ret = i915_gem_active_retire(&active[idx],
-						 &obj->base.dev->struct_mutex);
+		int ret = i915_gem_active_wait(&active[idx],
+					       &obj->base.dev->struct_mutex);
 		if (ret)
 			return ret;
 	}
@@ -2318,6 +2318,19 @@ out:
 	}
 }
 
+void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)
+{
+	struct drm_i915_gem_object *obj = to_intel_bo(gem);
+	struct drm_i915_file_private *fpriv = file->driver_priv;
+	struct i915_vma *vma, *vn;
+
+	mutex_lock(&obj->base.dev->struct_mutex);
+	list_for_each_entry_safe(vma, vn, &obj->vma_list, obj_link)
+		if (vma->vm->file == fpriv)
+			i915_vma_close(vma);
+	mutex_unlock(&obj->base.dev->struct_mutex);
+}
+
 /**
  * i915_gem_wait_ioctl - implements DRM_IOCTL_I915_GEM_WAIT
  * @DRM_IOCTL_ARGS: standard ioctl arguments
@@ -2514,28 +2527,46 @@ static void __i915_vma_iounmap(struct i915_vma *vma)
 static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 {
 	struct drm_i915_gem_object *obj = vma->obj;
+	unsigned long active;
 	int ret;
 
-	if (list_empty(&vma->obj_link))
-		return 0;
+	/* First wait upon any activity as retiring the request may
+	 * have side-effects such as unpinning or even unbinding this vma.
+	 */
+	active = vma->active;
+	if (active && wait) {
+		int idx;
+
+		/* When a closed VMA is retired, it is unbound - eek.
+		 * In order to prevent it from being recursively closed,
+		 * take a pin on the vma so that the second unbind is
+		 * aborted.
+		 */
+		vma->pin_count++;
 
-	if (!drm_mm_node_allocated(&vma->node)) {
-		i915_gem_vma_destroy(vma);
-		return 0;
+		for_each_active(active, idx) {
+			ret = i915_gem_active_retire(&vma->last_read[idx],
+						   &vma->vm->dev->struct_mutex);
+			if (ret)
+				break;
+		}
+
+		vma->pin_count--;
+		if (ret)
+			return ret;
+
+		GEM_BUG_ON(i915_vma_is_active(vma));
 	}
 
 	if (vma->pin_count)
 		return -EBUSY;
 
+	if (!drm_mm_node_allocated(&vma->node))
+		goto destroy;
+
 	GEM_BUG_ON(obj->bind_count == 0);
 	GEM_BUG_ON(obj->pages == NULL);
 
-	if (wait) {
-		ret = i915_gem_object_wait_rendering(obj, false);
-		if (ret)
-			return ret;
-	}
-
 	if (vma->is_ggtt && vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
 		i915_gem_object_finish_gtt(obj);
 
@@ -2564,7 +2595,6 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 	}
 
 	drm_mm_remove_node(&vma->node);
-	i915_gem_vma_destroy(vma);
 
 	/* Since the unbound list is global, only move to that list if
 	 * no more VMAs exist. */
@@ -2578,6 +2608,10 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 	 */
 	i915_gem_object_unpin_pages(obj);
 
+destroy:
+	if (unlikely(vma->closed))
+		i915_vma_destroy(vma);
+
 	return 0;
 }
 
@@ -2747,7 +2781,7 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 
 		if (offset & (alignment - 1) || offset + size > end) {
 			ret = -EINVAL;
-			goto err_free_vma;
+			goto err_vma;
 		}
 		vma->node.start = offset;
 		vma->node.size = size;
@@ -2759,7 +2793,7 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
 				ret = drm_mm_reserve_node(&vm->mm, &vma->node);
 		}
 		if (ret)
-			goto err_free_vma;
+			goto err_vma;
 	} else {
 		if (flags & PIN_HIGH) {
 			search_flag = DRM_MM_SEARCH_BELOW;
@@ -2784,7 +2818,7 @@ search_free:
 			if (ret == 0)
 				goto search_free;
 
-			goto err_free_vma;
+			goto err_vma;
 		}
 	}
 	if (WARN_ON(!i915_gem_valid_gtt_space(vma, obj->cache_level))) {
@@ -2805,8 +2839,7 @@ search_free:
 
 err_remove_node:
 	drm_mm_remove_node(&vma->node);
-err_free_vma:
-	i915_gem_vma_destroy(vma);
+err_vma:
 	vma = ERR_PTR(ret);
 err_unpin:
 	i915_gem_object_unpin_pages(obj);
@@ -3756,21 +3789,18 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
 
 	trace_i915_gem_object_destroy(obj);
 
+	/* All file-owned VMA should have been released by this point through
+	 * i915_gem_close_object(), or earlier by i915_gem_context_close().
+	 * However, the object may also be bound into the global GTT (e.g.
+	 * older GPUs without per-process support, or for direct access through
+	 * the GTT either for the user or for scanout). Those VMA still need to
+	 * be unbound now.
+	 */
 	list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link) {
-		int ret;
-
+		GEM_BUG_ON(!vma->is_ggtt);
+		GEM_BUG_ON(i915_vma_is_active(vma));
 		vma->pin_count = 0;
-		ret = i915_vma_unbind(vma);
-		if (WARN_ON(ret == -ERESTARTSYS)) {
-			bool was_interruptible;
-
-			was_interruptible = dev_priv->mm.interruptible;
-			dev_priv->mm.interruptible = false;
-
-			WARN_ON(i915_vma_unbind(vma));
-
-			dev_priv->mm.interruptible = was_interruptible;
-		}
+		i915_vma_close(vma);
 	}
 	GEM_BUG_ON(obj->bind_count);
 
@@ -3835,22 +3865,6 @@ struct i915_vma *i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
 	return NULL;
 }
 
-void i915_gem_vma_destroy(struct i915_vma *vma)
-{
-	WARN_ON(vma->node.allocated);
-
-	/* Keep the vma as a placeholder in the execbuffer reservation lists */
-	if (!list_empty(&vma->exec_list))
-		return;
-
-	if (!vma->is_ggtt)
-		i915_ppgtt_put(i915_vm_to_ppgtt(vma->vm));
-
-	list_del(&vma->obj_link);
-
-	kmem_cache_free(to_i915(vma->obj->base.dev)->vmas, vma);
-}
-
 static void
 i915_gem_stop_engines(struct drm_device *dev)
 {
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 5a02c32e9ae6..2a9adc802e85 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -213,8 +213,8 @@ found:
 				       struct i915_vma,
 				       exec_list);
 		if (drm_mm_scan_remove_block(&vma->node)) {
+			vma->pin_count++;
 			list_move(&vma->exec_list, &eviction_list);
-			i915_gem_object_get(vma->obj);
 			continue;
 		}
 		list_del_init(&vma->exec_list);
@@ -222,18 +222,14 @@ found:
 
 	/* Unbinding will emit any required flushes */
 	while (!list_empty(&eviction_list)) {
-		struct drm_i915_gem_object *obj;
-
 		vma = list_first_entry(&eviction_list,
 				       struct i915_vma,
 				       exec_list);
 
-		obj =  vma->obj;
 		list_del_init(&vma->exec_list);
+		vma->pin_count--;
 		if (ret == 0)
 			ret = i915_vma_unbind(vma);
-
-		i915_gem_object_put(obj);
 	}
 
 	return ret;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 4d3179e15b94..694d0c1f25cf 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3296,6 +3296,31 @@ i915_vma_retire(struct i915_gem_active *active,
 		return;
 
 	list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
+	if (unlikely(vma->closed && !vma->pin_count))
+		WARN_ON(i915_vma_unbind(vma));
+}
+
+void i915_vma_destroy(struct i915_vma *vma)
+{
+	GEM_BUG_ON(vma->node.allocated);
+	GEM_BUG_ON(i915_vma_is_active(vma));
+	GEM_BUG_ON(!vma->closed);
+
+	list_del(&vma->vm_link);
+	if (!vma->is_ggtt)
+		i915_ppgtt_put(i915_vm_to_ppgtt(vma->vm));
+
+	kmem_cache_free(to_i915(vma->obj->base.dev)->vmas, vma);
+}
+
+void i915_vma_close(struct i915_vma *vma)
+{
+	GEM_BUG_ON(vma->closed);
+	vma->closed = true;
+
+	list_del_init(&vma->obj_link);
+	if (!i915_vma_is_active(vma) && !vma->pin_count)
+		WARN_ON(i915_vma_unbind(vma));
 }
 
 static struct i915_vma *
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index d86b3e4777a7..47b646264e18 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -189,6 +189,7 @@ struct i915_vma {
 	unsigned int bound : 4;
 	unsigned int active : I915_NUM_ENGINES;
 	bool is_ggtt : 1;
+	bool closed : 1;
 
 	/**
 	 * Support different GGTT views into the same object.
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 61/62] drm/i915: Mark the context and address space as closed
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (59 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 60/62] drm/i915: Release vma when the handle is closed Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-03 16:37 ` [PATCH 62/62] Revert "drm/i915: Clean up associated VMAs on context destruction" Chris Wilson
                   ` (2 subsequent siblings)
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

When the user closes the context, mark it and the dependent address space
as closed. As we use an asynchronous destruct method, this has two purposes.
First, it allows us to flag the closed context and detect internal errors if
we try to create any new objects for it (as it is removed from the user's
namespace, these should be internal bugs only). Secondly, it allows
us to immediately reap stale vma.
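
To illustrate the first purpose, here is a minimal self-contained sketch
of the closed flag acting as a bug detector (toy types, illustrative only;
the driver's equivalent is the GEM_BUG_ON(vm->closed) added below, and the
reaping half is i915_ppgtt_close()):

#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct address_space {
	bool closed;
};

struct vma {
	struct address_space *vm;
};

static struct vma *vma_create(struct address_space *vm)
{
	struct vma *vma;

	assert(!vm->closed); /* GEM_BUG_ON(vm->closed) in the driver */
	vma = calloc(1, sizeof(*vma));
	if (vma)
		vma->vm = vm;
	return vma;
}

static void address_space_close(struct address_space *vm)
{
	assert(!vm->closed); /* closing twice is also a bug */
	vm->closed = true;
	/* ... walk the active/inactive/unbound lists, closing each vma ... */
}

int main(void)
{
	struct address_space vm = { .closed = false };
	struct vma *vma = vma_create(&vm); /* fine while the vm is open */

	address_space_close(&vm);
	/* a second vma_create(&vm) here would trip the assert, as intended */
	free(vma);
	return 0;
}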

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h         |  1 +
 drivers/gpu/drm/i915/i915_gem.c         | 15 ++++++------
 drivers/gpu/drm/i915/i915_gem_context.c | 43 ++++++++++++++++++++++++++++-----
 drivers/gpu/drm/i915/i915_gem_gtt.c     |  9 +++++--
 drivers/gpu/drm/i915/i915_gem_gtt.h     |  9 +++++++
 drivers/gpu/drm/i915/i915_gem_stolen.c  |  2 +-
 6 files changed, 63 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e494e692fef0..492e5e73c1ca 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -890,6 +890,7 @@ struct i915_gem_context {
 	struct list_head link;
 
 	u8 remap_slice;
+	bool closed:1;
 };
 
 enum fb_op_origin {
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e7595ab02255..3e12122f0f1f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2578,12 +2578,15 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 		__i915_vma_iounmap(vma);
 	}
 
-	trace_i915_vma_unbind(vma);
-
-	vma->vm->unbind_vma(vma);
+	if (likely(!vma->vm->closed)) {
+		trace_i915_vma_unbind(vma);
+		vma->vm->unbind_vma(vma);
+	}
 	vma->bound = 0;
 
-	list_del_init(&vma->vm_link);
+	drm_mm_remove_node(&vma->node);
+	list_move_tail(&vma->vm_link, &vma->vm->unbound_list);
+
 	if (vma->is_ggtt) {
 		if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
 			obj->map_and_fenceable = false;
@@ -2594,8 +2597,6 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 		vma->ggtt_view.pages = NULL;
 	}
 
-	drm_mm_remove_node(&vma->node);
-
 	/* Since the unbound list is global, only move to that list if
 	 * no more VMAs exist. */
 	if (--obj->bind_count == 0)
@@ -2832,7 +2833,7 @@ search_free:
 		goto err_remove_node;
 
 	list_move_tail(&obj->global_list, &dev_priv->mm.bound_list);
-	list_add_tail(&vma->vm_link, &vm->inactive_list);
+	list_move_tail(&vma->vm_link, &vm->inactive_list);
 	obj->bind_count++;
 
 	return vma;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index cace85998204..f04073469853 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -156,6 +156,7 @@ void i915_gem_context_free(struct kref *ctx_ref)
 
 	lockdep_assert_held(&ctx->i915->dev->struct_mutex);
 	trace_i915_context_free(ctx);
+	GEM_BUG_ON(!ctx->closed);
 
 	/*
 	 * This context is going away and we need to remove all VMAs still
@@ -224,6 +225,37 @@ i915_gem_alloc_context_obj(struct drm_device *dev, size_t size)
 	return obj;
 }
 
+static void i915_ppgtt_close(struct i915_address_space *vm)
+{
+	struct list_head *phases[] = {
+		&vm->active_list,
+		&vm->inactive_list,
+		&vm->unbound_list,
+		NULL,
+	}, **phase;
+
+	GEM_BUG_ON(vm->closed);
+	vm->closed = true;
+
+	for (phase = phases; *phase; phase++) {
+		struct i915_vma *vma, *vn;
+
+		list_for_each_entry_safe(vma, vn, *phase, vm_link)
+			if (!vma->closed)
+				i915_vma_close(vma);
+	}
+}
+
+static void context_close(struct i915_gem_context *ctx)
+{
+	GEM_BUG_ON(ctx->closed);
+	ctx->closed = true;
+	if (ctx->ppgtt)
+		i915_ppgtt_close(&ctx->ppgtt->base);
+	ctx->file_priv = ERR_PTR(-EBADF);
+	i915_gem_context_put(ctx);
+}
+
 static int assign_hw_id(struct drm_i915_private *dev_priv, unsigned *out)
 {
 	int ret;
@@ -301,7 +333,7 @@ __create_hw_context(struct drm_device *dev,
 	return ctx;
 
 err_out:
-	i915_gem_context_put(ctx);
+	context_close(ctx);
 	return ERR_PTR(ret);
 }
 
@@ -330,7 +362,7 @@ i915_gem_create_context(struct drm_device *dev,
 			DRM_DEBUG_DRIVER("PPGTT setup failed (%ld)\n",
 					 PTR_ERR(ppgtt));
 			idr_remove(&file_priv->context_idr, ctx->user_handle);
-			i915_gem_context_put(ctx);
+			context_close(ctx);
 			return ERR_CAST(ppgtt);
 		}
 
@@ -467,7 +499,7 @@ void i915_gem_context_fini(struct drm_device *dev)
 
 	lockdep_assert_held(&dev->struct_mutex);
 
-	i915_gem_context_put(dctx);
+	context_close(dctx);
 	dev_priv->kernel_context = NULL;
 
 	ida_destroy(&dev_priv->context_hw_ida);
@@ -477,8 +509,7 @@ static int context_idr_cleanup(int id, void *p, void *data)
 {
 	struct i915_gem_context *ctx = p;
 
-	ctx->file_priv = ERR_PTR(-EBADF);
-	i915_gem_context_put(ctx);
+	context_close(ctx);
 	return 0;
 }
 
@@ -946,7 +977,7 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
 	}
 
 	idr_remove(&file_priv->context_idr, ctx->user_handle);
-	i915_gem_context_put(ctx);
+	context_close(ctx);
 	mutex_unlock(&dev->struct_mutex);
 
 	DRM_DEBUG_DRIVER("HW context %d destroyed\n", args->ctx_id);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 694d0c1f25cf..9db542f761f7 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2142,6 +2142,7 @@ static void i915_address_space_init(struct i915_address_space *vm,
 	vm->dev = dev_priv->dev;
 	INIT_LIST_HEAD(&vm->active_list);
 	INIT_LIST_HEAD(&vm->inactive_list);
+	INIT_LIST_HEAD(&vm->unbound_list);
 	list_add_tail(&vm->global_link, &dev_priv->vm_list);
 }
 
@@ -2234,9 +2235,10 @@ void  i915_ppgtt_release(struct kref *kref)
 
 	trace_i915_ppgtt_release(&ppgtt->base);
 
-	/* vmas should already be unbound */
+	/* vmas should already be unbound and destroyed */
 	WARN_ON(!list_empty(&ppgtt->base.active_list));
 	WARN_ON(!list_empty(&ppgtt->base.inactive_list));
+	WARN_ON(!list_empty(&ppgtt->base.unbound_list));
 
 	list_del(&ppgtt->base.global_link);
 	drm_mm_takedown(&ppgtt->base.mm);
@@ -3331,6 +3333,8 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
 	struct i915_vma *vma;
 	int i;
 
+	GEM_BUG_ON(vm->closed);
+
 	if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
 		return ERR_PTR(-EINVAL);
 
@@ -3338,11 +3342,11 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
 	if (vma == NULL)
 		return ERR_PTR(-ENOMEM);
 
-	INIT_LIST_HEAD(&vma->vm_link);
 	INIT_LIST_HEAD(&vma->obj_link);
 	INIT_LIST_HEAD(&vma->exec_list);
 	for (i = 0; i < ARRAY_SIZE(vma->last_read); i++)
 		init_request_active(&vma->last_read[i], i915_vma_retire);
+	list_add(&vma->vm_link, &vm->unbound_list);
 	vma->vm = vm;
 	vma->obj = obj;
 	vma->is_ggtt = i915_is_ggtt(vm);
@@ -3383,6 +3387,7 @@ i915_gem_obj_lookup_or_create_ggtt_vma(struct drm_i915_gem_object *obj,
 	if (!vma)
 		vma = __i915_gem_vma_create(obj, &ggtt->base, view);
 
+	GEM_BUG_ON(vma->closed);
 	return vma;
 
 }
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 47b646264e18..e4657bfaea95 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -312,6 +312,8 @@ struct i915_address_space {
 	u64 start;		/* Start offset always 0 for dri2 */
 	u64 total;		/* size addr space maps (ex. 2GB for ggtt) */
 
+	bool closed;
+
 	struct i915_page_scratch *scratch_page;
 	struct i915_page_table *scratch_pt;
 	struct i915_page_directory *scratch_pd;
@@ -340,6 +342,13 @@ struct i915_address_space {
 	 */
 	struct list_head inactive_list;
 
+	/**
+	 * List of vma that have been unbound.
+	 *
+	 * A reference is not held on the buffer while on this list.
+	 */
+	struct list_head unbound_list;
+
 	/* FIXME: Need a more generic return type */
 	gen6_pte_t (*pte_encode)(dma_addr_t addr,
 				 enum i915_cache_level level,
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 21584e86908c..a881c243fca2 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -705,7 +705,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 
 	vma->bound |= GLOBAL_BIND;
 	__i915_vma_set_map_and_fenceable(vma);
-	list_add_tail(&vma->vm_link, &ggtt->base.inactive_list);
+	list_move_tail(&vma->vm_link, &ggtt->base.inactive_list);
 	obj->bind_count++;
 
 	list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* [PATCH 62/62] Revert "drm/i915: Clean up associated VMAs on context destruction"
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (60 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 61/62] drm/i915: Mark the context and address space as closed Chris Wilson
@ 2016-06-03 16:37 ` Chris Wilson
  2016-06-05  5:24 ` ✗ Ro.CI.BAT: failure for series starting with [01/62] drm/i915: Only start retire worker when idle Patchwork
  2016-06-08  9:30 ` The vma leak fix from yonder Daniel Vetter
  63 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-03 16:37 UTC (permalink / raw)
  To: intel-gfx

This reverts commit e9f24d5fb7cf3628b195b18ff3ac4e37937ceeae.

The patch was only a stop-gap measure that fixed half the problem - the
leak of the fbcon when restarting X. A complete solution required
releasing the VMA when the object itself was closed rather than relying on
file/process exit. The previous patches add the VMA tracking necessary
to close them along with the object, context or file, and so the time
has come to remove the partial fix.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_drv.h         |  5 -----
 drivers/gpu/drm/i915/i915_gem.c         | 14 ++------------
 drivers/gpu/drm/i915/i915_gem_context.c | 22 ----------------------
 3 files changed, 2 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 492e5e73c1ca..09999ebf1a70 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2957,11 +2957,6 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
 		  u32 flags);
 void __i915_vma_set_map_and_fenceable(struct i915_vma *vma);
 int __must_check i915_vma_unbind(struct i915_vma *vma);
-/*
- * BEWARE: Do not use the function below unless you can _absolutely_
- * _guarantee_ VMA in question is _not in use_ anywhere.
- */
-int __must_check __i915_vma_unbind_no_wait(struct i915_vma *vma);
 void i915_vma_close(struct i915_vma *vma);
 void i915_vma_destroy(struct i915_vma *vma);
 
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3e12122f0f1f..e6c46f2d08e7 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2524,7 +2524,7 @@ static void __i915_vma_iounmap(struct i915_vma *vma)
 	vma->iomap = NULL;
 }
 
-static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
+int i915_vma_unbind(struct i915_vma *vma)
 {
 	struct drm_i915_gem_object *obj = vma->obj;
 	unsigned long active;
@@ -2534,7 +2534,7 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 	 * have side-effects such as unpinning or even unbinding this vma.
 	 */
 	active = vma->active;
-	if (active && wait) {
+	if (active) {
 		int idx;
 
 		/* When a closed VMA is retired, it is unbound - eek.
@@ -2616,16 +2616,6 @@ destroy:
 	return 0;
 }
 
-int i915_vma_unbind(struct i915_vma *vma)
-{
-	return __i915_vma_unbind(vma, true);
-}
-
-int __i915_vma_unbind_no_wait(struct i915_vma *vma)
-{
-	return __i915_vma_unbind(vma, false);
-}
-
 int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv)
 {
 	struct intel_engine_cs *engine;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index f04073469853..5ed91406d4e9 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -134,21 +134,6 @@ static int get_context_size(struct drm_i915_private *dev_priv)
 	return ret;
 }
 
-static void i915_gem_context_clean(struct i915_gem_context *ctx)
-{
-	struct i915_hw_ppgtt *ppgtt = ctx->ppgtt;
-	struct i915_vma *vma, *next;
-
-	if (!ppgtt)
-		return;
-
-	list_for_each_entry_safe(vma, next, &ppgtt->base.inactive_list,
-				 vm_link) {
-		if (WARN_ON(__i915_vma_unbind_no_wait(vma)))
-			break;
-	}
-}
-
 void i915_gem_context_free(struct kref *ctx_ref)
 {
 	struct i915_gem_context *ctx = container_of(ctx_ref, typeof(*ctx), ref);
@@ -158,13 +143,6 @@ void i915_gem_context_free(struct kref *ctx_ref)
 	trace_i915_context_free(ctx);
 	GEM_BUG_ON(!ctx->closed);
 
-	/*
-	 * This context is going away and we need to remove all VMAs still
-	 * around. This is to handle imported shared objects for which
-	 * destructor did not run when their handles were closed.
-	 */
-	i915_gem_context_clean(ctx);
-
 	i915_ppgtt_put(ctx->ppgtt);
 
 	for (i = 0; i < I915_NUM_ENGINES; i++) {
-- 
2.8.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* ✗ Ro.CI.BAT: failure for series starting with [01/62] drm/i915: Only start retire worker when idle
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (61 preceding siblings ...)
  2016-06-03 16:37 ` [PATCH 62/62] Revert "drm/i915: Clean up associated VMAs on context destruction" Chris Wilson
@ 2016-06-05  5:24 ` Patchwork
  2016-06-08  9:30 ` The vma leak fix from yonder Daniel Vetter
  63 siblings, 0 replies; 87+ messages in thread
From: Patchwork @ 2016-06-05  5:24 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/62] drm/i915: Only start retire worker when idle
URL   : https://patchwork.freedesktop.org/series/8248/
State : failure

== Summary ==

Applying: drm/i915: Only start retire worker when idle
fatal: sha1 information is lacking or useless (drivers/gpu/drm/i915/i915_debugfs.c).
error: could not build fake ancestor
Patch failed at 0001 drm/i915: Only start retire worker when idle
The copy of the patch that failed is found in: .git/rebase-apply/patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 15/62] drm/i915: Rename i915_gem_context_reference/unreference()
  2016-06-03 16:36 ` [PATCH 15/62] drm/i915: Rename i915_gem_context_reference/unreference() Chris Wilson
@ 2016-06-06 12:12   ` Joonas Lahtinen
  0 siblings, 0 replies; 87+ messages in thread
From: Joonas Lahtinen @ 2016-06-06 12:12 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On Fri, 2016-06-03 at 17:36 +0100, Chris Wilson wrote:
> As these are wrappers around kref_get/kref_put() it is preferable to
> follow the naming convention and use the same verb get/put in our
> wrapper names for manipulating a reference to the context.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
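
For what it's worth, the convention generalises nicely; a toy,
self-contained sketch of the get/put pattern (illustrative refcount only,
not the kernel's struct kref):

#include <assert.h>
#include <stdlib.h>

struct ctx {
	int ref;
};

static struct ctx *ctx_get(struct ctx *c)
{
	c->ref++;
	return c; /* returning the pointer folds get into an assignment */
}

static void ctx_put(struct ctx *c)
{
	assert(c->ref > 0);
	if (--c->ref == 0)
		free(c);
}

int main(void)
{
	struct ctx *c = calloc(1, sizeof(*c));
	struct ctx *last;

	c->ref = 1; /* creation reference */

	/* before: reference(c); last = c;  after: one expression */
	last = ctx_get(c);

	ctx_put(last); /* drop the extra reference */
	ctx_put(c);    /* drop the creation reference; frees c */
	return 0;
}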

> ---
>  drivers/gpu/drm/i915/i915_drv.h            |  6 ++++--
>  drivers/gpu/drm/i915/i915_gem_context.c    | 22 ++++++++++------------
>  drivers/gpu/drm/i915/i915_gem_execbuffer.c |  6 +++---
>  drivers/gpu/drm/i915/i915_gem_request.c    |  7 +++----
>  drivers/gpu/drm/i915/intel_lrc.c           |  4 ++--
>  drivers/gpu/drm/i915/intel_ringbuffer.c    |  4 ++--
>  6 files changed, 24 insertions(+), 25 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 939cd45043c7..48d89b181246 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -3247,12 +3247,14 @@ i915_gem_context_lookup(struct drm_i915_file_private *file_priv, u32 id)
>  	return ctx;
>  }
>  
> -static inline void i915_gem_context_reference(struct i915_gem_context *ctx)
> +static inline struct i915_gem_context *
> +i915_gem_context_get(struct i915_gem_context *ctx)
>  {
>  	kref_get(&ctx->ref);
> +	return ctx;
>  }
>  
> -static inline void i915_gem_context_unreference(struct i915_gem_context *ctx)
> +static inline void i915_gem_context_put(struct i915_gem_context *ctx)
>  {
>  	lockdep_assert_held(&ctx->i915->drm.struct_mutex);
>  	kref_put(&ctx->ref, i915_gem_context_free);
> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> index d01b3893eac0..b62862e31642 100644
> --- a/drivers/gpu/drm/i915/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> @@ -301,7 +301,7 @@ __create_hw_context(struct drm_device *dev,
>  	return ctx;
>  
>  err_out:
> -	i915_gem_context_unreference(ctx);
> +	i915_gem_context_put(ctx);
>  	return ERR_PTR(ret);
>  }
>  
> @@ -329,7 +329,7 @@ i915_gem_create_context(struct drm_device *dev,
>  			DRM_DEBUG_DRIVER("PPGTT setup failed (%ld)\n",
>  					 PTR_ERR(ppgtt));
>  			idr_remove(&file_priv->context_idr, ctx->user_handle);
> -			i915_gem_context_unreference(ctx);
> +			i915_gem_context_put(ctx);
>  			return ERR_CAST(ppgtt);
>  		}
>  
> @@ -352,7 +352,7 @@ static void i915_gem_context_unpin(struct i915_gem_context *ctx,
>  		if (ce->state)
>  			i915_gem_object_ggtt_unpin(ce->state);
>  
> -		i915_gem_context_unreference(ctx);
> +		i915_gem_context_put(ctx);
>  	}
>  }
>  
> @@ -466,7 +466,7 @@ void i915_gem_context_fini(struct drm_device *dev)
>  
>  	lockdep_assert_held(&dev->struct_mutex);
>  
> -	i915_gem_context_unreference(dctx);
> +	i915_gem_context_put(dctx);
>  	dev_priv->kernel_context = NULL;
>  
>  	ida_destroy(&dev_priv->context_hw_ida);
> @@ -477,7 +477,7 @@ static int context_idr_cleanup(int id, void *p, void *data)
>  	struct i915_gem_context *ctx = p;
>  
>  	ctx->file_priv = ERR_PTR(-EBADF);
> -	i915_gem_context_unreference(ctx);
> +	i915_gem_context_put(ctx);
>  	return 0;
>  }
>  
> @@ -789,10 +789,9 @@ static int do_rcs_switch(struct drm_i915_gem_request *req)
>  
>  		/* obj is kept alive until the next request by its active ref */
>  		i915_gem_object_ggtt_unpin(from->engine[RCS].state);
> -		i915_gem_context_unreference(from);
> +		i915_gem_context_put(from);
>  	}
> -	i915_gem_context_reference(to);
> -	engine->last_context = to;
> +	engine->last_context = i915_gem_context_get(to);
>  
>  	/* GEN8 does *not* require an explicit reload if the PDPs have been
>  	 * setup, and we do not wish to move them.
> @@ -876,10 +875,9 @@ int i915_switch_context(struct drm_i915_gem_request *req)
>  		}
>  
>  		if (to != engine->last_context) {
> -			i915_gem_context_reference(to);
>  			if (engine->last_context)
> -				i915_gem_context_unreference(engine->last_context);
> -			engine->last_context = to;
> +				i915_gem_context_put(engine->last_context);
> +			engine->last_context = i915_gem_context_get(to);
>  		}
>  
>  		return 0;
> @@ -947,7 +945,7 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
>  	}
>  
>  	idr_remove(&file_priv->context_idr, ctx->user_handle);
> -	i915_gem_context_unreference(ctx);
> +	i915_gem_context_put(ctx);
>  	mutex_unlock(&dev->struct_mutex);
>  
>  	DRM_DEBUG_DRIVER("HW context %d destroyed\n", args->ctx_id);
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index d3297dab0298..7f441e74c903 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -1496,7 +1496,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>  		goto pre_mutex_err;
>  	}
>  
> -	i915_gem_context_reference(ctx);
> +	i915_gem_context_get(ctx);
>  
>  	if (ctx->ppgtt)
>  		vm = &ctx->ppgtt->base;
> @@ -1507,7 +1507,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>  
>  	eb = eb_create(args);
>  	if (eb == NULL) {
> -		i915_gem_context_unreference(ctx);
> +		i915_gem_context_put(ctx);
>  		mutex_unlock(&dev->struct_mutex);
>  		ret = -ENOMEM;
>  		goto pre_mutex_err;
> @@ -1651,7 +1651,7 @@ err_batch_unpin:
>  
>  err:
>  	/* the request owns the ref now */
> -	i915_gem_context_unreference(ctx);
> +	i915_gem_context_put(ctx);
>  	eb_destroy(eb);
>  
>  	mutex_unlock(&dev->struct_mutex);
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 2ecaf9fa936a..987a43f1aac8 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -243,8 +243,7 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
>  	req->i915 = dev_priv;
>  	req->engine = engine;
>  	req->reset_counter = reset_counter;
> -	req->ctx = ctx;
> -	i915_gem_context_reference(ctx);
> +	req->ctx = i915_gem_context_get(ctx);
>  
>  	/*
>  	 * Reserve space in the ring buffer for all the commands required to
> @@ -266,7 +265,7 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
>  	return 0;
>  
>  err_ctx:
> -	i915_gem_context_unreference(ctx);
> +	i915_gem_context_put(ctx);
>  err:
>  	kmem_cache_free(dev_priv->requests, req);
>  	return ret;
> @@ -364,7 +363,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
>  					       request->engine);
>  	}
>  
> -	i915_gem_context_unreference(request->ctx);
> +	i915_gem_context_put(request->ctx);
>  	i915_gem_request_put(request);
>  }
>  
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index a25177016fb3..d55aa9ca2877 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -961,7 +961,6 @@ static int intel_lr_context_pin(struct i915_gem_context *ctx,
>  	if (ret)
>  		goto unpin_map;
>  
> -	i915_gem_context_reference(ctx);
>  	ce->lrc_vma = i915_gem_obj_to_ggtt(ce->state);
>  	intel_lr_context_descriptor_update(ctx, engine);
>  
> @@ -973,6 +972,7 @@ static int intel_lr_context_pin(struct i915_gem_context *ctx,
>  	if (i915.enable_guc_submission)
>  		I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);
>  
> +	i915_gem_context_get(ctx);
>  	return 0;
>  
>  unpin_map:
> @@ -1004,7 +1004,7 @@ void intel_lr_context_unpin(struct i915_gem_context *ctx,
>  	ce->lrc_desc = 0;
>  	ce->lrc_reg_state = NULL;
>  
> -	i915_gem_context_unreference(ctx);
> +	i915_gem_context_put(ctx);
>  }
>  
>  static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index c3d6345aa2c1..e6a2e4973a01 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -2058,7 +2058,7 @@ static int intel_ring_context_pin(struct i915_gem_context *ctx,
>  	if (ctx == ctx->i915->kernel_context)
>  		ce->initialised = true;
>  
> -	i915_gem_context_reference(ctx);
> +	i915_gem_context_get(ctx);
>  	return 0;
>  
>  error:
> @@ -2079,7 +2079,7 @@ static void intel_ring_context_unpin(struct i915_gem_context *ctx,
>  	if (ce->state)
>  		i915_gem_object_ggtt_unpin(ce->state);
>  
> -	i915_gem_context_unreference(ctx);
> +	i915_gem_context_put(ctx);
>  }
>  
>  static int intel_init_ring_buffer(struct drm_device *dev,
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 26/62] drm/i915: Rename request->ring to request->engine
  2016-06-03 16:36 ` [PATCH 26/62] drm/i915: Rename request->ring to request->engine Chris Wilson
@ 2016-06-06 13:42   ` Tvrtko Ursulin
  0 siblings, 0 replies; 87+ messages in thread
From: Tvrtko Ursulin @ 2016-06-06 13:42 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 03/06/16 17:36, Chris Wilson wrote:
> In order to disambiguate between the pointer to the intel_engine_cs
> (called ring) and the intel_ringbuffer (called ringbuf), rename
> s/ring/engine/.

This patch looks like residual rebase noise, so I think you should just 
drop it.

Regards,

Tvrtko

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/i915_debugfs.c          |  3 +--
>   drivers/gpu/drm/i915/i915_gem.c              |  6 ++----
>   drivers/gpu/drm/i915/i915_gem_context.c      |  6 ++----
>   drivers/gpu/drm/i915/i915_gem_gtt.c          |  5 ++---
>   drivers/gpu/drm/i915/i915_gem_render_state.c | 12 ++++++------
>   drivers/gpu/drm/i915/i915_gem_request.c      |  6 +-----
>   drivers/gpu/drm/i915/i915_gpu_error.c        |  3 +--
>   drivers/gpu/drm/i915/i915_guc_submission.c   |  4 ++--
>   drivers/gpu/drm/i915/intel_lrc.c             |  6 +++---
>   9 files changed, 20 insertions(+), 31 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index c1f8b5126d16..34e41ae2943e 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -193,8 +193,7 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
>   		seq_printf(m, " (%s mappable)", s);
>   	}
>   	if (obj->last_write_req != NULL)
> -		seq_printf(m, " (%s)",
> -			   i915_gem_request_get_engine(obj->last_write_req)->name);
> +		seq_printf(m, " (%s)", obj->last_write_req->engine->name);
>   	if (obj->frontbuffer_bits)
>   		seq_printf(m, " (frontbuffer: 0x%03x)", obj->frontbuffer_bits);
>   }
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 22c8361748d6..8edd79ad08b4 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2101,9 +2101,7 @@ void i915_vma_move_to_active(struct i915_vma *vma,
>   			     struct drm_i915_gem_request *req)
>   {
>   	struct drm_i915_gem_object *obj = vma->obj;
> -	struct intel_engine_cs *engine;
> -
> -	engine = i915_gem_request_get_engine(req);
> +	struct intel_engine_cs *engine = req->engine;
>
>   	/* Add a reference if we're newly entering the active list. */
>   	if (obj->active == 0)
> @@ -2561,7 +2559,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
>   	struct intel_engine_cs *from;
>   	int ret;
>
> -	from = i915_gem_request_get_engine(from_req);
> +	from = from_req->engine;
>   	if (to == from)
>   		return 0;
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> index 41e32426d174..899731f9a2c4 100644
> --- a/drivers/gpu/drm/i915/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> @@ -555,8 +555,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
>   		if (num_rings) {
>   			struct intel_engine_cs *signaller;
>
> -			intel_ring_emit(ring,
> -					MI_LOAD_REGISTER_IMM(num_rings));
> +			intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
>   			for_each_engine(signaller, dev_priv) {
>   				if (signaller == req->engine)
>   					continue;
> @@ -585,8 +584,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
>   			struct intel_engine_cs *signaller;
>   			i915_reg_t last_reg = {}; /* keep gcc quiet */
>
> -			intel_ring_emit(ring,
> -					MI_LOAD_REGISTER_IMM(num_rings));
> +			intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
>   			for_each_engine(signaller, dev_priv) {
>   				if (signaller == req->engine)
>   					continue;
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> index f735d1ec189a..4b4e3de58ad9 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> @@ -1689,7 +1689,7 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
>   			  struct drm_i915_gem_request *req)
>   {
>   	struct intel_engine_cs *engine = req->engine;
> -	struct drm_i915_private *dev_priv = to_i915(ppgtt->base.dev);
> +	struct drm_i915_private *dev_priv = req->i915;
>
>   	I915_WRITE(RING_PP_DIR_DCLV(engine), PP_DIR_DCLV_2G);
>   	I915_WRITE(RING_PP_DIR_BASE(engine), get_pd_offset(ppgtt));
> @@ -1737,8 +1737,7 @@ static int gen6_mm_switch(struct i915_hw_ppgtt *ppgtt,
>   			  struct drm_i915_gem_request *req)
>   {
>   	struct intel_engine_cs *engine = req->engine;
> -	struct drm_device *dev = ppgtt->base.dev;
> -	struct drm_i915_private *dev_priv = dev->dev_private;
> +	struct drm_i915_private *dev_priv = req->i915;
>
>
>   	I915_WRITE(RING_PP_DIR_DCLV(engine), PP_DIR_DCLV_2G);
> diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
> index 99eff898b4cb..41eb9a91bfee 100644
> --- a/drivers/gpu/drm/i915/i915_gem_render_state.c
> +++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
> @@ -207,17 +207,17 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
>   		return 0;
>
>   	ret = req->engine->dispatch_execbuffer(req, so.ggtt_offset,
> -					     so.rodata->batch_items * 4,
> -					     I915_DISPATCH_SECURE);
> +					       so.rodata->batch_items * 4,
> +					       I915_DISPATCH_SECURE);
>   	if (ret)
>   		goto out;
>
>   	if (so.aux_batch_size > 8) {
>   		ret = req->engine->dispatch_execbuffer(req,
> -						     (so.ggtt_offset +
> -						      so.aux_batch_offset),
> -						     so.aux_batch_size,
> -						     I915_DISPATCH_SECURE);
> +						       (so.ggtt_offset +
> +							so.aux_batch_offset),
> +						       so.aux_batch_size,
> +						       I915_DISPATCH_SECURE);
>   		if (ret)
>   			goto out;
>   	}
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index ba745f0740d0..059ba88e182e 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -299,7 +299,6 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
>   int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
>   				   struct drm_file *file)
>   {
> -	struct drm_i915_private *dev_private;
>   	struct drm_i915_file_private *file_priv;
>
>   	WARN_ON(!req || !file || req->file_priv);
> @@ -310,7 +309,6 @@ int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
>   	if (req->file_priv)
>   		return -EINVAL;
>
> -	dev_private = req->i915;
>   	file_priv = file->driver_priv;
>
>   	spin_lock(&file_priv->mm.lock);
> @@ -417,7 +415,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>   			bool flush_caches)
>   {
>   	struct intel_engine_cs *engine;
> -	struct drm_i915_private *dev_priv;
>   	struct intel_ringbuffer *ringbuf;
>   	u32 request_start;
>   	u32 reserved_tail;
> @@ -427,7 +424,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>   		return;
>
>   	engine = request->engine;
> -	dev_priv = request->i915;
>   	ringbuf = request->ringbuf;
>
>   	/*
> @@ -502,7 +498,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>   		  "for adding the request (%d bytes)\n",
>   		  reserved_tail, ret);
>
> -	i915_gem_mark_busy(dev_priv, engine);
> +	i915_gem_mark_busy(request->i915, engine);
>   }
>
>   static unsigned long local_clock_us(unsigned *cpu)
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index a8082b8a9797..d1667aa640ef 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -762,8 +762,7 @@ static void capture_bo(struct drm_i915_error_buffer *err,
>   	err->dirty = obj->dirty;
>   	err->purgeable = obj->madv != I915_MADV_WILLNEED;
>   	err->userptr = obj->userptr.mm != NULL;
> -	err->ring = obj->last_write_req ?
> -			i915_gem_request_get_engine(obj->last_write_req)->id : -1;
> +	err->ring = obj->last_write_req ? obj->last_write_req->engine->id : -1;
>   	err->cache_level = obj->cache_level;
>   }
>
> diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
> index 4cec580784ea..337b8f60989c 100644
> --- a/drivers/gpu/drm/i915/i915_guc_submission.c
> +++ b/drivers/gpu/drm/i915/i915_guc_submission.c
> @@ -534,8 +534,8 @@ static void guc_add_workqueue_item(struct i915_guc_client *gc,
>   			WQ_NO_WCFLUSH_WAIT;
>
>   	/* The GuC wants only the low-order word of the context descriptor */
> -	wqi->context_desc = (u32)intel_lr_context_descriptor(rq->ctx,
> -							     rq->engine);
> +	wqi->context_desc =
> +		(u32)intel_lr_context_descriptor(rq->ctx, rq->engine);
>
>   	wqi->ring_tail = tail << WQ_RING_TAIL_SHIFT;
>   	wqi->fence_id = rq->fence.seqno;
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 3076b63f2298..a1820d531e49 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1776,13 +1776,13 @@ static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)
>   		return 0;
>
>   	ret = req->engine->emit_bb_start(req, so.ggtt_offset,
> -				       I915_DISPATCH_SECURE);
> +					 I915_DISPATCH_SECURE);
>   	if (ret)
>   		goto out;
>
>   	ret = req->engine->emit_bb_start(req,
> -				       (so.ggtt_offset + so.aux_batch_offset),
> -				       I915_DISPATCH_SECURE);
> +					 (so.ggtt_offset + so.aux_batch_offset),
> +					 I915_DISPATCH_SECURE);
>   	if (ret)
>   		goto out;
>
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 27/62] drm/i915: Rename request->ringbuf to request->ring
  2016-06-03 16:36 ` [PATCH 27/62] drm/i915: Rename request->ringbuf to request->ring Chris Wilson
@ 2016-06-06 13:44   ` Tvrtko Ursulin
  2016-06-08  9:18     ` Daniel Vetter
  0 siblings, 1 reply; 87+ messages in thread
From: Tvrtko Ursulin @ 2016-06-06 13:44 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 03/06/16 17:36, Chris Wilson wrote:
> Now that we have disambiguated ring and engine, we can use the clearer
> and more consistent name for the intel_ringbuffer pointer in the
> request.

This one needs all the stakeholders to agree on the rename. As 
before, I am not convinced it is better or worth it.

Regards,

Tvrtko


> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/i915_gem_context.c    |  4 +-
>   drivers/gpu/drm/i915/i915_gem_execbuffer.c |  4 +-
>   drivers/gpu/drm/i915/i915_gem_gtt.c        |  6 +-
>   drivers/gpu/drm/i915/i915_gem_request.c    | 16 +++---
>   drivers/gpu/drm/i915/i915_gem_request.h    |  3 +-
>   drivers/gpu/drm/i915/i915_gpu_error.c      | 20 +++----
>   drivers/gpu/drm/i915/intel_display.c       | 10 ++--
>   drivers/gpu/drm/i915/intel_lrc.c           | 57 +++++++++---------
>   drivers/gpu/drm/i915/intel_mocs.c          | 36 ++++++------
>   drivers/gpu/drm/i915/intel_overlay.c       |  8 +--
>   drivers/gpu/drm/i915/intel_ringbuffer.c    | 92 +++++++++++++++---------------
>   11 files changed, 126 insertions(+), 130 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> index 899731f9a2c4..a7911f39f416 100644
> --- a/drivers/gpu/drm/i915/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> @@ -514,7 +514,7 @@ static inline int
>   mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
>   {
>   	struct drm_i915_private *dev_priv = req->i915;
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	u32 flags = hw_flags | MI_MM_SPACE_GTT;
>   	const int num_rings =
>   		/* Use an extended w/a on ivb+ if signalling from other rings */
> @@ -614,7 +614,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
>   static int remap_l3(struct drm_i915_gem_request *req, int slice)
>   {
>   	u32 *remap_info = req->i915->l3_parity.remap_info[slice];
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int i, ret;
>
>   	if (!remap_info)
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index 99663e8429b3..246bd70c0c9f 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -1140,7 +1140,7 @@ i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
>   static int
>   i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret, i;
>
>   	if (!IS_GEN7(req->i915) || req->engine->id != RCS) {
> @@ -1270,7 +1270,7 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
>
>   	if (params->engine->id == RCS &&
>   	    instp_mode != dev_priv->relative_constants_mode) {
> -		struct intel_ringbuffer *ring = params->request->ringbuf;
> +		struct intel_ringbuffer *ring = params->request->ring;
>
>   		ret = intel_ring_begin(params->request, 4);
>   		if (ret)
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> index 4b4e3de58ad9..b0a644cede20 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> @@ -669,7 +669,7 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
>   			  unsigned entry,
>   			  dma_addr_t addr)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	BUG_ON(entry >= 4);
> @@ -1660,7 +1660,7 @@ static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
>   static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
>   			 struct drm_i915_gem_request *req)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	/* NB: TLBs must be flushed and invalidated before a switch */
> @@ -1699,7 +1699,7 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
>   static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
>   			  struct drm_i915_gem_request *req)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	/* NB: TLBs must be flushed and invalidated before a switch */
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 059ba88e182e..c6a7a7984f1f 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -351,7 +351,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
>   	 * Note this requires that we are always called in request
>   	 * completion order.
>   	 */
> -	request->ringbuf->last_retired_head = request->postfix;
> +	request->ring->last_retired_head = request->postfix;
>
>   	i915_gem_request_remove_from_client(request);
>
> @@ -415,7 +415,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>   			bool flush_caches)
>   {
>   	struct intel_engine_cs *engine;
> -	struct intel_ringbuffer *ringbuf;
> +	struct intel_ringbuffer *ring;
>   	u32 request_start;
>   	u32 reserved_tail;
>   	int ret;
> @@ -424,14 +424,14 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>   		return;
>
>   	engine = request->engine;
> -	ringbuf = request->ringbuf;
> +	ring = request->ring;
>
>   	/*
>   	 * To ensure that this call will not fail, space for its emissions
>   	 * should already have been reserved in the ring buffer. Let the ring
>   	 * know that it is time to use that space up.
>   	 */
> -	request_start = intel_ring_get_tail(ringbuf);
> +	request_start = intel_ring_get_tail(ring);
>   	reserved_tail = request->reserved_space;
>   	request->reserved_space = 0;
>
> @@ -478,21 +478,21 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>   	 * GPU processing the request, we never over-estimate the
>   	 * position of the head.
>   	 */
> -	request->postfix = intel_ring_get_tail(ringbuf);
> +	request->postfix = intel_ring_get_tail(ring);
>
>   	if (i915.enable_execlists)
>   		ret = engine->emit_request(request);
>   	else {
>   		ret = engine->add_request(request);
>
> -		request->tail = intel_ring_get_tail(ringbuf);
> +		request->tail = intel_ring_get_tail(ring);
>   	}
>   	/* Not allowed to fail! */
>   	WARN(ret, "emit|add_request failed: %d!\n", ret);
>   	/* Sanity check that the reserved size was large enough. */
> -	ret = intel_ring_get_tail(ringbuf) - request_start;
> +	ret = intel_ring_get_tail(ring) - request_start;
>   	if (ret < 0)
> -		ret += ringbuf->size;
> +		ret += ring->size;
>   	WARN_ONCE(ret > reserved_tail,
>   		  "Not enough space reserved (%d bytes) "
>   		  "for adding the request (%d bytes)\n",
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
> index a3cac13ab9af..913565fbb0e3 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.h
> +++ b/drivers/gpu/drm/i915/i915_gem_request.h
> @@ -59,7 +59,7 @@ struct drm_i915_gem_request {
>   	 */
>   	struct i915_gem_context *ctx;
>   	struct intel_engine_cs *engine;
> -	struct intel_ringbuffer *ringbuf;
> +	struct intel_ringbuffer *ring;
>   	struct intel_signal_node signaling;
>
>   	unsigned reset_counter;
> @@ -86,7 +86,6 @@ struct drm_i915_gem_request {
>   	/** Preallocate space in the ringbuffer for the emitting the request */
>   	u32 reserved_space;
>
> -
>   	/**
>   	 * Context related to the previous request.
>   	 * As the contexts are accessed by the hardware until the switch is
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index d1667aa640ef..b934986bb117 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -1089,7 +1089,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
>   		request = i915_gem_find_active_request(engine);
>   		if (request) {
>   			struct i915_address_space *vm;
> -			struct intel_ringbuffer *rb;
> +			struct intel_ringbuffer *ring;
>
>   			vm = request->ctx && request->ctx->ppgtt ?
>   				&request->ctx->ppgtt->base :
> @@ -1107,7 +1107,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
>   			if (HAS_BROKEN_CS_TLB(dev_priv))
>   				error->ring[i].wa_batchbuffer =
>   					i915_error_ggtt_object_create(dev_priv,
> -							     engine->scratch.obj);
> +								      engine->scratch.obj);
>
>   			if (request->pid) {
>   				struct task_struct *task;
> @@ -1123,23 +1123,21 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
>
>   			error->simulated |= request->ctx->flags & CONTEXT_NO_ERROR_CAPTURE;
>
> -			rb = request->ringbuf;
> -			error->ring[i].cpu_ring_head = rb->head;
> -			error->ring[i].cpu_ring_tail = rb->tail;
> +			ring = request->ring;
> +			error->ring[i].cpu_ring_head = ring->head;
> +			error->ring[i].cpu_ring_tail = ring->tail;
>   			error->ring[i].ringbuffer =
>   				i915_error_ggtt_object_create(dev_priv,
> -							      rb->obj);
> +							      ring->obj);
>   		}
>
>   		error->ring[i].hws_page =
>   			i915_error_ggtt_object_create(dev_priv,
>   						      engine->status_page.obj);
>
> -		if (engine->wa_ctx.obj) {
> -			error->ring[i].wa_ctx =
> -				i915_error_ggtt_object_create(dev_priv,
> -							      engine->wa_ctx.obj);
> -		}
> +		error->ring[i].wa_ctx =
> +			i915_error_ggtt_object_create(dev_priv,
> +						      engine->wa_ctx.obj);
>
>   		i915_gem_record_active_context(engine, error, &error->ring[i]);
>
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index 2cba91207d7e..2dafbfbc8134 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -11174,7 +11174,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
>   				 struct drm_i915_gem_request *req,
>   				 uint32_t flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
>   	u32 flip_mask;
>   	int ret;
> @@ -11208,7 +11208,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
>   				 struct drm_i915_gem_request *req,
>   				 uint32_t flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
>   	u32 flip_mask;
>   	int ret;
> @@ -11239,7 +11239,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
>   				 struct drm_i915_gem_request *req,
>   				 uint32_t flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	struct drm_i915_private *dev_priv = dev->dev_private;
>   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
>   	uint32_t pf, pipesrc;
> @@ -11277,7 +11277,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
>   				 struct drm_i915_gem_request *req,
>   				 uint32_t flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	struct drm_i915_private *dev_priv = dev->dev_private;
>   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
>   	uint32_t pf, pipesrc;
> @@ -11312,7 +11312,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
>   				 struct drm_i915_gem_request *req,
>   				 uint32_t flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
>   	uint32_t plane_bit = 0;
>   	int len, ret;
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index a1820d531e49..229545fc5b4a 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -692,7 +692,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
>   			return ret;
>   	}
>
> -	request->ringbuf = ce->ringbuf;
> +	request->ring = ce->ringbuf;
>
>   	if (i915.enable_guc_submission) {
>   		/*
> @@ -748,11 +748,11 @@ err_unpin:
>   static int
>   intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
>   {
> -	struct intel_ringbuffer *ringbuf = request->ringbuf;
> +	struct intel_ringbuffer *ring = request->ring;
>   	struct intel_engine_cs *engine = request->engine;
>
> -	intel_ring_advance(ringbuf);
> -	request->tail = ringbuf->tail;
> +	intel_ring_advance(ring);
> +	request->tail = ring->tail;
>
>   	/*
>   	 * Here we add two extra NOOPs as padding to avoid
> @@ -760,9 +760,9 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
>   	 *
>   	 * Caller must reserve WA_TAIL_DWORDS for us!
>   	 */
> -	intel_ring_emit(ringbuf, MI_NOOP);
> -	intel_ring_emit(ringbuf, MI_NOOP);
> -	intel_ring_advance(ringbuf);
> +	intel_ring_emit(ring, MI_NOOP);
> +	intel_ring_emit(ring, MI_NOOP);
> +	intel_ring_advance(ring);
>
>   	/* We keep the previous context alive until we retire the following
>   	 * request. This ensures that the context object is still pinned
> @@ -805,7 +805,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
>   	struct drm_device       *dev = params->dev;
>   	struct intel_engine_cs *engine = params->engine;
>   	struct drm_i915_private *dev_priv = dev->dev_private;
> -	struct intel_ringbuffer *ringbuf = params->ctx->engine[engine->id].ringbuf;
> +	struct intel_ringbuffer *ring = params->request->ring;
>   	u64 exec_start;
>   	int instp_mode;
>   	u32 instp_mask;
> @@ -817,7 +817,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
>   	case I915_EXEC_CONSTANTS_REL_GENERAL:
>   	case I915_EXEC_CONSTANTS_ABSOLUTE:
>   	case I915_EXEC_CONSTANTS_REL_SURFACE:
> -		if (instp_mode != 0 && engine != &dev_priv->engine[RCS]) {
> +		if (instp_mode != 0 && engine->id != RCS) {
>   			DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
>   			return -EINVAL;
>   		}
> @@ -846,17 +846,17 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
>   	if (ret)
>   		return ret;
>
> -	if (engine == &dev_priv->engine[RCS] &&
> +	if (engine->id == RCS &&
>   	    instp_mode != dev_priv->relative_constants_mode) {
>   		ret = intel_ring_begin(params->request, 4);
>   		if (ret)
>   			return ret;
>
> -		intel_ring_emit(ringbuf, MI_NOOP);
> -		intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
> -		intel_ring_emit_reg(ringbuf, INSTPM);
> -		intel_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
> -		intel_ring_advance(ringbuf);
> +		intel_ring_emit(ring, MI_NOOP);
> +		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
> +		intel_ring_emit_reg(ring, INSTPM);
> +		intel_ring_emit(ring, instp_mask << 16 | instp_mode);
> +		intel_ring_advance(ring);
>
>   		dev_priv->relative_constants_mode = instp_mode;
>   	}
> @@ -1011,7 +1011,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
>   {
>   	int ret, i;
>   	struct intel_engine_cs *engine = req->engine;
> -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	struct i915_workarounds *w = &req->i915->workarounds;
>
>   	if (w->count == 0)
> @@ -1026,14 +1026,14 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
>   	if (ret)
>   		return ret;
>
> -	intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(w->count));
> +	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(w->count));
>   	for (i = 0; i < w->count; i++) {
> -		intel_ring_emit_reg(ringbuf, w->reg[i].addr);
> -		intel_ring_emit(ringbuf, w->reg[i].value);
> +		intel_ring_emit_reg(ring, w->reg[i].addr);
> +		intel_ring_emit(ring, w->reg[i].value);
>   	}
> -	intel_ring_emit(ringbuf, MI_NOOP);
> +	intel_ring_emit(ring, MI_NOOP);
>
> -	intel_ring_advance(ringbuf);
> +	intel_ring_advance(ring);
>
>   	engine->gpu_caches_dirty = true;
>   	ret = logical_ring_flush_all_caches(req);
> @@ -1506,7 +1506,7 @@ static int gen9_init_render_ring(struct intel_engine_cs *engine)
>   static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
>   {
>   	struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
>   	int i, ret;
>
> @@ -1533,7 +1533,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
>   static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
>   			      u64 offset, unsigned dispatch_flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
>   	int ret;
>
> @@ -1590,8 +1590,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
>   			   u32 invalidate_domains,
>   			   u32 unused)
>   {
> -	struct intel_ringbuffer *ring = request->ringbuf;
> -	struct intel_engine_cs *engine = ring->engine;
> +	struct intel_ringbuffer *ring = request->ring;
>   	uint32_t cmd;
>   	int ret;
>
> @@ -1610,7 +1609,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
>
>   	if (invalidate_domains & I915_GEM_GPU_DOMAINS) {
>   		cmd |= MI_INVALIDATE_TLB;
> -		if (engine->id == VCS)
> +		if (request->engine->id == VCS)
>   			cmd |= MI_INVALIDATE_BSD;
>   	}
>
> @@ -1629,7 +1628,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
>   				  u32 invalidate_domains,
>   				  u32 flush_domains)
>   {
> -	struct intel_ringbuffer *ring = request->ringbuf;
> +	struct intel_ringbuffer *ring = request->ring;
>   	struct intel_engine_cs *engine = request->engine;
>   	u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
>   	bool vf_flush_wa = false;
> @@ -1711,7 +1710,7 @@ static void bxt_a_seqno_barrier(struct intel_engine_cs *engine)
>
>   static int gen8_emit_request(struct drm_i915_gem_request *request)
>   {
> -	struct intel_ringbuffer *ring = request->ringbuf;
> +	struct intel_ringbuffer *ring = request->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
> @@ -1734,7 +1733,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
>
>   static int gen8_emit_request_render(struct drm_i915_gem_request *request)
>   {
> -	struct intel_ringbuffer *ring = request->ringbuf;
> +	struct intel_ringbuffer *ring = request->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
> diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
> index 8513bf06d4df..4b44bbcfd7cd 100644
> --- a/drivers/gpu/drm/i915/intel_mocs.c
> +++ b/drivers/gpu/drm/i915/intel_mocs.c
> @@ -231,7 +231,7 @@ int intel_mocs_init_engine(struct intel_engine_cs *engine)
>   static int emit_mocs_control_table(struct drm_i915_gem_request *req,
>   				   const struct drm_i915_mocs_table *table)
>   {
> -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	enum intel_engine_id engine = req->engine->id;
>   	unsigned int index;
>   	int ret;
> @@ -243,11 +243,11 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
>   	if (ret)
>   		return ret;
>
> -	intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
> +	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
>
>   	for (index = 0; index < table->size; index++) {
> -		intel_ring_emit_reg(ringbuf, mocs_register(engine, index));
> -		intel_ring_emit(ringbuf, table->table[index].control_value);
> +		intel_ring_emit_reg(ring, mocs_register(engine, index));
> +		intel_ring_emit(ring, table->table[index].control_value);
>   	}
>
>   	/*
> @@ -259,12 +259,12 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
>   	 * that value to all the used entries.
>   	 */
>   	for (; index < GEN9_NUM_MOCS_ENTRIES; index++) {
> -		intel_ring_emit_reg(ringbuf, mocs_register(engine, index));
> -		intel_ring_emit(ringbuf, table->table[0].control_value);
> +		intel_ring_emit_reg(ring, mocs_register(engine, index));
> +		intel_ring_emit(ring, table->table[0].control_value);
>   	}
>
> -	intel_ring_emit(ringbuf, MI_NOOP);
> -	intel_ring_advance(ringbuf);
> +	intel_ring_emit(ring, MI_NOOP);
> +	intel_ring_advance(ring);
>
>   	return 0;
>   }
> @@ -291,7 +291,7 @@ static inline u32 l3cc_combine(const struct drm_i915_mocs_table *table,
>   static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
>   				const struct drm_i915_mocs_table *table)
>   {
> -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	unsigned int i;
>   	int ret;
>
> @@ -302,18 +302,18 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
>   	if (ret)
>   		return ret;
>
> -	intel_ring_emit(ringbuf,
> +	intel_ring_emit(ring,
>   			MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES / 2));
>
>   	for (i = 0; i < table->size/2; i++) {
> -		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
> -		intel_ring_emit(ringbuf, l3cc_combine(table, 2*i, 2*i+1));
> +		intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
> +		intel_ring_emit(ring, l3cc_combine(table, 2*i, 2*i+1));
>   	}
>
>   	if (table->size & 0x01) {
>   		/* Odd table size - 1 left over */
> -		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
> -		intel_ring_emit(ringbuf, l3cc_combine(table, 2*i, 0));
> +		intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
> +		intel_ring_emit(ring, l3cc_combine(table, 2*i, 0));
>   		i++;
>   	}
>
> @@ -323,12 +323,12 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
>   	 * they are reserved by the hardware.
>   	 */
>   	for (; i < GEN9_NUM_MOCS_ENTRIES / 2; i++) {
> -		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
> -		intel_ring_emit(ringbuf, l3cc_combine(table, 0, 0));
> +		intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
> +		intel_ring_emit(ring, l3cc_combine(table, 0, 0));
>   	}
>
> -	intel_ring_emit(ringbuf, MI_NOOP);
> -	intel_ring_advance(ringbuf);
> +	intel_ring_emit(ring, MI_NOOP);
> +	intel_ring_advance(ring);
>
>   	return 0;
>   }
> diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
> index be79c4497af5..f9c062fea39f 100644
> --- a/drivers/gpu/drm/i915/intel_overlay.c
> +++ b/drivers/gpu/drm/i915/intel_overlay.c
> @@ -253,7 +253,7 @@ static int intel_overlay_on(struct intel_overlay *overlay)
>
>   	overlay->active = true;
>
> -	ring = req->ringbuf;
> +	ring = req->ring;
>   	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_ON);
>   	intel_ring_emit(ring, overlay->flip_addr | OFC_UPDATE);
>   	intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
> @@ -295,7 +295,7 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
>   		return ret;
>   	}
>
> -	ring = req->ringbuf;
> +	ring = req->ring;
>   	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
>   	intel_ring_emit(ring, flip_addr);
>   	intel_ring_advance(ring);
> @@ -362,7 +362,7 @@ static int intel_overlay_off(struct intel_overlay *overlay)
>   		return ret;
>   	}
>
> -	ring = req->ringbuf;
> +	ring = req->ring;
>   	/* wait for overlay to go idle */
>   	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
>   	intel_ring_emit(ring, flip_addr);
> @@ -438,7 +438,7 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
>   			return ret;
>   		}
>
> -		ring = req->ringbuf;
> +		ring = req->ring;
>   		intel_ring_emit(ring,
>   				MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
>   		intel_ring_emit(ring, MI_NOOP);
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index ace455b2b2d6..0f13e9900bd6 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -70,7 +70,7 @@ gen2_render_ring_flush(struct drm_i915_gem_request *req,
>   		       u32	invalidate_domains,
>   		       u32	flush_domains)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	u32 cmd;
>   	int ret;
>
> @@ -97,7 +97,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
>   		       u32	invalidate_domains,
>   		       u32	flush_domains)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	u32 cmd;
>   	int ret;
>
> @@ -187,7 +187,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
>   static int
>   intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	u32 scratch_addr =
>   	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
>   	int ret;
> @@ -224,7 +224,7 @@ static int
>   gen6_render_ring_flush(struct drm_i915_gem_request *req,
>   		       u32 invalidate_domains, u32 flush_domains)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	u32 scratch_addr =
>   	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
>   	u32 flags = 0;
> @@ -277,7 +277,7 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
>   static int
>   gen7_render_ring_cs_stall_wa(struct drm_i915_gem_request *req)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(req, 4);
> @@ -299,7 +299,7 @@ static int
>   gen7_render_ring_flush(struct drm_i915_gem_request *req,
>   		       u32 invalidate_domains, u32 flush_domains)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	u32 scratch_addr =
>   	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
>   	u32 flags = 0;
> @@ -364,7 +364,7 @@ static int
>   gen8_emit_pipe_control(struct drm_i915_gem_request *req,
>   		       u32 flags, u32 scratch_addr)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(req, 6);
> @@ -680,7 +680,7 @@ err:
>
>   static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	struct i915_workarounds *w = &req->i915->workarounds;
>   	int ret, i;
>
> @@ -1242,7 +1242,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
>   			   unsigned int num_dwords)
>   {
>   #define MBOX_UPDATE_DWORDS 8
> -	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
> +	struct intel_ringbuffer *signaller = signaller_req->ring;
>   	struct drm_i915_private *dev_priv = signaller_req->i915;
>   	struct intel_engine_cs *waiter;
>   	enum intel_engine_id id;
> @@ -1282,7 +1282,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
>   			   unsigned int num_dwords)
>   {
>   #define MBOX_UPDATE_DWORDS 6
> -	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
> +	struct intel_ringbuffer *signaller = signaller_req->ring;
>   	struct drm_i915_private *dev_priv = signaller_req->i915;
>   	struct intel_engine_cs *waiter;
>   	enum intel_engine_id id;
> @@ -1319,7 +1319,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
>   static int gen6_signal(struct drm_i915_gem_request *signaller_req,
>   		       unsigned int num_dwords)
>   {
> -	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
> +	struct intel_ringbuffer *signaller = signaller_req->ring;
>   	struct drm_i915_private *dev_priv = signaller_req->i915;
>   	struct intel_engine_cs *useless;
>   	enum intel_engine_id id;
> @@ -1363,7 +1363,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
>   static int
>   gen6_add_request(struct drm_i915_gem_request *req)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	if (req->engine->semaphore.signal)
> @@ -1387,7 +1387,7 @@ static int
>   gen8_render_add_request(struct drm_i915_gem_request *req)
>   {
>   	struct intel_engine_cs *engine = req->engine;
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	if (engine->semaphore.signal)
> @@ -1432,7 +1432,7 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
>   	       struct intel_engine_cs *signaller,
>   	       u32 seqno)
>   {
> -	struct intel_ringbuffer *waiter = waiter_req->ringbuf;
> +	struct intel_ringbuffer *waiter = waiter_req->ring;
>   	struct drm_i915_private *dev_priv = waiter_req->i915;
>   	struct i915_hw_ppgtt *ppgtt;
>   	int ret;
> @@ -1469,7 +1469,7 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
>   	       struct intel_engine_cs *signaller,
>   	       u32 seqno)
>   {
> -	struct intel_ringbuffer *waiter = waiter_req->ringbuf;
> +	struct intel_ringbuffer *waiter = waiter_req->ring;
>   	u32 dw1 = MI_SEMAPHORE_MBOX |
>   		  MI_SEMAPHORE_COMPARE |
>   		  MI_SEMAPHORE_REGISTER;
> @@ -1603,7 +1603,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
>   	       u32     invalidate_domains,
>   	       u32     flush_domains)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(req, 2);
> @@ -1619,7 +1619,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
>   static int
>   i9xx_add_request(struct drm_i915_gem_request *req)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(req, 4);
> @@ -1697,7 +1697,7 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
>   			 u64 offset, u32 length,
>   			 unsigned dispatch_flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(req, 2);
> @@ -1724,7 +1724,7 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
>   			 u64 offset, u32 len,
>   			 unsigned dispatch_flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	u32 cs_offset = req->engine->scratch.gtt_offset;
>   	int ret;
>
> @@ -1786,7 +1786,7 @@ i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
>   			 u64 offset, u32 len,
>   			 unsigned dispatch_flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(req, 2);
> @@ -2221,7 +2221,7 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
>   	 */
>   	request->reserved_space += LEGACY_REQUEST_SIZE;
>
> -	request->ringbuf = request->engine->buffer;
> +	request->ring = request->engine->buffer;
>
>   	ret = intel_ring_begin(request, 0);
>   	if (ret)
> @@ -2233,12 +2233,12 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
>
>   static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
>   {
> -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	struct intel_engine_cs *engine = req->engine;
>   	struct drm_i915_gem_request *target;
>
> -	intel_ring_update_space(ringbuf);
> -	if (ringbuf->space >= bytes)
> +	intel_ring_update_space(ring);
> +	if (ring->space >= bytes)
>   		return 0;
>
>   	/*
> @@ -2260,12 +2260,12 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
>   		 * from multiple ringbuffers. Here, we must ignore any that
>   		 * aren't from the ringbuffer we're considering.
>   		 */
> -		if (target->ringbuf != ringbuf)
> +		if (target->ring != ring)
>   			continue;
>
>   		/* Would completion of this request free enough space? */
> -		space = __intel_ring_space(target->postfix, ringbuf->tail,
> -					   ringbuf->size);
> +		space = __intel_ring_space(target->postfix, ring->tail,
> +					   ring->size);
>   		if (space >= bytes)
>   			break;
>   	}
> @@ -2278,9 +2278,9 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
>
>   int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
>   {
> -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> -	int remain_actual = ringbuf->size - ringbuf->tail;
> -	int remain_usable = ringbuf->effective_size - ringbuf->tail;
> +	struct intel_ringbuffer *ring = req->ring;
> +	int remain_actual = ring->size - ring->tail;
> +	int remain_usable = ring->effective_size - ring->tail;
>   	int bytes = num_dwords * sizeof(u32);
>   	int total_bytes, wait_bytes;
>   	bool need_wrap = false;
> @@ -2307,35 +2307,35 @@ int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
>   		wait_bytes = total_bytes;
>   	}
>
> -	if (wait_bytes > ringbuf->space) {
> +	if (wait_bytes > ring->space) {
>   		int ret = wait_for_space(req, wait_bytes);
>   		if (unlikely(ret))
>   			return ret;
>
> -		intel_ring_update_space(ringbuf);
> -		if (unlikely(ringbuf->space < wait_bytes))
> +		intel_ring_update_space(ring);
> +		if (unlikely(ring->space < wait_bytes))
>   			return -EAGAIN;
>   	}
>
>   	if (unlikely(need_wrap)) {
> -		GEM_BUG_ON(remain_actual > ringbuf->space);
> -		GEM_BUG_ON(ringbuf->tail + remain_actual > ringbuf->size);
> +		GEM_BUG_ON(remain_actual > ring->space);
> +		GEM_BUG_ON(ring->tail + remain_actual > ring->size);
>
>   		/* Fill the tail with MI_NOOP */
> -		memset(ringbuf->vaddr + ringbuf->tail, 0, remain_actual);
> -		ringbuf->tail = 0;
> -		ringbuf->space -= remain_actual;
> +		memset(ring->vaddr + ring->tail, 0, remain_actual);
> +		ring->tail = 0;
> +		ring->space -= remain_actual;
>   	}
>
> -	ringbuf->space -= bytes;
> -	GEM_BUG_ON(ringbuf->space < 0);
> +	ring->space -= bytes;
> +	GEM_BUG_ON(ring->space < 0);
>   	return 0;
>   }
>
>   /* Align the ring tail to a cacheline boundary */
>   int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int num_dwords =
>   	       	(ring->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
>   	int ret;
> @@ -2429,7 +2429,7 @@ static void gen6_bsd_ring_write_tail(struct intel_engine_cs *engine,
>   static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
>   			       u32 invalidate, u32 flush)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	uint32_t cmd;
>   	int ret;
>
> @@ -2475,7 +2475,7 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
>   			      u64 offset, u32 len,
>   			      unsigned dispatch_flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	bool ppgtt = USES_PPGTT(req->i915) &&
>   			!(dispatch_flags & I915_DISPATCH_SECURE);
>   	int ret;
> @@ -2501,7 +2501,7 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
>   			     u64 offset, u32 len,
>   			     unsigned dispatch_flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(req, 2);
> @@ -2526,7 +2526,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
>   			      u64 offset, u32 len,
>   			      unsigned dispatch_flags)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	int ret;
>
>   	ret = intel_ring_begin(req, 2);
> @@ -2549,7 +2549,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
>   static int gen6_ring_flush(struct drm_i915_gem_request *req,
>   			   u32 invalidate, u32 flush)
>   {
> -	struct intel_ringbuffer *ring = req->ringbuf;
> +	struct intel_ringbuffer *ring = req->ring;
>   	uint32_t cmd;
>   	int ret;
>
>

* Re: [PATCH 28/62] drm/i915: Rename backpointer from intel_ringbuffer to intel_engine_cs
  2016-06-03 16:36 ` [PATCH 28/62] drm/i915: Rename backpointer from intel_ringbuffer to intel_engine_cs Chris Wilson
@ 2016-06-06 13:45   ` Tvrtko Ursulin
  0 siblings, 0 replies; 87+ messages in thread
From: Tvrtko Ursulin @ 2016-06-06 13:45 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 03/06/16 17:36, Chris Wilson wrote:
> Having ringbuf->ring point to an engine is confusing, so rename it once
> again to ring->engine.

I don't see any backpointers here; this must be more rebase noise.
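
(For reference, the only backpointer I can think of is the engine link
on the ringbuffer itself, roughly and from memory:

	struct intel_ringbuffer {
		...
		/* backpointer to the engine that owns this ring */
		struct intel_engine_cs *engine;
	};

and none of the hunks below touch that field.)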

Regards,

Tvrtko


> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/intel_ringbuffer.c | 14 +++++++-------
>   1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index 0f13e9900bd6..ab498ecce1ca 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -2087,8 +2087,8 @@ static void intel_ring_context_unpin(struct i915_gem_context *ctx,
>   	i915_gem_context_put(ctx);
>   }
>
> -static int intel_init_ring_buffer(struct drm_device *dev,
> -				  struct intel_engine_cs *engine)
> +static int intel_init_engine(struct drm_device *dev,
> +			     struct intel_engine_cs *engine)
>   {
>   	struct drm_i915_private *dev_priv = to_i915(dev);
>   	struct intel_ringbuffer *ringbuf;
> @@ -2707,7 +2707,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
>   	engine->init_hw = init_render_ring;
>   	engine->cleanup = render_ring_cleanup;
>
> -	ret = intel_init_ring_buffer(dev, engine);
> +	ret = intel_init_engine(dev, engine);
>   	if (ret)
>   		return ret;
>
> @@ -2794,7 +2794,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
>   	}
>   	engine->init_hw = init_ring_common;
>
> -	return intel_init_ring_buffer(dev, engine);
> +	return intel_init_engine(dev, engine);
>   }
>
>   /**
> @@ -2828,7 +2828,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
>   	}
>   	engine->init_hw = init_ring_common;
>
> -	return intel_init_ring_buffer(dev, engine);
> +	return intel_init_engine(dev, engine);
>   }
>
>   int intel_init_blt_ring_buffer(struct drm_device *dev)
> @@ -2886,7 +2886,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
>   	}
>   	engine->init_hw = init_ring_common;
>
> -	return intel_init_ring_buffer(dev, engine);
> +	return intel_init_engine(dev, engine);
>   }
>
>   int intel_init_vebox_ring_buffer(struct drm_device *dev)
> @@ -2938,7 +2938,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
>   	}
>   	engine->init_hw = init_ring_common;
>
> -	return intel_init_ring_buffer(dev, engine);
> +	return intel_init_engine(dev, engine);
>   }
>
>   int
>

* Re: [PATCH 01/62] drm/i915: Only start retire worker when idle
  2016-06-03 16:36 ` [PATCH 01/62] drm/i915: Only start retire worker when idle Chris Wilson
@ 2016-06-07 11:31   ` Joonas Lahtinen
  2016-06-08 10:53     ` Chris Wilson
  0 siblings, 1 reply; 87+ messages in thread
From: Joonas Lahtinen @ 2016-06-07 11:31 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On pe, 2016-06-03 at 17:36 +0100, Chris Wilson wrote:
> The retire worker is a low frequency task that makes sure we retire
> outstanding requests if userspace is being lax. We only need to start it
> once as it remains active until the GPU is idle, so do a cheap test
> before the more expensive queue_work(). A consequence of this is that we
> need correct locking in the worker to make the hot path of request
> submission cheap. To keep the symmetry and keep hangcheck strictly bound
> by the GPU's wakelock, we move the cancel_sync(hangcheck) to the idle
> worker before dropping the wakelock.
> 
> v2: Guard against RCU fouling the breadcrumbs bottom-half whilst we kick
> the waiter.
> v3: Remove the wakeref assertion squelching (now we hold a wakeref for
> the hangcheck, any rpm error there is genuine).
> v4: To prevent excess work when retiring requests, we split the busy
> flag into two, a boolean to denote whether we hold the wakeref and a
> bitmask of active engines.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> References: https://bugs.freedesktop.org/show_bug.cgi?id=88437
> ---
>  drivers/gpu/drm/i915/i915_debugfs.c        |   5 +-
>  drivers/gpu/drm/i915/i915_drv.c            |   2 -
>  drivers/gpu/drm/i915/i915_drv.h            |  56 +++++++-------
>  drivers/gpu/drm/i915/i915_gem.c            | 114 ++++++++++++++++++-----------
>  drivers/gpu/drm/i915/i915_gem_execbuffer.c |   6 ++
>  drivers/gpu/drm/i915/i915_irq.c            |  15 +---
>  drivers/gpu/drm/i915/intel_display.c       |  26 -------
>  drivers/gpu/drm/i915/intel_pm.c            |   2 +-
>  drivers/gpu/drm/i915/intel_ringbuffer.h    |   4 +-
>  9 files changed, 115 insertions(+), 115 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index 72dae6fb0aa2..dd6cf222e8f5 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -2437,7 +2437,8 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
>  	struct drm_file *file;
>  
>  	seq_printf(m, "RPS enabled? %d\n", dev_priv->rps.enabled);
> -	seq_printf(m, "GPU busy? %d\n", dev_priv->mm.busy);
> +	seq_printf(m, "GPU busy? %s [%x]\n",
> +		   yesno(dev_priv->gt.awake), dev_priv->gt.active_engines);
>  	seq_printf(m, "CPU waiting? %d\n", count_irq_waiters(dev_priv));
>  	seq_printf(m, "Frequency requested %d; min hard:%d, soft:%d; max soft:%d, hard:%d\n",
>  		   intel_gpu_freq(dev_priv, dev_priv->rps.cur_freq),
> @@ -2777,7 +2778,7 @@ static int i915_runtime_pm_status(struct seq_file *m, void *unused)
>  	if (!HAS_RUNTIME_PM(dev_priv))
>  		seq_puts(m, "Runtime power management not supported\n");
>  
> -	seq_printf(m, "GPU idle: %s\n", yesno(!dev_priv->mm.busy));
> +	seq_printf(m, "GPU idle: %s\n", yesno(!dev_priv->gt.awake));
>  	seq_printf(m, "IRQs disabled: %s\n",
>  		   yesno(!intel_irqs_enabled(dev_priv)));
>  #ifdef CONFIG_PM
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index 3c8c75c77574..5f7208d2fdbf 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -2697,8 +2697,6 @@ static int intel_runtime_suspend(struct device *device)
>  	i915_gem_release_all_mmaps(dev_priv);
>  	mutex_unlock(&dev->struct_mutex);
>  
> -	cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
> -
>  	intel_guc_suspend(dev);
>  
>  	intel_suspend_gt_powersave(dev_priv);
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 88d9242398ce..3f075adf9e84 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1305,37 +1305,11 @@ struct i915_gem_mm {
>  	struct list_head fence_list;
>  
>  	/**
> -	 * We leave the user IRQ off as much as possible,
> -	 * but this means that requests will finish and never
> -	 * be retired once the system goes idle. Set a timer to
> -	 * fire periodically while the ring is running. When it
> -	 * fires, go retire requests.
> -	 */
> -	struct delayed_work retire_work;
> -
> -	/**
> -	 * When we detect an idle GPU, we want to turn on
> -	 * powersaving features. So once we see that there
> -	 * are no more requests outstanding and no more
> -	 * arrive within a small period of time, we fire
> -	 * off the idle_work.
> -	 */
> -	struct delayed_work idle_work;
> -
> -	/**
>  	 * Are we in a non-interruptible section of code like
>  	 * modesetting?
>  	 */
>  	bool interruptible;
>  
> -	/**
> -	 * Is the GPU currently considered idle, or busy executing userspace
> -	 * requests?  Whilst idle, we attempt to power down the hardware and
> -	 * display clocks. In order to reduce the effect on performance, there
> -	 * is a slight delay before we do so.
> -	 */
> -	bool busy;
> -
>  	/* the indicator for dispatch video commands on two BSD rings */
>  	unsigned int bsd_ring_dispatch_index;
>  
> @@ -2034,6 +2008,34 @@ struct drm_i915_private {
>  		int (*init_engines)(struct drm_device *dev);
>  		void (*cleanup_engine)(struct intel_engine_cs *engine);
>  		void (*stop_engine)(struct intel_engine_cs *engine);
> +
> +		/**
> +		 * Is the GPU currently considered idle, or busy executing
> +		 * userspace requests? Whilst idle, we allow runtime power
> +		 * management to power down the hardware and display clocks.
> +		 * In order to reduce the effect on performance, there
> +		 * is a slight delay before we do so.
> +		 */
> +		unsigned active_engines;
> +		bool awake;
> +
> +		/**
> +		 * We leave the user IRQ off as much as possible,
> +		 * but this means that requests will finish and never
> +		 * be retired once the system goes idle. Set a timer to
> +		 * fire periodically while the ring is running. When it
> +		 * fires, go retire requests.
> +		 */
> +		struct delayed_work retire_work;
> +
> +		/**
> +		 * When we detect an idle GPU, we want to turn on
> +		 * powersaving features. So once we see that there
> +		 * are no more requests outstanding and no more
> +		 * arrive within a small period of time, we fire
> +		 * off the idle_work.
> +		 */
> +		struct delayed_work idle_work;

Code motion would be cool in separate patches but, well, it's a 62-patch
series already.

>  	} gt;
>  
>  	/* perform PHY state sanity checks? */
> @@ -3247,7 +3249,7 @@ int __must_check i915_gem_set_seqno(struct drm_device *dev, u32 seqno);
>  struct drm_i915_gem_request *
>  i915_gem_find_active_request(struct intel_engine_cs *engine);
>  
> -bool i915_gem_retire_requests(struct drm_i915_private *dev_priv);
> +void i915_gem_retire_requests(struct drm_i915_private *dev_priv);
>  void i915_gem_retire_requests_ring(struct intel_engine_cs *engine);
>  
>  static inline u32 i915_reset_counter(struct i915_gpu_error *error)
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index f4e550ddaa5d..5a7131b749a2 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2554,6 +2554,26 @@ i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
>  	return 0;
>  }
>  
> +static void i915_gem_mark_busy(struct drm_i915_private *dev_priv,
> +			       const struct intel_engine_cs *engine)
> +{
> +	dev_priv->gt.active_engines |= intel_engine_flag(engine);
> +	if (dev_priv->gt.awake)
> +		return;
> +
> +	intel_runtime_pm_get_noresume(dev_priv);
> +	dev_priv->gt.awake = true;
> +
> +	intel_enable_gt_powersave(dev_priv);
> +	i915_update_gfx_val(dev_priv);
> +	if (INTEL_INFO(dev_priv)->gen >= 6)
> +		gen6_rps_busy(dev_priv);
> +
> +	queue_delayed_work(dev_priv->wq,
> +			   &dev_priv->gt.retire_work,
> +			   round_jiffies_up_relative(HZ));
> +}
> +
>  /*
>   * NB: This function is not allowed to fail. Doing so would mean the
>   * request is not being tracked for completion but the work itself is
> @@ -2640,12 +2660,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>  	}
>  	/* Not allowed to fail! */
>  	WARN(ret, "emit|add_request failed: %d!\n", ret);
> -
> -	queue_delayed_work(dev_priv->wq,
> -			   &dev_priv->mm.retire_work,
> -			   round_jiffies_up_relative(HZ));
> -	intel_mark_busy(dev_priv);
> -
>  	/* Sanity check that the reserved size was large enough. */
>  	ret = intel_ring_get_tail(ringbuf) - request_start;
>  	if (ret < 0)
> @@ -2654,6 +2668,8 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>  		  "Not enough space reserved (%d bytes) "
>  		  "for adding the request (%d bytes)\n",
>  		  reserved_tail, ret);
> +
> +	i915_gem_mark_busy(dev_priv, engine);
>  }
>  
>  static bool i915_context_is_banned(struct drm_i915_private *dev_priv,
> @@ -2968,46 +2984,47 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
>  	WARN_ON(i915_verify_lists(engine->dev));
>  }
>  
> -bool
> -i915_gem_retire_requests(struct drm_i915_private *dev_priv)
> +void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
>  {
>  	struct intel_engine_cs *engine;
> -	bool idle = true;
> +
> +	if (dev_priv->gt.active_engines == 0)
> +		return;
> +
> +	GEM_BUG_ON(!dev_priv->gt.awake);
>  
>  	for_each_engine(engine, dev_priv) {
>  		i915_gem_retire_requests_ring(engine);
> -		idle &= list_empty(&engine->request_list);
> -		if (i915.enable_execlists) {
> -			spin_lock_bh(&engine->execlist_lock);
> -			idle &= list_empty(&engine->execlist_queue);
> -			spin_unlock_bh(&engine->execlist_lock);
> -		}

As discussed on IRC, the disappearance of this idle check could be
mentioned in the commit message.

> +		if (list_empty(&engine->request_list))
> +			dev_priv->gt.active_engines &= ~intel_engine_flag(engine);
>  	}
>  
> -	if (idle)
> +	if (dev_priv->gt.active_engines == 0)
>  		mod_delayed_work(dev_priv->wq,
> -				 &dev_priv->mm.idle_work,
> +				 &dev_priv->gt.idle_work,
>  				 msecs_to_jiffies(100));
> -
> -	return idle;
>  }
>  
>  static void
>  i915_gem_retire_work_handler(struct work_struct *work)
>  {
>  	struct drm_i915_private *dev_priv =
> -		container_of(work, typeof(*dev_priv), mm.retire_work.work);
> +		container_of(work, typeof(*dev_priv), gt.retire_work.work);
>  	struct drm_device *dev = dev_priv->dev;
> -	bool idle;
>  
>  	/* Come back later if the device is busy... */
> -	idle = false;
>  	if (mutex_trylock(&dev->struct_mutex)) {
> -		idle = i915_gem_retire_requests(dev_priv);
> +		i915_gem_retire_requests(dev_priv);
>  		mutex_unlock(&dev->struct_mutex);
>  	}
> -	if (!idle)
> -		queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work,
> +
> +	/* Keep the retire handler running until we are finally idle.
> +	 * We do not need to do this test under locking as in the worst-case
> +	 * we queue the retire worker once too often.
> +	 */
> +	if (READ_ONCE(dev_priv->gt.awake))

This is the only occurrence in this function, so I don't think we need
READ_ONCE. I'm also not sure READ_ONCE is good documentation for "read
outside the lock"; a comment might be better.

> +		queue_delayed_work(dev_priv->wq,
> +				   &dev_priv->gt.retire_work,
>  				   round_jiffies_up_relative(HZ));
>  }
>  
> @@ -3015,25 +3032,36 @@ static void
>  i915_gem_idle_work_handler(struct work_struct *work)
>  {
>  	struct drm_i915_private *dev_priv =
> -		container_of(work, typeof(*dev_priv), mm.idle_work.work);
> +		container_of(work, typeof(*dev_priv), gt.idle_work.work);
>  	struct drm_device *dev = dev_priv->dev;
>  	struct intel_engine_cs *engine;
>  
> -	for_each_engine(engine, dev_priv)
> -		if (!list_empty(&engine->request_list))
> -			return;
> +	if (!READ_ONCE(dev_priv->gt.awake))
> +		return;
>  
> -	/* we probably should sync with hangcheck here, using cancel_work_sync.
> -	 * Also locking seems to be fubar here, engine->request_list is protected
> -	 * by dev->struct_mutex. */
> +	mutex_lock(&dev->struct_mutex);
> +	if (dev_priv->gt.active_engines)
> +		goto out;
>  
> -	intel_mark_idle(dev_priv);
> +	for_each_engine(engine, dev_priv)
> +		i915_gem_batch_pool_fini(&engine->batch_pool);
>  
> -	if (mutex_trylock(&dev->struct_mutex)) {
> -		for_each_engine(engine, dev_priv)
> -			i915_gem_batch_pool_fini(&engine->batch_pool);
> +	GEM_BUG_ON(!dev_priv->gt.awake);
> +	dev_priv->gt.awake = false;
>  
> -		mutex_unlock(&dev->struct_mutex);
> +	if (INTEL_INFO(dev_priv)->gen >= 6)
> +		gen6_rps_idle(dev_priv);
> +	intel_runtime_pm_put(dev_priv);
> +out:
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	if (!dev_priv->gt.awake &&

No READ_ONCE here, even though we just unlocked the mutex, so it lacks
some consistency.

Also, this assumes we might be preempted between unlocking the mutex and
making this test, so I'm a little bit confused. Do you want to optimize
by avoiding the call to cancel_delayed_work_sync()?
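
If the intent is just to skip the sync cancel when new work has arrived
in the meantime, latching the idle transition while still holding the
mutex would read clearer to me. Untested sketch:

	bool idle = false;

	mutex_lock(&dev->struct_mutex);
	if (!dev_priv->gt.active_engines) {
		/* ... batch pool cleanup, rps idle, wakeref put ... */
		dev_priv->gt.awake = false;
		idle = true;
	}
	mutex_unlock(&dev->struct_mutex);

	if (idle &&
	    cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work)) {
		/* kick stuck waiters, as in your patch */
	}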

> +	    cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work)) {
> +		unsigned stuck = intel_kick_waiters(dev_priv);
> +		if (unlikely(stuck)) {
> +			DRM_DEBUG_DRIVER("kicked stuck waiters...missed irq\n");
> +			dev_priv->gpu_error.missed_irq_rings |= stuck;
> +		}
>  	}
>  }
>  
> @@ -4154,7 +4182,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
>  
>  	ret = __i915_wait_request(target, true, NULL, NULL);
>  	if (ret == 0)
> -		queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, 0);
> +		queue_delayed_work(dev_priv->wq, &dev_priv->gt.retire_work, 0);
>  
>  	i915_gem_request_unreference(target);
>  
> @@ -4672,13 +4700,13 @@ i915_gem_suspend(struct drm_device *dev)
>  	mutex_unlock(&dev->struct_mutex);
>  
>  	cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
> -	cancel_delayed_work_sync(&dev_priv->mm.retire_work);
> -	flush_delayed_work(&dev_priv->mm.idle_work);
> +	cancel_delayed_work_sync(&dev_priv->gt.retire_work);
> +	flush_delayed_work(&dev_priv->gt.idle_work);
>  
>  	/* Assert that we successfully flushed all the work and
>  	 * reset the GPU back to its idle, low power state.
>  	 */
> -	WARN_ON(dev_priv->mm.busy);
> +	WARN_ON(dev_priv->gt.awake);
>  
>  	return 0;
>  
> @@ -4982,9 +5010,9 @@ i915_gem_load_init(struct drm_device *dev)
>  		init_engine_lists(&dev_priv->engine[i]);
>  	for (i = 0; i < I915_MAX_NUM_FENCES; i++)
>  		INIT_LIST_HEAD(&dev_priv->fence_regs[i].lru_list);
> -	INIT_DELAYED_WORK(&dev_priv->mm.retire_work,
> +	INIT_DELAYED_WORK(&dev_priv->gt.retire_work,
>  			  i915_gem_retire_work_handler);
> -	INIT_DELAYED_WORK(&dev_priv->mm.idle_work,
> +	INIT_DELAYED_WORK(&dev_priv->gt.idle_work,
>  			  i915_gem_idle_work_handler);
>  	init_waitqueue_head(&dev_priv->gpu_error.wait_queue);
>  	init_waitqueue_head(&dev_priv->gpu_error.reset_queue);
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index 8097698b9622..d3297dab0298 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -1477,6 +1477,12 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>  		dispatch_flags |= I915_DISPATCH_RS;
>  	}
>  
> +	/* Take a local wakeref for preparing to dispatch the execbuf as
> +	 * we expect to access the hardware fairly frequently in the
> +	 * process. Upon first dispatch, we acquire another prolonged
> +	 * wakeref that we hold until the GPU has been idle for at least
> +	 * 100ms.
> +	 */
>  	intel_runtime_pm_get(dev_priv);
>  
>  	ret = i915_mutex_lock_interruptible(dev);
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index f74f5727ea77..7a2dc8f1f64e 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -3102,12 +3102,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>  	if (!i915.enable_hangcheck)
>  		return;
>  
> -	/*
> -	 * The hangcheck work is synced during runtime suspend, we don't
> -	 * require a wakeref. TODO: instead of disabling the asserts make
> -	 * sure that we hold a reference when this work is running.
> -	 */
> -	DISABLE_RPM_WAKEREF_ASSERTS(dev_priv);
> +	if (!READ_ONCE(dev_priv->gt.awake))
> +		return;
>  
>  	/* As enabling the GPU requires fairly extensive mmio access,
>  	 * periodically arm the mmio checker to see if we are triggering
> @@ -3215,17 +3211,12 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>  		}
>  	}
>  
> -	if (rings_hung) {
> +	if (rings_hung)
>  		i915_handle_error(dev_priv, rings_hung, "Engine(s) hung");
> -		goto out;
> -	}
>  
>  	/* Reset timer in case GPU hangs without another request being added */
>  	if (busy_count)
>  		i915_queue_hangcheck(dev_priv);
> -
> -out:
> -	ENABLE_RPM_WAKEREF_ASSERTS(dev_priv);
>  }
>  
>  static void ibx_irq_reset(struct drm_device *dev)
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index bb09ee6d1a3f..14e41fdd8112 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -10969,32 +10969,6 @@ struct drm_display_mode *intel_crtc_mode_get(struct drm_device *dev,
>  	return mode;
>  }
>  
> -void intel_mark_busy(struct drm_i915_private *dev_priv)
> -{
> -	if (dev_priv->mm.busy)
> -		return;
> -
> -	intel_runtime_pm_get(dev_priv);
> -	intel_enable_gt_powersave(dev_priv);
> -	i915_update_gfx_val(dev_priv);
> -	if (INTEL_GEN(dev_priv) >= 6)
> -		gen6_rps_busy(dev_priv);
> -	dev_priv->mm.busy = true;
> -}
> -
> -void intel_mark_idle(struct drm_i915_private *dev_priv)
> -{
> -	if (!dev_priv->mm.busy)
> -		return;
> -
> -	dev_priv->mm.busy = false;
> -
> -	if (INTEL_GEN(dev_priv) >= 6)
> -		gen6_rps_idle(dev_priv);
> -
> -	intel_runtime_pm_put(dev_priv);
> -}
> -
>  static void intel_crtc_destroy(struct drm_crtc *crtc)
>  {
>  	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 712bd0debb91..35bb9a23cd2d 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4850,7 +4850,7 @@ void gen6_rps_boost(struct drm_i915_private *dev_priv,
>  	/* This is intentionally racy! We peek at the state here, then
>  	 * validate inside the RPS worker.
>  	 */
> -	if (!(dev_priv->mm.busy &&
> +	if (!(dev_priv->gt.awake &&
>  	      dev_priv->rps.enabled &&
>  	      dev_priv->rps.cur_freq < dev_priv->rps.max_freq_softlimit))
>  		return;
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 166f1a3829b0..d0cd9a1aa80e 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -372,13 +372,13 @@ struct intel_engine_cs {
>  };
>  
>  static inline bool
> -intel_engine_initialized(struct intel_engine_cs *engine)
> +intel_engine_initialized(const struct intel_engine_cs *engine)
>  {
>  	return engine->i915 != NULL;
>  }
>  
>  static inline unsigned
> -intel_engine_flag(struct intel_engine_cs *engine)
> +intel_engine_flag(const struct intel_engine_cs *engine)
>  {
>  	return 1 << engine->id;
>  }

I think the majority of our functions are not const-correct; I remember
some grunting on the subject when I tried to change some to be. But I'm
all for it myself.

Regards, Joonas

-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH 02/62] drm/i915: Do not keep postponing the idle-work
  2016-06-03 16:36 ` [PATCH 02/62] drm/i915: Do not keep postponing the idle-work Chris Wilson
@ 2016-06-07 11:34   ` Joonas Lahtinen
  0 siblings, 0 replies; 87+ messages in thread
From: Joonas Lahtinen @ 2016-06-07 11:34 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On pe, 2016-06-03 at 17:36 +0100, Chris Wilson wrote:
> Rather than persistently postponing the idle-work every time somebody
> calls i915_gem_retire_requests() (potentially ensuring that we never
> reach the idle state), queue the work the first time we detect all
> requests are complete. Then if, in 100ms, more requests have been queued,
> we will abort the idle-worker and wait again until all the new requests
> have been completed.
> 

This does depend on the previous patch, so it might be worth rewording
to bring that up. But it makes much more sense to me.
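
For anyone else following along, the workqueue semantics that matter
here, as I understand them:

	/* mod_delayed_work() (re)arms the timer even when the work is
	 * already pending, so every retire pass pushes the idle point
	 * further into the future.
	 */
	mod_delayed_work(dev_priv->wq, &dev_priv->gt.idle_work,
			 msecs_to_jiffies(100));

	/* queue_delayed_work() is a no-op while the work is still
	 * pending, so the first queuing wins and the idle handler gets
	 * to run (and re-check for new requests) on schedule.
	 */
	queue_delayed_work(dev_priv->wq, &dev_priv->gt.idle_work,
			   msecs_to_jiffies(100));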

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

> ---
>  drivers/gpu/drm/i915/i915_gem.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 5a7131b749a2..e27c9331b84b 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -3000,9 +3000,9 @@ void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
>  	}
>  
>  	if (dev_priv->gt.active_engines == 0)
> -		mod_delayed_work(dev_priv->wq,
> -				 &dev_priv->gt.idle_work,
> -				 msecs_to_jiffies(100));
> +		queue_delayed_work(dev_priv->wq,
> +				   &dev_priv->gt.idle_work,
> +				   msecs_to_jiffies(100));
>  }
>  
>  static void
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH 03/62] drm/i915: Remove redundant queue_delayed_work() from throttle ioctl
  2016-06-03 16:36 ` [PATCH 03/62] drm/i915: Remove redundant queue_delayed_work() from throttle ioctl Chris Wilson
@ 2016-06-07 11:39   ` Joonas Lahtinen
  0 siblings, 0 replies; 87+ messages in thread
From: Joonas Lahtinen @ 2016-06-07 11:39 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On pe, 2016-06-03 at 17:36 +0100, Chris Wilson wrote:
> We know, by design, that whilst the GPU is active (and thus we are
> throttling) the retire_worker is queued. Therefore attempting to requeue
> it with queue_delayed_work() is a no-op and we can safely remove it.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

> ---
>  drivers/gpu/drm/i915/i915_gem.c | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index e27c9331b84b..da44715c894f 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -4181,9 +4181,6 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
>  		return 0;
>  
>  	ret = __i915_wait_request(target, true, NULL, NULL);
> -	if (ret == 0)
> -		queue_delayed_work(dev_priv->wq, &dev_priv->gt.retire_work, 0);
> -
>  	i915_gem_request_unreference(target);
>  
>  	return ret;
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH 04/62] drm/i915: Restore waitboost credit to the synchronous waiter
  2016-06-03 16:36 ` [PATCH 04/62] drm/i915: Restore waitboost credit to the synchronous waiter Chris Wilson
@ 2016-06-08  9:04   ` Daniel Vetter
  2016-06-08 10:38     ` Chris Wilson
  0 siblings, 1 reply; 87+ messages in thread
From: Daniel Vetter @ 2016-06-08  9:04 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx, Jesse Barnes

On Fri, Jun 03, 2016 at 05:36:29PM +0100, Chris Wilson wrote:
> Ideally, we want to automagically have the GPU respond to the
> instantaneous load by reclocking itself. However, reclocking occurs
> relatively slowly, and to the client waiting for a result from the GPU,
> too late. To compensate and reduce the client latency, we allow the
> first wait from a client to boost the GPU clocks to maximum. This
> overcomes the lag in autoreclocking, at the expense of forcing the GPU
> clocks too high. So to offset the excessive power usage, we currently
> allow a client to only boost the clocks once before we detect the GPU
> is idle again. This works reasonably well for, say, the first frame in a
> benchmark, but for many more synchronous workloads (like OpenCL) we find
> the GPU clocks remain too low. By noting a wait which would idle the GPU
> (i.e. we just waited upon the last known request), we can give that
> client the idle boost credit (for their next wait) without the 100ms
> delay required for us to detect the GPU idle state. The intention is to
> boost clients that are stalling in the process of feeding the GPU more
> work (and who in doing so let the GPU idle), without granting boost
> credits to clients that are throttling themselves (such as compositors).
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: "Zou, Nanhai" <nanhai.zou@intel.com>
> Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
> Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>

I wonder a bit what will happen here for workloads that flip-flop
between engines, since you only check for the last request on a given
engine. But maybe in the future we'll get clock domains per engine ;-)
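
Purely to illustrate the flip-flop concern, a GPU-wide check would have
to look at every engine; hypothetical helper, untested and hand-waving
over retirement timing:

	static bool __i915_gem_gpu_is_idle(struct drm_i915_private *dev_priv)
	{
		struct intel_engine_cs *engine;

		/* Grant the boost credit only when no engine has
		 * requests outstanding, i.e. the whole GPU is about to
		 * idle, not just req->engine.
		 */
		for_each_engine(engine, dev_priv)
			if (!list_empty(&engine->request_list))
				return false;

		return true;
	}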

Anyway, the commit message needs to be touched up to say idle engine
instead of idle GPU. With that

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

since it makes sense indeed.

> ---
>  drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index da44715c894f..bec02baef190 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1310,6 +1310,22 @@ complete:
>  			*timeout = 0;
>  	}
>  
> +	if (rps && req->seqno == req->engine->last_submitted_seqno) {
> +		/* The GPU is now idle and this client has stalled.
> +		 * Since no other client has submitted a request in the
> +		 * meantime, assume that this client is the only one
> +		 * supplying work to the GPU but is unable to keep that
> +		 * work supplied because it is waiting. Since the GPU is
> +		 * then never kept fully busy, RPS autoclocking will
> +		 * keep the clocks relatively low, causing further delays.
> +		 * Compensate by giving the synchronous client credit for
> +		 * a waitboost next time.
> +		 */
> +		spin_lock(&req->i915->rps.client_lock);
> +		list_del_init(&rps->link);
> +		spin_unlock(&req->i915->rps.client_lock);
> +	}
> +
>  	return ret;
>  }
>  
> -- 
> 2.8.1

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 13/62] drm/i915: Derive GEM requests from dma-fence
  2016-06-03 16:36 ` [PATCH 13/62] drm/i915: Derive GEM requests from dma-fence Chris Wilson
@ 2016-06-08  9:14   ` Daniel Vetter
  2016-06-08 10:33     ` Chris Wilson
  0 siblings, 1 reply; 87+ messages in thread
From: Daniel Vetter @ 2016-06-08  9:14 UTC (permalink / raw)
  To: Chris Wilson; +Cc: Daniel Vetter, intel-gfx, Jesse Barnes

On Fri, Jun 03, 2016 at 05:36:38PM +0100, Chris Wilson wrote:
> dma-buf provides a generic fence class for interoperation between
> drivers. Internally we use the request structure as a fence, and so with
> only a little bit of interfacing we can rebase those requests on top of
> dma-buf fences. This will allow us, in the future, to pass those fences
> back to userspace or between drivers.
> 
> v2: The fence_context needs to be globally unique, not just unique to
> this device.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
>  drivers/gpu/drm/i915/i915_debugfs.c        |   2 +-
>  drivers/gpu/drm/i915/i915_gem_request.c    | 116 ++++++++++++++++++++++++++---
>  drivers/gpu/drm/i915/i915_gem_request.h    |  33 ++++----
>  drivers/gpu/drm/i915/i915_gpu_error.c      |   2 +-
>  drivers/gpu/drm/i915/i915_guc_submission.c |   4 +-
>  drivers/gpu/drm/i915/i915_trace.h          |  10 +--
>  drivers/gpu/drm/i915/intel_breadcrumbs.c   |   7 +-
>  drivers/gpu/drm/i915/intel_lrc.c           |   3 +-
>  drivers/gpu/drm/i915/intel_ringbuffer.c    |  11 +--
>  drivers/gpu/drm/i915/intel_ringbuffer.h    |   1 +
>  10 files changed, 143 insertions(+), 46 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index 8f576b443ff6..8e37315443f3 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -768,7 +768,7 @@ static int i915_gem_request_info(struct seq_file *m, void *data)
>  			if (req->pid)
>  				task = pid_task(req->pid, PIDTYPE_PID);
>  			seq_printf(m, "    %x @ %d: %s [%d]\n",
> -				   req->seqno,
> +				   req->fence.seqno,
>  				   (int) (jiffies - req->emitted_jiffies),
>  				   task ? task->comm : "<unknown>",
>  				   task ? task->pid : -1);
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 34b2f151cdfc..512b15153ac6 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -24,6 +24,98 @@
>  
>  #include "i915_drv.h"
>  
> +static inline struct drm_i915_gem_request *
> +to_i915_request(struct fence *fence)
> +{
> +	return container_of(fence, struct drm_i915_gem_request, fence);
> +}
> +
> +static const char *i915_fence_get_driver_name(struct fence *fence)
> +{
> +	return "i915";
> +}
> +
> +static const char *i915_fence_get_timeline_name(struct fence *fence)
> +{
> +	/* Timelines are bound by eviction to a VM. However, since
> +	 * we only have a global seqno at the moment, we only have
> +	 * a single timeline. Note that each timeline will have
> +	 * multiple execution contexts (fence contexts) as we allow
> +	 * engines within a single timeline to execute in parallel.
> +	 */
> +	return "global";
> +}
> +
> +static bool i915_fence_signaled(struct fence *fence)
> +{
> +	return i915_gem_request_completed(to_i915_request(fence));
> +}
> +
> +static bool i915_fence_enable_signaling(struct fence *fence)
> +{
> +	if (i915_fence_signaled(fence))
> +		return false;
> +
> +	return intel_engine_enable_signaling(to_i915_request(fence)) == 0;
> +}
> +
> +static signed long i915_fence_wait(struct fence *fence,
> +				   bool interruptible,
> +				   signed long timeout_jiffies)
> +{
> +	s64 timeout_ns, *timeout;
> +	int ret;
> +
> +	if (timeout_jiffies != MAX_SCHEDULE_TIMEOUT) {
> +		timeout_ns = jiffies_to_nsecs(timeout_jiffies);
> +		timeout = &timeout_ns;
> +	} else
> +		timeout = NULL;
> +
> +	ret = __i915_wait_request(to_i915_request(fence),
> +				  interruptible, timeout,
> +				  NULL);
> +	if (ret == -ETIME)
> +		return 0;
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	if (timeout_jiffies != MAX_SCHEDULE_TIMEOUT)
> +		timeout_jiffies = nsecs_to_jiffies(timeout_ns);
> +
> +	return timeout_jiffies;
> +}
> +
> +static void i915_fence_value_str(struct fence *fence, char *str, int size)
> +{
> +	snprintf(str, size, "%u", fence->seqno);
> +}
> +
> +static void i915_fence_timeline_value_str(struct fence *fence, char *str,
> +					  int size)
> +{
> +	snprintf(str, size, "%u",
> +		 intel_engine_get_seqno(to_i915_request(fence)->engine));
> +}
> +
> +static void i915_fence_release(struct fence *fence)
> +{
> +	struct drm_i915_gem_request *req = to_i915_request(fence);
> +	kmem_cache_free(req->i915->requests, req);
> +}
> +
> +static const struct fence_ops i915_fence_ops = {
> +	.get_driver_name = i915_fence_get_driver_name,
> +	.get_timeline_name = i915_fence_get_timeline_name,
> +	.enable_signaling = i915_fence_enable_signaling,
> +	.signaled = i915_fence_signaled,
> +	.wait = i915_fence_wait,
> +	.release = i915_fence_release,
> +	.fence_value_str = i915_fence_value_str,
> +	.timeline_value_str = i915_fence_timeline_value_str,
> +};
> +
>  static int i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
>  {
>  	if (__i915_terminally_wedged(reset_counter))
> @@ -117,6 +209,7 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
>  	struct drm_i915_private *dev_priv = engine->i915;
>  	unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
>  	struct drm_i915_gem_request *req;
> +	u32 seqno;
>  	int ret;
>  
>  	if (!req_out)
> @@ -136,11 +229,17 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
>  	if (req == NULL)
>  		return -ENOMEM;
>  
> -	ret = i915_gem_get_seqno(dev_priv, &req->seqno);
> +	ret = i915_gem_get_seqno(dev_priv, &seqno);
>  	if (ret)
>  		goto err;
>  
> -	kref_init(&req->ref);
> +	spin_lock_init(&req->lock);
> +	fence_init(&req->fence,
> +		   &i915_fence_ops,
> +		   &req->lock,
> +		   engine->fence_context,
> +		   seqno);
> +
>  	req->i915 = dev_priv;
>  	req->engine = engine;
>  	req->reset_counter = reset_counter;
> @@ -376,7 +475,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>  	 */
>  	request->emitted_jiffies = jiffies;
>  	request->previous_seqno = engine->last_submitted_seqno;
> -	smp_store_mb(engine->last_submitted_seqno, request->seqno);
> +	smp_store_mb(engine->last_submitted_seqno, request->fence.seqno);
>  	list_add_tail(&request->list, &engine->request_list);
>  
>  	/* Record the position of the start of the request so that
> @@ -543,7 +642,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
>  	if (i915_spin_request(req, state, 5))
>  		goto complete;
>  
> -	intel_wait_init(&wait, req->seqno);
> +	intel_wait_init(&wait, req->fence.seqno);
>  	set_current_state(state);
>  	if (intel_engine_add_wait(req->engine, &wait))
>  		/* In order to check that we haven't missed the interrupt
> @@ -609,7 +708,7 @@ complete:
>  			*timeout = 0;
>  	}
>  
> -	if (rps && req->seqno == req->engine->last_submitted_seqno) {
> +	if (rps && req->fence.seqno == req->engine->last_submitted_seqno) {
>  		/* The GPU is now idle and this client has stalled.
>  		 * Since no other client has submitted a request in the
>  		 * meantime, assume that this client is the only one
> @@ -650,10 +749,3 @@ int i915_wait_request(struct drm_i915_gem_request *req)
>  
>  	return 0;
>  }
> -
> -void i915_gem_request_free(struct kref *req_ref)
> -{
> -	struct drm_i915_gem_request *req =
> -	       	container_of(req_ref, typeof(*req), ref);
> -	kmem_cache_free(req->i915->requests, req);
> -}
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
> index 166e0733d2d8..248aec2c09b7 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.h
> +++ b/drivers/gpu/drm/i915/i915_gem_request.h
> @@ -25,6 +25,8 @@
>  #ifndef I915_GEM_REQUEST_H
>  #define I915_GEM_REQUEST_H
>  
> +#include <linux/fence.h>
> +
>  /**
>   * Request queue structure.
>   *
> @@ -36,11 +38,11 @@
>   * emission time to be associated with the request for tracking how far ahead
>   * of the GPU the submission is.
>   *
> - * The requests are reference counted, so upon creation they should have an
> - * initial reference taken using kref_init
> + * The requests are reference counted.
>   */
>  struct drm_i915_gem_request {
> -	struct kref ref;
> +	struct fence fence;
> +	spinlock_t lock;
>  
>  	/** On Which ring this request was generated */
>  	struct drm_i915_private *i915;
> @@ -68,12 +70,6 @@ struct drm_i915_gem_request {
>  	 */
>  	u32 previous_seqno;
>  
> -	/** GEM sequence number associated with this request,
> -	 * when the HWS breadcrumb is equal or greater than this the GPU
> -	 * has finished processing this request.
> -	 */
> -	u32 seqno;
> -
>  	/** Position in the ringbuffer of the start of the request */
>  	u32 head;
>  
> @@ -152,7 +148,6 @@ __request_to_i915(const struct drm_i915_gem_request *request)
>  struct drm_i915_gem_request * __must_check
>  i915_gem_request_alloc(struct intel_engine_cs *engine,
>  		       struct i915_gem_context *ctx);
> -void i915_gem_request_free(struct kref *req_ref);
>  int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
>  				   struct drm_file *file);
>  void i915_gem_request_retire_upto(struct drm_i915_gem_request *req);
> @@ -160,7 +155,7 @@ void i915_gem_request_retire_upto(struct drm_i915_gem_request *req);
>  static inline uint32_t
>  i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
>  {
> -	return req ? req->seqno : 0;
> +	return req ? req->fence.seqno : 0;
>  }
>  
>  static inline struct intel_engine_cs *
> @@ -170,17 +165,23 @@ i915_gem_request_get_engine(struct drm_i915_gem_request *req)
>  }
>  
>  static inline struct drm_i915_gem_request *
> +to_request(struct fence *fence)
> +{
> +	/* We assume that NULL fence/request are interoperable */
> +	BUILD_BUG_ON(offsetof(struct drm_i915_gem_request, fence) != 0);
> +	return container_of(fence, struct drm_i915_gem_request, fence);

For future-proofing to make sure we don't accidentally call this on a
foreign fence:

	BUG_ON(fence->ops != &i915_fence_ops);

BUG_ON since I don't want to splatter all callers with handlers for this.
And if we ever get this wrong, debugging it with just some random
corruption would be serious pain, so I think the overhead is justified.
-Daniel
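
A sketch of the helper with that guard folded in; the extra NULL test keeps
the NULL-fence/NULL-request interoperability promised by the comment intact:

	static inline struct drm_i915_gem_request *
	to_request(struct fence *fence)
	{
		/* We assume that NULL fence/request are interoperable */
		BUILD_BUG_ON(offsetof(struct drm_i915_gem_request, fence) != 0);
		/* Catch foreign fences here, not as random corruption later */
		BUG_ON(fence && fence->ops != &i915_fence_ops);
		return container_of(fence, struct drm_i915_gem_request, fence);
	}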

> +}
> +
> +static inline struct drm_i915_gem_request *
>  i915_gem_request_reference(struct drm_i915_gem_request *req)
>  {
> -	if (req)
> -		kref_get(&req->ref);
> -	return req;
> +	return to_request(fence_get(&req->fence));
>  }
>  
>  static inline void
>  i915_gem_request_unreference(struct drm_i915_gem_request *req)
>  {
> -	kref_put(&req->ref, i915_gem_request_free);
> +	fence_put(&req->fence);
>  }
>  
>  static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
> @@ -230,7 +231,7 @@ static inline bool i915_gem_request_started(const struct drm_i915_gem_request *r
>  static inline bool i915_gem_request_completed(const struct drm_i915_gem_request *req)
>  {
>  	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
> -				 req->seqno);
> +				 req->fence.seqno);
>  }
>  
>  bool __i915_spin_request(const struct drm_i915_gem_request *request,
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index 3ba5302ce19f..5332bd32c555 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -1181,7 +1181,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
>  			}
>  
>  			erq = &error->ring[i].requests[count++];
> -			erq->seqno = request->seqno;
> +			erq->seqno = request->fence.seqno;
>  			erq->jiffies = request->emitted_jiffies;
>  			erq->tail = request->postfix;
>  		}
> diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
> index ac72451c571c..629111d42ce0 100644
> --- a/drivers/gpu/drm/i915/i915_guc_submission.c
> +++ b/drivers/gpu/drm/i915/i915_guc_submission.c
> @@ -538,7 +538,7 @@ static void guc_add_workqueue_item(struct i915_guc_client *gc,
>  							     rq->engine);
>  
>  	wqi->ring_tail = tail << WQ_RING_TAIL_SHIFT;
> -	wqi->fence_id = rq->seqno;
> +	wqi->fence_id = rq->fence.seqno;
>  
>  	kunmap_atomic(base);
>  }
> @@ -578,7 +578,7 @@ int i915_guc_submit(struct drm_i915_gem_request *rq)
>  		client->b_fail += 1;
>  
>  	guc->submissions[engine_id] += 1;
> -	guc->last_seqno[engine_id] = rq->seqno;
> +	guc->last_seqno[engine_id] = rq->fence.seqno;
>  
>  	return b_ret;
>  }
> diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
> index f59cf07184ae..0296a77b586a 100644
> --- a/drivers/gpu/drm/i915/i915_trace.h
> +++ b/drivers/gpu/drm/i915/i915_trace.h
> @@ -465,7 +465,7 @@ TRACE_EVENT(i915_gem_ring_sync_to,
>  			   __entry->dev = from->i915->dev->primary->index;
>  			   __entry->sync_from = from->id;
>  			   __entry->sync_to = to_req->engine->id;
> -			   __entry->seqno = i915_gem_request_get_seqno(req);
> +			   __entry->seqno = req->fence.seqno;
>  			   ),
>  
>  	    TP_printk("dev=%u, sync-from=%u, sync-to=%u, seqno=%u",
> @@ -488,9 +488,9 @@ TRACE_EVENT(i915_gem_ring_dispatch,
>  	    TP_fast_assign(
>  			   __entry->dev = req->i915->dev->primary->index;
>  			   __entry->ring = req->engine->id;
> -			   __entry->seqno = req->seqno;
> +			   __entry->seqno = req->fence.seqno;
>  			   __entry->flags = flags;
> -			   intel_engine_enable_signaling(req);
> +			   fence_enable_sw_signaling(&req->fence);
>  			   ),
>  
>  	    TP_printk("dev=%u, ring=%u, seqno=%u, flags=%x",
> @@ -533,7 +533,7 @@ DECLARE_EVENT_CLASS(i915_gem_request,
>  	    TP_fast_assign(
>  			   __entry->dev = req->i915->dev->primary->index;
>  			   __entry->ring = req->engine->id;
> -			   __entry->seqno = req->seqno;
> +			   __entry->seqno = req->fence.seqno;
>  			   ),
>  
>  	    TP_printk("dev=%u, ring=%u, seqno=%u",
> @@ -595,7 +595,7 @@ TRACE_EVENT(i915_gem_request_wait_begin,
>  	    TP_fast_assign(
>  			   __entry->dev = req->i915->dev->primary->index;
>  			   __entry->ring = req->engine->id;
> -			   __entry->seqno = req->seqno;
> +			   __entry->seqno = req->fence.seqno;
>  			   __entry->blocking =
>  				     mutex_is_locked(&req->i915->dev->struct_mutex);
>  			   ),
> diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
> index dc65a007fa20..05f62f706897 100644
> --- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
> +++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
> @@ -396,6 +396,7 @@ static int intel_breadcrumbs_signaler(void *arg)
>  			 */
>  			intel_engine_remove_wait(engine,
>  						 &request->signaling.wait);
> +			fence_signal(&request->fence);
>  
>  			/* Find the next oldest signal. Note that as we have
>  			 * not been holding the lock, another client may
> @@ -444,7 +445,7 @@ int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
>  	}
>  
>  	request->signaling.wait.task = b->signaler;
> -	request->signaling.wait.seqno = request->seqno;
> +	request->signaling.wait.seqno = request->fence.seqno;
>  	i915_gem_request_reference(request);
>  
>  	/* First add ourselves into the list of waiters, but register our
> @@ -466,8 +467,8 @@ int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
>  	p = &b->signals.rb_node;
>  	while (*p) {
>  		parent = *p;
> -		if (i915_seqno_passed(request->seqno,
> -				      to_signal(parent)->seqno)) {
> +		if (i915_seqno_passed(request->fence.seqno,
> +				      to_signal(parent)->fence.seqno)) {
>  			p = &parent->rb_right;
>  			first = false;
>  		} else
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 0742a849acce..c7a9ebdb0811 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1731,7 +1731,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
>  				intel_hws_seqno_address(request->engine) |
>  				MI_FLUSH_DW_USE_GTT);
>  	intel_logical_ring_emit(ringbuf, 0);
> -	intel_logical_ring_emit(ringbuf, request->seqno);
> +	intel_logical_ring_emit(ringbuf, request->fence.seqno);
>  	intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
>  	intel_logical_ring_emit(ringbuf, MI_NOOP);
>  	return intel_logical_ring_advance_and_submit(request);
> @@ -1964,6 +1964,7 @@ logical_ring_setup(struct drm_device *dev, enum intel_engine_id id)
>  	engine->exec_id = info->exec_id;
>  	engine->guc_id = info->guc_id;
>  	engine->mmio_base = info->mmio_base;
> +	engine->fence_context = fence_context_alloc(1);
>  
>  	engine->i915 = dev_priv;
>  
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index 327ad7fdf118..c3d6345aa2c1 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -1266,7 +1266,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
>  					   PIPE_CONTROL_CS_STALL);
>  		intel_ring_emit(signaller, lower_32_bits(gtt_offset));
>  		intel_ring_emit(signaller, upper_32_bits(gtt_offset));
> -		intel_ring_emit(signaller, signaller_req->seqno);
> +		intel_ring_emit(signaller, signaller_req->fence.seqno);
>  		intel_ring_emit(signaller, 0);
>  		intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
>  					   MI_SEMAPHORE_TARGET(waiter->hw_id));
> @@ -1304,7 +1304,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
>  		intel_ring_emit(signaller, lower_32_bits(gtt_offset) |
>  					   MI_FLUSH_DW_USE_GTT);
>  		intel_ring_emit(signaller, upper_32_bits(gtt_offset));
> -		intel_ring_emit(signaller, signaller_req->seqno);
> +		intel_ring_emit(signaller, signaller_req->fence.seqno);
>  		intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
>  					   MI_SEMAPHORE_TARGET(waiter->hw_id));
>  		intel_ring_emit(signaller, 0);
> @@ -1337,7 +1337,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
>  		if (i915_mmio_reg_valid(mbox_reg)) {
>  			intel_ring_emit(signaller, MI_LOAD_REGISTER_IMM(1));
>  			intel_ring_emit_reg(signaller, mbox_reg);
> -			intel_ring_emit(signaller, signaller_req->seqno);
> +			intel_ring_emit(signaller, signaller_req->fence.seqno);
>  		}
>  	}
>  
> @@ -1373,7 +1373,7 @@ gen6_add_request(struct drm_i915_gem_request *req)
>  	intel_ring_emit(engine, MI_STORE_DWORD_INDEX);
>  	intel_ring_emit(engine,
>  			I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
> -	intel_ring_emit(engine, req->seqno);
> +	intel_ring_emit(engine, req->fence.seqno);
>  	intel_ring_emit(engine, MI_USER_INTERRUPT);
>  	__intel_ring_advance(engine);
>  
> @@ -1623,7 +1623,7 @@ i9xx_add_request(struct drm_i915_gem_request *req)
>  	intel_ring_emit(engine, MI_STORE_DWORD_INDEX);
>  	intel_ring_emit(engine,
>  		       	I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
> -	intel_ring_emit(engine, req->seqno);
> +	intel_ring_emit(engine, req->fence.seqno);
>  	intel_ring_emit(engine, MI_USER_INTERRUPT);
>  	__intel_ring_advance(engine);
>  
> @@ -2092,6 +2092,7 @@ static int intel_init_ring_buffer(struct drm_device *dev,
>  	WARN_ON(engine->buffer);
>  
>  	engine->i915 = dev_priv;
> +	engine->fence_context = fence_context_alloc(1);
>  	INIT_LIST_HEAD(&engine->active_list);
>  	INIT_LIST_HEAD(&engine->request_list);
>  	INIT_LIST_HEAD(&engine->execlist_queue);
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 6017367e94fb..b041fb6a6d01 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -158,6 +158,7 @@ struct intel_engine_cs {
>  	unsigned int exec_id;
>  	unsigned int hw_id;
>  	unsigned int guc_id; /* XXX same as hw_id? */
> +	unsigned fence_context;
>  	u32		mmio_base;
>  	struct intel_ringbuffer *buffer;
>  	struct list_head buffers;
> -- 
> 2.8.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 87+ messages in thread
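
One payoff of the patch above, sketched under the assumption of this era's
linux/fence.h API (fence_get/fence_put/fence_wait_timeout): engine-agnostic
code can now wait on an i915 request without knowing it is one.

	#include <linux/fence.h>
	#include <linux/jiffies.h>

	/* Wait up to 100ms on any fence, i915 request or otherwise. */
	static int example_wait_on_fence(struct fence *fence)
	{
		long remaining;

		fence = fence_get(fence);	/* pin across the wait */
		remaining = fence_wait_timeout(fence, true,
					       msecs_to_jiffies(100));
		fence_put(fence);

		if (remaining < 0)		/* e.g. -ERESTARTSYS */
			return remaining;
		return remaining ? 0 : -ETIMEDOUT;
	}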

* Re: [PATCH 14/62] drm/i915: Rename request reference/unreference to get/put
  2016-06-03 16:36 ` [PATCH 14/62] drm/i915: Rename request reference/unreference to get/put Chris Wilson
@ 2016-06-08  9:15   ` Daniel Vetter
  0 siblings, 0 replies; 87+ messages in thread
From: Daniel Vetter @ 2016-06-08  9:15 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

On Fri, Jun 03, 2016 at 05:36:39PM +0100, Chris Wilson wrote:
> Now that we derive requests from struct fence, swap over to its
> nomenclature for references. It's shorter and more idiomatic across the
> kernel.
> 
> s/i915_gem_request_reference/i915_gem_request_get/
> s/i915_gem_request_unreference/i915_gem_request_put/
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Assuming it compiles:

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
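
The shape of the change, as a hypothetical sketch (struct boost_work is a
stand-in, not the real struct from intel_pm.c): _get() returns its argument,
so taking a reference folds into the assignment, and _put() mirrors
fence_put().

	struct boost_work {				/* stand-in only */
		struct drm_i915_gem_request *req;
	};

	static void boost_hold(struct boost_work *boost,
			       struct drm_i915_gem_request *req)
	{
		boost->req = i915_gem_request_get(req);	/* was _reference() */
	}

	static void boost_release(struct boost_work *boost)
	{
		i915_gem_request_put(boost->req);	/* was _unreference() */
		boost->req = NULL;
	}
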
> ---
>  drivers/gpu/drm/i915/i915_gem.c          | 14 +++++++-------
>  drivers/gpu/drm/i915/i915_gem_request.c  |  2 +-
>  drivers/gpu/drm/i915/i915_gem_request.h  |  8 ++++----
>  drivers/gpu/drm/i915/i915_gem_userptr.c  |  4 ++--
>  drivers/gpu/drm/i915/intel_breadcrumbs.c |  4 ++--
>  drivers/gpu/drm/i915/intel_display.c     |  5 ++---
>  drivers/gpu/drm/i915/intel_lrc.c         | 10 +++++-----
>  drivers/gpu/drm/i915/intel_pm.c          |  5 ++---
>  8 files changed, 25 insertions(+), 27 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 95782cf85dcc..5f232fb1a2a4 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1188,7 +1188,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
>  		if (req == NULL)
>  			return 0;
>  
> -		requests[n++] = i915_gem_request_reference(req);
> +		requests[n++] = i915_gem_request_get(req);
>  	} else {
>  		for (i = 0; i < I915_NUM_ENGINES; i++) {
>  			struct drm_i915_gem_request *req;
> @@ -1197,7 +1197,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
>  			if (req == NULL)
>  				continue;
>  
> -			requests[n++] = i915_gem_request_reference(req);
> +			requests[n++] = i915_gem_request_get(req);
>  		}
>  	}
>  
> @@ -1210,7 +1210,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
>  	for (i = 0; i < n; i++) {
>  		if (ret == 0)
>  			i915_gem_object_retire_request(obj, requests[i]);
> -		i915_gem_request_unreference(requests[i]);
> +		i915_gem_request_put(requests[i]);
>  	}
>  
>  	return ret;
> @@ -2532,7 +2532,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>  		if (obj->last_read_req[i] == NULL)
>  			continue;
>  
> -		req[n++] = i915_gem_request_reference(obj->last_read_req[i]);
> +		req[n++] = i915_gem_request_get(obj->last_read_req[i]);
>  	}
>  
>  	mutex_unlock(&dev->struct_mutex);
> @@ -2542,7 +2542,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>  			ret = __i915_wait_request(req[i], true,
>  						  args->timeout_ns > 0 ? &args->timeout_ns : NULL,
>  						  to_rps_client(file));
> -		i915_gem_request_unreference(req[i]);
> +		i915_gem_request_put(req[i]);
>  	}
>  	return ret;
>  
> @@ -3548,14 +3548,14 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
>  		target = request;
>  	}
>  	if (target)
> -		i915_gem_request_reference(target);
> +		i915_gem_request_get(target);
>  	spin_unlock(&file_priv->mm.lock);
>  
>  	if (target == NULL)
>  		return 0;
>  
>  	ret = __i915_wait_request(target, true, NULL, NULL);
> -	i915_gem_request_unreference(target);
> +	i915_gem_request_put(target);
>  
>  	return ret;
>  }
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 512b15153ac6..2ecaf9fa936a 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -365,7 +365,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
>  	}
>  
>  	i915_gem_context_unreference(request->ctx);
> -	i915_gem_request_unreference(request);
> +	i915_gem_request_put(request);
>  }
>  
>  void i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
> index 248aec2c09b7..b1bc96c9e31d 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.h
> +++ b/drivers/gpu/drm/i915/i915_gem_request.h
> @@ -173,13 +173,13 @@ to_request(struct fence *fence)
>  }
>  
>  static inline struct drm_i915_gem_request *
> -i915_gem_request_reference(struct drm_i915_gem_request *req)
> +i915_gem_request_get(struct drm_i915_gem_request *req)
>  {
>  	return to_request(fence_get(&req->fence));
>  }
>  
>  static inline void
> -i915_gem_request_unreference(struct drm_i915_gem_request *req)
> +i915_gem_request_put(struct drm_i915_gem_request *req)
>  {
>  	fence_put(&req->fence);
>  }
> @@ -188,10 +188,10 @@ static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
>  					   struct drm_i915_gem_request *src)
>  {
>  	if (src)
> -		i915_gem_request_reference(src);
> +		i915_gem_request_get(src);
>  
>  	if (*pdst)
> -		i915_gem_request_unreference(*pdst);
> +		i915_gem_request_put(*pdst);
>  
>  	*pdst = src;
>  }
> diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> index 2314c88323e3..ba16e044fac6 100644
> --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> @@ -78,7 +78,7 @@ static void wait_rendering(struct drm_i915_gem_object *obj)
>  		if (req == NULL)
>  			continue;
>  
> -		requests[n++] = i915_gem_request_reference(req);
> +		requests[n++] = i915_gem_request_get(req);
>  	}
>  
>  	mutex_unlock(&dev->struct_mutex);
> @@ -89,7 +89,7 @@ static void wait_rendering(struct drm_i915_gem_object *obj)
>  	mutex_lock(&dev->struct_mutex);
>  
>  	for (i = 0; i < n; i++)
> -		i915_gem_request_unreference(requests[i]);
> +		i915_gem_request_put(requests[i]);
>  }
>  
>  static void cancel_userptr(struct work_struct *work)
> diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
> index 05f62f706897..1d60149833e6 100644
> --- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
> +++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
> @@ -413,7 +413,7 @@ static int intel_breadcrumbs_signaler(void *arg)
>  			rb_erase(&request->signaling.node, &b->signals);
>  			spin_unlock(&b->lock);
>  
> -			i915_gem_request_unreference(request);
> +			i915_gem_request_put(request);
>  		} else {
>  			if (kthread_should_stop())
>  				break;
> @@ -446,7 +446,7 @@ int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
>  
>  	request->signaling.wait.task = b->signaler;
>  	request->signaling.wait.seqno = request->fence.seqno;
> -	i915_gem_request_reference(request);
> +	i915_gem_request_get(request);
>  
>  	/* First add ourselves into the list of waiters, but register our
>  	 * bottom-half as the signaller thread. As per usual, only the oldest
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index 14e41fdd8112..9b257126fa22 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -11005,11 +11005,10 @@ static void intel_unpin_work_fn(struct work_struct *__work)
>  	mutex_lock(&dev->struct_mutex);
>  	intel_unpin_fb_obj(work->old_fb, primary->state->rotation);
>  	drm_gem_object_unreference(&work->pending_flip_obj->base);
> -
> -	if (work->flip_queued_req)
> -		i915_gem_request_assign(&work->flip_queued_req, NULL);
>  	mutex_unlock(&dev->struct_mutex);
>  
> +	i915_gem_request_put(work->flip_queued_req);
> +
>  	intel_frontbuffer_flip_complete(dev, to_intel_plane(primary)->frontbuffer_bit);
>  	intel_fbc_post_update(crtc);
>  	drm_framebuffer_unreference(work->old_fb);
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index c7a9ebdb0811..a25177016fb3 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -438,7 +438,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *engine)
>  			 * will update tail past first request's workload */
>  			cursor->elsp_submitted = req0->elsp_submitted;
>  			list_del(&req0->execlist_link);
> -			i915_gem_request_unreference(req0);
> +			i915_gem_request_put(req0);
>  			req0 = cursor;
>  		} else {
>  			req1 = cursor;
> @@ -489,7 +489,7 @@ execlists_check_remove_request(struct intel_engine_cs *engine, u32 ctx_id)
>  		return 0;
>  
>  	list_del(&head_req->execlist_link);
> -	i915_gem_request_unreference(head_req);
> +	i915_gem_request_put(head_req);
>  
>  	return 1;
>  }
> @@ -610,11 +610,11 @@ static void execlists_context_queue(struct drm_i915_gem_request *request)
>  			WARN(tail_req->elsp_submitted != 0,
>  				"More than 2 already-submitted reqs queued\n");
>  			list_del(&tail_req->execlist_link);
> -			i915_gem_request_unreference(tail_req);
> +			i915_gem_request_put(tail_req);
>  		}
>  	}
>  
> -	i915_gem_request_reference(request);
> +	i915_gem_request_get(request);
>  	list_add_tail(&request->execlist_link, &engine->execlist_queue);
>  	request->ctx_hw_id = request->ctx->hw_id;
>  	if (num_elements == 0)
> @@ -888,7 +888,7 @@ void intel_execlists_cancel_requests(struct intel_engine_cs *engine)
>  
>  	list_for_each_entry_safe(req, tmp, &cancel_list, execlist_link) {
>  		list_del(&req->execlist_link);
> -		i915_gem_request_unreference(req);
> +		i915_gem_request_put(req);
>  	}
>  }
>  
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 923ec6884a5e..ee247063c1b2 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -7696,7 +7696,7 @@ static void __intel_rps_boost_work(struct work_struct *work)
>  	if (!i915_gem_request_completed(req))
>  		gen6_rps_boost(req->i915, NULL, req->emitted_jiffies);
>  
> -	i915_gem_request_unreference(req);
> +	i915_gem_request_put(req);
>  	kfree(boost);
>  }
>  
> @@ -7714,8 +7714,7 @@ void intel_queue_rps_boost_for_request(struct drm_i915_gem_request *req)
>  	if (boost == NULL)
>  		return;
>  
> -	i915_gem_request_reference(req);
> -	boost->req = req;
> +	boost->req = i915_gem_request_get(req);
>  
>  	INIT_WORK(&boost->work, __intel_rps_boost_work);
>  	queue_work(req->i915->wq, &boost->work);
> -- 
> 2.8.1
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 27/62] drm/i915: Rename request->ringbuf to request->ring
  2016-06-06 13:44   ` Tvrtko Ursulin
@ 2016-06-08  9:18     ` Daniel Vetter
  0 siblings, 0 replies; 87+ messages in thread
From: Daniel Vetter @ 2016-06-08  9:18 UTC (permalink / raw)
  To: Tvrtko Ursulin; +Cc: intel-gfx

On Mon, Jun 06, 2016 at 02:44:41PM +0100, Tvrtko Ursulin wrote:
> 
> On 03/06/16 17:36, Chris Wilson wrote:
> > Now that we have disambiguated ring and engine, we can use the clearer
> > and more consistent name for the intel_ringbuffer pointer in the
> > request.
> 
> This one needs all the stakeholders to agree about the rename. As before, I
> am not convinced it is better/worth it.

If we've indeed succeeded in eradicating all instances of calling an
intel_engine_cs a ring, then I think this makes sense.
-Daniel
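
After the rename the split is: "engine" is always the intel_engine_cs (the
hardware unit), "ring" is always the intel_ringbuffer (its command stream).
A hypothetical emission helper to illustrate the idiom:

	static int example_emit(struct drm_i915_gem_request *req)
	{
		struct intel_engine_cs *engine = req->engine;
		struct intel_ringbuffer *ring = req->ring;
		int ret;

		ret = intel_ring_begin(req, 2);
		if (ret)
			return ret;

		/* e.g. pad the render engine differently, for illustration */
		intel_ring_emit(ring, engine->id == RCS ? MI_FLUSH : MI_NOOP);
		intel_ring_emit(ring, MI_NOOP);
		intel_ring_advance(ring);

		return 0;
	}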

> 
> Regards,
> 
> Tvrtko
> 
> 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >   drivers/gpu/drm/i915/i915_gem_context.c    |  4 +-
> >   drivers/gpu/drm/i915/i915_gem_execbuffer.c |  4 +-
> >   drivers/gpu/drm/i915/i915_gem_gtt.c        |  6 +-
> >   drivers/gpu/drm/i915/i915_gem_request.c    | 16 +++---
> >   drivers/gpu/drm/i915/i915_gem_request.h    |  3 +-
> >   drivers/gpu/drm/i915/i915_gpu_error.c      | 20 +++----
> >   drivers/gpu/drm/i915/intel_display.c       | 10 ++--
> >   drivers/gpu/drm/i915/intel_lrc.c           | 57 +++++++++---------
> >   drivers/gpu/drm/i915/intel_mocs.c          | 36 ++++++------
> >   drivers/gpu/drm/i915/intel_overlay.c       |  8 +--
> >   drivers/gpu/drm/i915/intel_ringbuffer.c    | 92 +++++++++++++++---------------
> >   11 files changed, 126 insertions(+), 130 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> > index 899731f9a2c4..a7911f39f416 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_context.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> > @@ -514,7 +514,7 @@ static inline int
> >   mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
> >   {
> >   	struct drm_i915_private *dev_priv = req->i915;
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	u32 flags = hw_flags | MI_MM_SPACE_GTT;
> >   	const int num_rings =
> >   		/* Use an extended w/a on ivb+ if signalling from other rings */
> > @@ -614,7 +614,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
> >   static int remap_l3(struct drm_i915_gem_request *req, int slice)
> >   {
> >   	u32 *remap_info = req->i915->l3_parity.remap_info[slice];
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int i, ret;
> > 
> >   	if (!remap_info)
> > diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> > index 99663e8429b3..246bd70c0c9f 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> > @@ -1140,7 +1140,7 @@ i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
> >   static int
> >   i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret, i;
> > 
> >   	if (!IS_GEN7(req->i915) || req->engine->id != RCS) {
> > @@ -1270,7 +1270,7 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
> > 
> >   	if (params->engine->id == RCS &&
> >   	    instp_mode != dev_priv->relative_constants_mode) {
> > -		struct intel_ringbuffer *ring = params->request->ringbuf;
> > +		struct intel_ringbuffer *ring = params->request->ring;
> > 
> >   		ret = intel_ring_begin(params->request, 4);
> >   		if (ret)
> > diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > index 4b4e3de58ad9..b0a644cede20 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > @@ -669,7 +669,7 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
> >   			  unsigned entry,
> >   			  dma_addr_t addr)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	BUG_ON(entry >= 4);
> > @@ -1660,7 +1660,7 @@ static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
> >   static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
> >   			 struct drm_i915_gem_request *req)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	/* NB: TLBs must be flushed and invalidated before a switch */
> > @@ -1699,7 +1699,7 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
> >   static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
> >   			  struct drm_i915_gem_request *req)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	/* NB: TLBs must be flushed and invalidated before a switch */
> > diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> > index 059ba88e182e..c6a7a7984f1f 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_request.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> > @@ -351,7 +351,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
> >   	 * Note this requires that we are always called in request
> >   	 * completion order.
> >   	 */
> > -	request->ringbuf->last_retired_head = request->postfix;
> > +	request->ring->last_retired_head = request->postfix;
> > 
> >   	i915_gem_request_remove_from_client(request);
> > 
> > @@ -415,7 +415,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
> >   			bool flush_caches)
> >   {
> >   	struct intel_engine_cs *engine;
> > -	struct intel_ringbuffer *ringbuf;
> > +	struct intel_ringbuffer *ring;
> >   	u32 request_start;
> >   	u32 reserved_tail;
> >   	int ret;
> > @@ -424,14 +424,14 @@ void __i915_add_request(struct drm_i915_gem_request *request,
> >   		return;
> > 
> >   	engine = request->engine;
> > -	ringbuf = request->ringbuf;
> > +	ring = request->ring;
> > 
> >   	/*
> >   	 * To ensure that this call will not fail, space for its emissions
> >   	 * should already have been reserved in the ring buffer. Let the ring
> >   	 * know that it is time to use that space up.
> >   	 */
> > -	request_start = intel_ring_get_tail(ringbuf);
> > +	request_start = intel_ring_get_tail(ring);
> >   	reserved_tail = request->reserved_space;
> >   	request->reserved_space = 0;
> > 
> > @@ -478,21 +478,21 @@ void __i915_add_request(struct drm_i915_gem_request *request,
> >   	 * GPU processing the request, we never over-estimate the
> >   	 * position of the head.
> >   	 */
> > -	request->postfix = intel_ring_get_tail(ringbuf);
> > +	request->postfix = intel_ring_get_tail(ring);
> > 
> >   	if (i915.enable_execlists)
> >   		ret = engine->emit_request(request);
> >   	else {
> >   		ret = engine->add_request(request);
> > 
> > -		request->tail = intel_ring_get_tail(ringbuf);
> > +		request->tail = intel_ring_get_tail(ring);
> >   	}
> >   	/* Not allowed to fail! */
> >   	WARN(ret, "emit|add_request failed: %d!\n", ret);
> >   	/* Sanity check that the reserved size was large enough. */
> > -	ret = intel_ring_get_tail(ringbuf) - request_start;
> > +	ret = intel_ring_get_tail(ring) - request_start;
> >   	if (ret < 0)
> > -		ret += ringbuf->size;
> > +		ret += ring->size;
> >   	WARN_ONCE(ret > reserved_tail,
> >   		  "Not enough space reserved (%d bytes) "
> >   		  "for adding the request (%d bytes)\n",
> > diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
> > index a3cac13ab9af..913565fbb0e3 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_request.h
> > +++ b/drivers/gpu/drm/i915/i915_gem_request.h
> > @@ -59,7 +59,7 @@ struct drm_i915_gem_request {
> >   	 */
> >   	struct i915_gem_context *ctx;
> >   	struct intel_engine_cs *engine;
> > -	struct intel_ringbuffer *ringbuf;
> > +	struct intel_ringbuffer *ring;
> >   	struct intel_signal_node signaling;
> > 
> >   	unsigned reset_counter;
> > @@ -86,7 +86,6 @@ struct drm_i915_gem_request {
> >   	/** Preallocate space in the ringbuffer for the emitting the request */
> >   	u32 reserved_space;
> > 
> > -
> >   	/**
> >   	 * Context related to the previous request.
> >   	 * As the contexts are accessed by the hardware until the switch is
> > diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> > index d1667aa640ef..b934986bb117 100644
> > --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> > +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> > @@ -1089,7 +1089,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
> >   		request = i915_gem_find_active_request(engine);
> >   		if (request) {
> >   			struct i915_address_space *vm;
> > -			struct intel_ringbuffer *rb;
> > +			struct intel_ringbuffer *ring;
> > 
> >   			vm = request->ctx && request->ctx->ppgtt ?
> >   				&request->ctx->ppgtt->base :
> > @@ -1107,7 +1107,7 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
> >   			if (HAS_BROKEN_CS_TLB(dev_priv))
> >   				error->ring[i].wa_batchbuffer =
> >   					i915_error_ggtt_object_create(dev_priv,
> > -							     engine->scratch.obj);
> > +								      engine->scratch.obj);
> > 
> >   			if (request->pid) {
> >   				struct task_struct *task;
> > @@ -1123,23 +1123,21 @@ static void i915_gem_record_rings(struct drm_i915_private *dev_priv,
> > 
> >   			error->simulated |= request->ctx->flags & CONTEXT_NO_ERROR_CAPTURE;
> > 
> > -			rb = request->ringbuf;
> > -			error->ring[i].cpu_ring_head = rb->head;
> > -			error->ring[i].cpu_ring_tail = rb->tail;
> > +			ring = request->ring;
> > +			error->ring[i].cpu_ring_head = ring->head;
> > +			error->ring[i].cpu_ring_tail = ring->tail;
> >   			error->ring[i].ringbuffer =
> >   				i915_error_ggtt_object_create(dev_priv,
> > -							      rb->obj);
> > +							      ring->obj);
> >   		}
> > 
> >   		error->ring[i].hws_page =
> >   			i915_error_ggtt_object_create(dev_priv,
> >   						      engine->status_page.obj);
> > 
> > -		if (engine->wa_ctx.obj) {
> > -			error->ring[i].wa_ctx =
> > -				i915_error_ggtt_object_create(dev_priv,
> > -							      engine->wa_ctx.obj);
> > -		}
> > +		error->ring[i].wa_ctx =
> > +			i915_error_ggtt_object_create(dev_priv,
> > +						      engine->wa_ctx.obj);
> > 
> >   		i915_gem_record_active_context(engine, error, &error->ring[i]);
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> > index 2cba91207d7e..2dafbfbc8134 100644
> > --- a/drivers/gpu/drm/i915/intel_display.c
> > +++ b/drivers/gpu/drm/i915/intel_display.c
> > @@ -11174,7 +11174,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
> >   				 struct drm_i915_gem_request *req,
> >   				 uint32_t flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> >   	u32 flip_mask;
> >   	int ret;
> > @@ -11208,7 +11208,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
> >   				 struct drm_i915_gem_request *req,
> >   				 uint32_t flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> >   	u32 flip_mask;
> >   	int ret;
> > @@ -11239,7 +11239,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
> >   				 struct drm_i915_gem_request *req,
> >   				 uint32_t flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	struct drm_i915_private *dev_priv = dev->dev_private;
> >   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> >   	uint32_t pf, pipesrc;
> > @@ -11277,7 +11277,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
> >   				 struct drm_i915_gem_request *req,
> >   				 uint32_t flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	struct drm_i915_private *dev_priv = dev->dev_private;
> >   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> >   	uint32_t pf, pipesrc;
> > @@ -11312,7 +11312,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
> >   				 struct drm_i915_gem_request *req,
> >   				 uint32_t flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> >   	uint32_t plane_bit = 0;
> >   	int len, ret;
> > diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> > index a1820d531e49..229545fc5b4a 100644
> > --- a/drivers/gpu/drm/i915/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/intel_lrc.c
> > @@ -692,7 +692,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
> >   			return ret;
> >   	}
> > 
> > -	request->ringbuf = ce->ringbuf;
> > +	request->ring = ce->ringbuf;
> > 
> >   	if (i915.enable_guc_submission) {
> >   		/*
> > @@ -748,11 +748,11 @@ err_unpin:
> >   static int
> >   intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
> >   {
> > -	struct intel_ringbuffer *ringbuf = request->ringbuf;
> > +	struct intel_ringbuffer *ring = request->ring;
> >   	struct intel_engine_cs *engine = request->engine;
> > 
> > -	intel_ring_advance(ringbuf);
> > -	request->tail = ringbuf->tail;
> > +	intel_ring_advance(ring);
> > +	request->tail = ring->tail;
> > 
> >   	/*
> >   	 * Here we add two extra NOOPs as padding to avoid
> > @@ -760,9 +760,9 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
> >   	 *
> >   	 * Caller must reserve WA_TAIL_DWORDS for us!
> >   	 */
> > -	intel_ring_emit(ringbuf, MI_NOOP);
> > -	intel_ring_emit(ringbuf, MI_NOOP);
> > -	intel_ring_advance(ringbuf);
> > +	intel_ring_emit(ring, MI_NOOP);
> > +	intel_ring_emit(ring, MI_NOOP);
> > +	intel_ring_advance(ring);
> > 
> >   	/* We keep the previous context alive until we retire the following
> >   	 * request. This ensures that the context object is still pinned
> > @@ -805,7 +805,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
> >   	struct drm_device       *dev = params->dev;
> >   	struct intel_engine_cs *engine = params->engine;
> >   	struct drm_i915_private *dev_priv = dev->dev_private;
> > -	struct intel_ringbuffer *ringbuf = params->ctx->engine[engine->id].ringbuf;
> > +	struct intel_ringbuffer *ring = params->request->ring;
> >   	u64 exec_start;
> >   	int instp_mode;
> >   	u32 instp_mask;
> > @@ -817,7 +817,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
> >   	case I915_EXEC_CONSTANTS_REL_GENERAL:
> >   	case I915_EXEC_CONSTANTS_ABSOLUTE:
> >   	case I915_EXEC_CONSTANTS_REL_SURFACE:
> > -		if (instp_mode != 0 && engine != &dev_priv->engine[RCS]) {
> > +		if (instp_mode != 0 && engine->id != RCS) {
> >   			DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
> >   			return -EINVAL;
> >   		}
> > @@ -846,17 +846,17 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
> >   	if (ret)
> >   		return ret;
> > 
> > -	if (engine == &dev_priv->engine[RCS] &&
> > +	if (engine->id == RCS &&
> >   	    instp_mode != dev_priv->relative_constants_mode) {
> >   		ret = intel_ring_begin(params->request, 4);
> >   		if (ret)
> >   			return ret;
> > 
> > -		intel_ring_emit(ringbuf, MI_NOOP);
> > -		intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
> > -		intel_ring_emit_reg(ringbuf, INSTPM);
> > -		intel_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
> > -		intel_ring_advance(ringbuf);
> > +		intel_ring_emit(ring, MI_NOOP);
> > +		intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
> > +		intel_ring_emit_reg(ring, INSTPM);
> > +		intel_ring_emit(ring, instp_mask << 16 | instp_mode);
> > +		intel_ring_advance(ring);
> > 
> >   		dev_priv->relative_constants_mode = instp_mode;
> >   	}
> > @@ -1011,7 +1011,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
> >   {
> >   	int ret, i;
> >   	struct intel_engine_cs *engine = req->engine;
> > -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	struct i915_workarounds *w = &req->i915->workarounds;
> > 
> >   	if (w->count == 0)
> > @@ -1026,14 +1026,14 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
> >   	if (ret)
> >   		return ret;
> > 
> > -	intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(w->count));
> > +	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(w->count));
> >   	for (i = 0; i < w->count; i++) {
> > -		intel_ring_emit_reg(ringbuf, w->reg[i].addr);
> > -		intel_ring_emit(ringbuf, w->reg[i].value);
> > +		intel_ring_emit_reg(ring, w->reg[i].addr);
> > +		intel_ring_emit(ring, w->reg[i].value);
> >   	}
> > -	intel_ring_emit(ringbuf, MI_NOOP);
> > +	intel_ring_emit(ring, MI_NOOP);
> > 
> > -	intel_ring_advance(ringbuf);
> > +	intel_ring_advance(ring);
> > 
> >   	engine->gpu_caches_dirty = true;
> >   	ret = logical_ring_flush_all_caches(req);
> > @@ -1506,7 +1506,7 @@ static int gen9_init_render_ring(struct intel_engine_cs *engine)
> >   static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
> >   {
> >   	struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
> >   	int i, ret;
> > 
> > @@ -1533,7 +1533,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
> >   static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
> >   			      u64 offset, unsigned dispatch_flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
> >   	int ret;
> > 
> > @@ -1590,8 +1590,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
> >   			   u32 invalidate_domains,
> >   			   u32 unused)
> >   {
> > -	struct intel_ringbuffer *ring = request->ringbuf;
> > -	struct intel_engine_cs *engine = ring->engine;
> > +	struct intel_ringbuffer *ring = request->ring;
> >   	uint32_t cmd;
> >   	int ret;
> > 
> > @@ -1610,7 +1609,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
> > 
> >   	if (invalidate_domains & I915_GEM_GPU_DOMAINS) {
> >   		cmd |= MI_INVALIDATE_TLB;
> > -		if (engine->id == VCS)
> > +		if (request->engine->id == VCS)
> >   			cmd |= MI_INVALIDATE_BSD;
> >   	}
> > 
> > @@ -1629,7 +1628,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
> >   				  u32 invalidate_domains,
> >   				  u32 flush_domains)
> >   {
> > -	struct intel_ringbuffer *ring = request->ringbuf;
> > +	struct intel_ringbuffer *ring = request->ring;
> >   	struct intel_engine_cs *engine = request->engine;
> >   	u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
> >   	bool vf_flush_wa = false;
> > @@ -1711,7 +1710,7 @@ static void bxt_a_seqno_barrier(struct intel_engine_cs *engine)
> > 
> >   static int gen8_emit_request(struct drm_i915_gem_request *request)
> >   {
> > -	struct intel_ringbuffer *ring = request->ringbuf;
> > +	struct intel_ringbuffer *ring = request->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
> > @@ -1734,7 +1733,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
> > 
> >   static int gen8_emit_request_render(struct drm_i915_gem_request *request)
> >   {
> > -	struct intel_ringbuffer *ring = request->ringbuf;
> > +	struct intel_ringbuffer *ring = request->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
> > diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
> > index 8513bf06d4df..4b44bbcfd7cd 100644
> > --- a/drivers/gpu/drm/i915/intel_mocs.c
> > +++ b/drivers/gpu/drm/i915/intel_mocs.c
> > @@ -231,7 +231,7 @@ int intel_mocs_init_engine(struct intel_engine_cs *engine)
> >   static int emit_mocs_control_table(struct drm_i915_gem_request *req,
> >   				   const struct drm_i915_mocs_table *table)
> >   {
> > -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	enum intel_engine_id engine = req->engine->id;
> >   	unsigned int index;
> >   	int ret;
> > @@ -243,11 +243,11 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
> >   	if (ret)
> >   		return ret;
> > 
> > -	intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
> > +	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
> > 
> >   	for (index = 0; index < table->size; index++) {
> > -		intel_ring_emit_reg(ringbuf, mocs_register(engine, index));
> > -		intel_ring_emit(ringbuf, table->table[index].control_value);
> > +		intel_ring_emit_reg(ring, mocs_register(engine, index));
> > +		intel_ring_emit(ring, table->table[index].control_value);
> >   	}
> > 
> >   	/*
> > @@ -259,12 +259,12 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
> >   	 * that value to all the used entries.
> >   	 */
> >   	for (; index < GEN9_NUM_MOCS_ENTRIES; index++) {
> > -		intel_ring_emit_reg(ringbuf, mocs_register(engine, index));
> > -		intel_ring_emit(ringbuf, table->table[0].control_value);
> > +		intel_ring_emit_reg(ring, mocs_register(engine, index));
> > +		intel_ring_emit(ring, table->table[0].control_value);
> >   	}
> > 
> > -	intel_ring_emit(ringbuf, MI_NOOP);
> > -	intel_ring_advance(ringbuf);
> > +	intel_ring_emit(ring, MI_NOOP);
> > +	intel_ring_advance(ring);
> > 
> >   	return 0;
> >   }
> > @@ -291,7 +291,7 @@ static inline u32 l3cc_combine(const struct drm_i915_mocs_table *table,
> >   static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
> >   				const struct drm_i915_mocs_table *table)
> >   {
> > -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	unsigned int i;
> >   	int ret;
> > 
> > @@ -302,18 +302,18 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
> >   	if (ret)
> >   		return ret;
> > 
> > -	intel_ring_emit(ringbuf,
> > +	intel_ring_emit(ring,
> >   			MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES / 2));
> > 
> >   	for (i = 0; i < table->size/2; i++) {
> > -		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
> > -		intel_ring_emit(ringbuf, l3cc_combine(table, 2*i, 2*i+1));
> > +		intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
> > +		intel_ring_emit(ring, l3cc_combine(table, 2*i, 2*i+1));
> >   	}
> > 
> >   	if (table->size & 0x01) {
> >   		/* Odd table size - 1 left over */
> > -		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
> > -		intel_ring_emit(ringbuf, l3cc_combine(table, 2*i, 0));
> > +		intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
> > +		intel_ring_emit(ring, l3cc_combine(table, 2*i, 0));
> >   		i++;
> >   	}
> > 
> > @@ -323,12 +323,12 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
> >   	 * they are reserved by the hardware.
> >   	 */
> >   	for (; i < GEN9_NUM_MOCS_ENTRIES / 2; i++) {
> > -		intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
> > -		intel_ring_emit(ringbuf, l3cc_combine(table, 0, 0));
> > +		intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
> > +		intel_ring_emit(ring, l3cc_combine(table, 0, 0));
> >   	}
> > 
> > -	intel_ring_emit(ringbuf, MI_NOOP);
> > -	intel_ring_advance(ringbuf);
> > +	intel_ring_emit(ring, MI_NOOP);
> > +	intel_ring_advance(ring);
> > 
> >   	return 0;
> >   }
> > diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
> > index be79c4497af5..f9c062fea39f 100644
> > --- a/drivers/gpu/drm/i915/intel_overlay.c
> > +++ b/drivers/gpu/drm/i915/intel_overlay.c
> > @@ -253,7 +253,7 @@ static int intel_overlay_on(struct intel_overlay *overlay)
> > 
> >   	overlay->active = true;
> > 
> > -	ring = req->ringbuf;
> > +	ring = req->ring;
> >   	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_ON);
> >   	intel_ring_emit(ring, overlay->flip_addr | OFC_UPDATE);
> >   	intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
> > @@ -295,7 +295,7 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
> >   		return ret;
> >   	}
> > 
> > -	ring = req->ringbuf;
> > +	ring = req->ring;
> >   	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
> >   	intel_ring_emit(ring, flip_addr);
> >   	intel_ring_advance(ring);
> > @@ -362,7 +362,7 @@ static int intel_overlay_off(struct intel_overlay *overlay)
> >   		return ret;
> >   	}
> > 
> > -	ring = req->ringbuf;
> > +	ring = req->ring;
> >   	/* wait for overlay to go idle */
> >   	intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
> >   	intel_ring_emit(ring, flip_addr);
> > @@ -438,7 +438,7 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
> >   			return ret;
> >   		}
> > 
> > -		ring = req->ringbuf;
> > +		ring = req->ring;
> >   		intel_ring_emit(ring,
> >   				MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
> >   		intel_ring_emit(ring, MI_NOOP);
> > diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> > index ace455b2b2d6..0f13e9900bd6 100644
> > --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> > +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> > @@ -70,7 +70,7 @@ gen2_render_ring_flush(struct drm_i915_gem_request *req,
> >   		       u32	invalidate_domains,
> >   		       u32	flush_domains)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	u32 cmd;
> >   	int ret;
> > 
> > @@ -97,7 +97,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
> >   		       u32	invalidate_domains,
> >   		       u32	flush_domains)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	u32 cmd;
> >   	int ret;
> > 
> > @@ -187,7 +187,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
> >   static int
> >   intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	u32 scratch_addr =
> >   	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
> >   	int ret;
> > @@ -224,7 +224,7 @@ static int
> >   gen6_render_ring_flush(struct drm_i915_gem_request *req,
> >   		       u32 invalidate_domains, u32 flush_domains)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	u32 scratch_addr =
> >   	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
> >   	u32 flags = 0;
> > @@ -277,7 +277,7 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
> >   static int
> >   gen7_render_ring_cs_stall_wa(struct drm_i915_gem_request *req)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(req, 4);
> > @@ -299,7 +299,7 @@ static int
> >   gen7_render_ring_flush(struct drm_i915_gem_request *req,
> >   		       u32 invalidate_domains, u32 flush_domains)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	u32 scratch_addr =
> >   	       	req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
> >   	u32 flags = 0;
> > @@ -364,7 +364,7 @@ static int
> >   gen8_emit_pipe_control(struct drm_i915_gem_request *req,
> >   		       u32 flags, u32 scratch_addr)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(req, 6);
> > @@ -680,7 +680,7 @@ err:
> > 
> >   static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	struct i915_workarounds *w = &req->i915->workarounds;
> >   	int ret, i;
> > 
> > @@ -1242,7 +1242,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
> >   			   unsigned int num_dwords)
> >   {
> >   #define MBOX_UPDATE_DWORDS 8
> > -	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
> > +	struct intel_ringbuffer *signaller = signaller_req->ring;
> >   	struct drm_i915_private *dev_priv = signaller_req->i915;
> >   	struct intel_engine_cs *waiter;
> >   	enum intel_engine_id id;
> > @@ -1282,7 +1282,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
> >   			   unsigned int num_dwords)
> >   {
> >   #define MBOX_UPDATE_DWORDS 6
> > -	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
> > +	struct intel_ringbuffer *signaller = signaller_req->ring;
> >   	struct drm_i915_private *dev_priv = signaller_req->i915;
> >   	struct intel_engine_cs *waiter;
> >   	enum intel_engine_id id;
> > @@ -1319,7 +1319,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
> >   static int gen6_signal(struct drm_i915_gem_request *signaller_req,
> >   		       unsigned int num_dwords)
> >   {
> > -	struct intel_ringbuffer *signaller = signaller_req->ringbuf;
> > +	struct intel_ringbuffer *signaller = signaller_req->ring;
> >   	struct drm_i915_private *dev_priv = signaller_req->i915;
> >   	struct intel_engine_cs *useless;
> >   	enum intel_engine_id id;
> > @@ -1363,7 +1363,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
> >   static int
> >   gen6_add_request(struct drm_i915_gem_request *req)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	if (req->engine->semaphore.signal)
> > @@ -1387,7 +1387,7 @@ static int
> >   gen8_render_add_request(struct drm_i915_gem_request *req)
> >   {
> >   	struct intel_engine_cs *engine = req->engine;
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	if (engine->semaphore.signal)
> > @@ -1432,7 +1432,7 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
> >   	       struct intel_engine_cs *signaller,
> >   	       u32 seqno)
> >   {
> > -	struct intel_ringbuffer *waiter = waiter_req->ringbuf;
> > +	struct intel_ringbuffer *waiter = waiter_req->ring;
> >   	struct drm_i915_private *dev_priv = waiter_req->i915;
> >   	struct i915_hw_ppgtt *ppgtt;
> >   	int ret;
> > @@ -1469,7 +1469,7 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
> >   	       struct intel_engine_cs *signaller,
> >   	       u32 seqno)
> >   {
> > -	struct intel_ringbuffer *waiter = waiter_req->ringbuf;
> > +	struct intel_ringbuffer *waiter = waiter_req->ring;
> >   	u32 dw1 = MI_SEMAPHORE_MBOX |
> >   		  MI_SEMAPHORE_COMPARE |
> >   		  MI_SEMAPHORE_REGISTER;
> > @@ -1603,7 +1603,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
> >   	       u32     invalidate_domains,
> >   	       u32     flush_domains)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(req, 2);
> > @@ -1619,7 +1619,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
> >   static int
> >   i9xx_add_request(struct drm_i915_gem_request *req)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(req, 4);
> > @@ -1697,7 +1697,7 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
> >   			 u64 offset, u32 length,
> >   			 unsigned dispatch_flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(req, 2);
> > @@ -1724,7 +1724,7 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
> >   			 u64 offset, u32 len,
> >   			 unsigned dispatch_flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	u32 cs_offset = req->engine->scratch.gtt_offset;
> >   	int ret;
> > 
> > @@ -1786,7 +1786,7 @@ i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
> >   			 u64 offset, u32 len,
> >   			 unsigned dispatch_flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(req, 2);
> > @@ -2221,7 +2221,7 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
> >   	 */
> >   	request->reserved_space += LEGACY_REQUEST_SIZE;
> > 
> > -	request->ringbuf = request->engine->buffer;
> > +	request->ring = request->engine->buffer;
> > 
> >   	ret = intel_ring_begin(request, 0);
> >   	if (ret)
> > @@ -2233,12 +2233,12 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
> > 
> >   static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
> >   {
> > -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	struct intel_engine_cs *engine = req->engine;
> >   	struct drm_i915_gem_request *target;
> > 
> > -	intel_ring_update_space(ringbuf);
> > -	if (ringbuf->space >= bytes)
> > +	intel_ring_update_space(ring);
> > +	if (ring->space >= bytes)
> >   		return 0;
> > 
> >   	/*
> > @@ -2260,12 +2260,12 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
> >   		 * from multiple ringbuffers. Here, we must ignore any that
> >   		 * aren't from the ringbuffer we're considering.
> >   		 */
> > -		if (target->ringbuf != ringbuf)
> > +		if (target->ring != ring)
> >   			continue;
> > 
> >   		/* Would completion of this request free enough space? */
> > -		space = __intel_ring_space(target->postfix, ringbuf->tail,
> > -					   ringbuf->size);
> > +		space = __intel_ring_space(target->postfix, ring->tail,
> > +					   ring->size);
> >   		if (space >= bytes)
> >   			break;
> >   	}
> > @@ -2278,9 +2278,9 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
> > 
> >   int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
> >   {
> > -	struct intel_ringbuffer *ringbuf = req->ringbuf;
> > -	int remain_actual = ringbuf->size - ringbuf->tail;
> > -	int remain_usable = ringbuf->effective_size - ringbuf->tail;
> > +	struct intel_ringbuffer *ring = req->ring;
> > +	int remain_actual = ring->size - ring->tail;
> > +	int remain_usable = ring->effective_size - ring->tail;
> >   	int bytes = num_dwords * sizeof(u32);
> >   	int total_bytes, wait_bytes;
> >   	bool need_wrap = false;
> > @@ -2307,35 +2307,35 @@ int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
> >   		wait_bytes = total_bytes;
> >   	}
> > 
> > -	if (wait_bytes > ringbuf->space) {
> > +	if (wait_bytes > ring->space) {
> >   		int ret = wait_for_space(req, wait_bytes);
> >   		if (unlikely(ret))
> >   			return ret;
> > 
> > -		intel_ring_update_space(ringbuf);
> > -		if (unlikely(ringbuf->space < wait_bytes))
> > +		intel_ring_update_space(ring);
> > +		if (unlikely(ring->space < wait_bytes))
> >   			return -EAGAIN;
> >   	}
> > 
> >   	if (unlikely(need_wrap)) {
> > -		GEM_BUG_ON(remain_actual > ringbuf->space);
> > -		GEM_BUG_ON(ringbuf->tail + remain_actual > ringbuf->size);
> > +		GEM_BUG_ON(remain_actual > ring->space);
> > +		GEM_BUG_ON(ring->tail + remain_actual > ring->size);
> > 
> >   		/* Fill the tail with MI_NOOP */
> > -		memset(ringbuf->vaddr + ringbuf->tail, 0, remain_actual);
> > -		ringbuf->tail = 0;
> > -		ringbuf->space -= remain_actual;
> > +		memset(ring->vaddr + ring->tail, 0, remain_actual);
> > +		ring->tail = 0;
> > +		ring->space -= remain_actual;
> >   	}
> > 
> > -	ringbuf->space -= bytes;
> > -	GEM_BUG_ON(ringbuf->space < 0);
> > +	ring->space -= bytes;
> > +	GEM_BUG_ON(ring->space < 0);
> >   	return 0;
> >   }
> > 
> >   /* Align the ring tail to a cacheline boundary */
> >   int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int num_dwords =
> >   	       	(ring->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
> >   	int ret;
> > @@ -2429,7 +2429,7 @@ static void gen6_bsd_ring_write_tail(struct intel_engine_cs *engine,
> >   static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
> >   			       u32 invalidate, u32 flush)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	uint32_t cmd;
> >   	int ret;
> > 
> > @@ -2475,7 +2475,7 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
> >   			      u64 offset, u32 len,
> >   			      unsigned dispatch_flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	bool ppgtt = USES_PPGTT(req->i915) &&
> >   			!(dispatch_flags & I915_DISPATCH_SECURE);
> >   	int ret;
> > @@ -2501,7 +2501,7 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
> >   			     u64 offset, u32 len,
> >   			     unsigned dispatch_flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(req, 2);
> > @@ -2526,7 +2526,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
> >   			      u64 offset, u32 len,
> >   			      unsigned dispatch_flags)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	int ret;
> > 
> >   	ret = intel_ring_begin(req, 2);
> > @@ -2549,7 +2549,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
> >   static int gen6_ring_flush(struct drm_i915_gem_request *req,
> >   			   u32 invalidate, u32 flush)
> >   {
> > -	struct intel_ringbuffer *ring = req->ringbuf;
> > +	struct intel_ringbuffer *ring = req->ring;
> >   	uint32_t cmd;
> >   	int ret;
> > 
> > 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: The vma leak fix from yonder
  2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
                   ` (62 preceding siblings ...)
  2016-06-05  5:24 ` ✗ Ro.CI.BAT: failure for series starting with [01/62] drm/i915: Only start retire worker when idle Patchwork
@ 2016-06-08  9:30 ` Daniel Vetter
  63 siblings, 0 replies; 87+ messages in thread
From: Daniel Vetter @ 2016-06-08  9:30 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

On Fri, Jun 03, 2016 at 05:36:25PM +0100, Chris Wilson wrote:
> Just to see if anyone is awake this series takes us to the VMA leak fix.
> Just the tip of the iceberg when it comes to VMA fixes...

Read through it. I think it'd be good (although yes, painful) if we could
untangle the ring/engine renaming from the other untangling. But meh,
that's my bikeshed, so it doesn't matter ;-)
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 13/62] drm/i915: Derive GEM requests from dma-fence
  2016-06-08  9:14   ` Daniel Vetter
@ 2016-06-08 10:33     ` Chris Wilson
  0 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-08 10:33 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: Daniel Vetter, intel-gfx, Jesse Barnes

On Wed, Jun 08, 2016 at 11:14:23AM +0200, Daniel Vetter wrote:
> On Fri, Jun 03, 2016 at 05:36:38PM +0100, Chris Wilson wrote:
> >  static inline struct drm_i915_gem_request *
> > +to_request(struct fence *fence)
> > +{
> > +	/* We assume that NULL fence/request are interoperable */
> > +	BUILD_BUG_ON(offsetof(struct drm_i915_gem_request, fence) != 0);
> > +	return container_of(fence, struct drm_i915_gem_request, fence);
> 
> For future-proofing to make sure we don't accidentally call this on a
> foreign fence:
> 
> 	BUG_ON(fence->ops != i915_fence_ops);
> 
> BUG_ON since I don't want to splatter all callers with handlers for this.
> And if we ever get this wrong, debugging it with just some random
> corruption would be serious pain, so I think the overhead is justified.

How about I just remove the function? It is only used on known requests
today. Or call it __to_request_from_fence()?
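
If we did keep it, the guarded variant you suggest would look roughly
like this (just a sketch; note the comparison wants the address of the
static i915_fence_ops table):

	static inline struct drm_i915_gem_request *
	to_request(struct fence *fence)
	{
		/* We assume that NULL fence/request are interoperable */
		BUILD_BUG_ON(offsetof(struct drm_i915_gem_request, fence) != 0);
		/* Scream early if anyone feeds us a foreign fence */
		BUG_ON(fence->ops != &i915_fence_ops);
		return container_of(fence, struct drm_i915_gem_request, fence);
	}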
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 04/62] drm/i915: Restore waitboost credit to the synchronous waiter
  2016-06-08  9:04   ` Daniel Vetter
@ 2016-06-08 10:38     ` Chris Wilson
  0 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-08 10:38 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx, Jesse Barnes

On Wed, Jun 08, 2016 at 11:04:57AM +0200, Daniel Vetter wrote:
> On Fri, Jun 03, 2016 at 05:36:29PM +0100, Chris Wilson wrote:
> > Ideally, we want to automagically have the GPU respond to the
> > instantaneous load by reclocking itself. However, reclocking occurs
> > relatively slowly, and to the client waiting for a result from the GPU,
> > too late. To compensate and reduce the client latency, we allow the
> > first wait from a client to boost the GPU clocks to maximum. This
> > overcomes the lag in autoreclocking, at the expense of forcing the GPU
> > clocks too high. So to offset the excessive power usage, we currently
> > allow a client to only boost the clocks once before we detect the GPU
> > is idle again. This works reasonably for say the first frame in a
> > benchmark, but for many more synchronous workloads (like OpenCL) we find
> > the GPU clocks remain too low. By noting a wait which would idle the GPU
> > (i.e. we just waited upon the last known request), we can give that
> > client the idle boost credit (for their next wait) without the 100ms
> > delay required for us to detect the GPU idle state. The intention is to
> > boost clients that are stalling in the process of feeding the GPU more
> > work (and who in doing so let the GPU idle), without granting boost
> > credits to clients that are throttling themselves (such as compositors).
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: "Zou, Nanhai" <nanhai.zou@intel.com>
> > Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
> > Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
> 
> I wonder a bit what will happen here for workloads that flip-flop between
> engines, since you check for last request on a given engine. But maybe in
> the future we'll get clock domains per engine ;-)

We disable RPS boosting for inter-ring synchronisation, so only if the
client does submit(RCS); submit(BCS); wait(RCS); wait(BCS) would it get
two bites at the cherry. That still falls under the notion of allowed
client behaviour as the second wait is presumably stalling the client
from submitting more work.
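
For reference, the check in question is roughly this (a trimmed sketch
of the hunk as it stands in the series):

	/* Grant the waitboost credit back only if this wait idled the
	 * engine, i.e. we waited upon the last submitted request.
	 */
	if (rps && req->seqno == req->engine->last_submitted_seqno) {
		spin_lock(&req->i915->rps.client_lock);
		list_del_init(&rps->link);
		spin_unlock(&req->i915->rps.client_lock);
	}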

s/GPU idle/engine idle/ + tweaks
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 01/62] drm/i915: Only start retire worker when idle
  2016-06-07 11:31   ` Joonas Lahtinen
@ 2016-06-08 10:53     ` Chris Wilson
  2016-06-08 11:06       ` Chris Wilson
  0 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2016-06-08 10:53 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx

On Tue, Jun 07, 2016 at 02:31:07PM +0300, Joonas Lahtinen wrote:
> On Fri, 2016-06-03 at 17:36 +0100, Chris Wilson wrote:
> >  i915_gem_idle_work_handler(struct work_struct *work)
> >  {
> >  	struct drm_i915_private *dev_priv =
> > -		container_of(work, typeof(*dev_priv), mm.idle_work.work);
> > +		container_of(work, typeof(*dev_priv), gt.idle_work.work);
> >  	struct drm_device *dev = dev_priv->dev;
> >  	struct intel_engine_cs *engine;
> >  
> > -	for_each_engine(engine, dev_priv)
> > -		if (!list_empty(&engine->request_list))
> > -			return;
> > +	if (!READ_ONCE(dev_priv->gt.awake))
> > +		return;
> >  
> > -	/* we probably should sync with hangcheck here, using cancel_work_sync.
> > -	 * Also locking seems to be fubar here, engine->request_list is protected
> > -	 * by dev->struct_mutex. */
> > +	mutex_lock(&dev->struct_mutex);
> > +	if (dev_priv->gt.active_engines)
> > +		goto out;
> >  
> > -	intel_mark_idle(dev_priv);
> > +	for_each_engine(engine, dev_priv)
> > +		i915_gem_batch_pool_fini(&engine->batch_pool);
> >  
> > -	if (mutex_trylock(&dev->struct_mutex)) {
> > -		for_each_engine(engine, dev_priv)
> > -			i915_gem_batch_pool_fini(&engine->batch_pool);
> > +	GEM_BUG_ON(!dev_priv->gt.awake);
> > +	dev_priv->gt.awake = false;
> >  
> > -		mutex_unlock(&dev->struct_mutex);
> > +	if (INTEL_INFO(dev_priv)->gen >= 6)
> > +		gen6_rps_idle(dev_priv);
> > +	intel_runtime_pm_put(dev_priv);
> > +out:
> > +	mutex_unlock(&dev->struct_mutex);
> > +
> > +	if (!dev_priv->gt.awake &&
> 
> No READ_ONCE here, even though we just unlocked the mutex. So it lacks
> some consistency.
> 
> Also, this assumes we might be pre-empted between unlocking the mutex
> and making this test, so I'm a little bit confused. Do you want to
> optimize by avoiding calling cancel_delayed_work_sync?

General principle: never call work_sync functions with locks held. I
had actually thought I had fixed this up (but realized that I just
rewrote hangcheck later on instead ;)

Ok, what I think is safer here is

	bool hangcheck = cancel_delayed_work_sync(hangcheck_work);

	mutex_lock()
	if (actually_idle()) {
		awake = false;
		missed_irq_rings |= intel_kick_waiters();
	}
	mutex_unlock();

	if (awake && hangcheck)
		queue_hangcheck()
	
So always kick the hangcheck and re-enable it if we tried to idle too early.
This will potentially delay hangcheck by one full hangcheck period if we
do encounter that race. But we shouldn't be hitting this race that
often, or hanging the GPU for that matter.

> > index 166f1a3829b0..d0cd9a1aa80e 100644
> > --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> > +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> > @@ -372,13 +372,13 @@ struct intel_engine_cs {
> >  };
> >  
> >  static inline bool
> > -intel_engine_initialized(struct intel_engine_cs *engine)
> > +intel_engine_initialized(const struct intel_engine_cs *engine)
> >  {
> >  	return engine->i915 != NULL;
> >  }
> >  
> >  static inline unsigned
> > -intel_engine_flag(struct intel_engine_cs *engine)
> > +intel_engine_flag(const struct intel_engine_cs *engine)
> >  {
> >  	return 1 << engine->id;
> >  }
> 
> I think the majority of our functions are not const-correct; I remember
> some grunting on the subject when I tried to change some to be. But I'm
> all for it myself.

Not yet, a few more gradual drive-bys and we'll be in a position for a
grander scheme of correctness ;)
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 01/62] drm/i915: Only start retire worker when idle
  2016-06-08 10:53     ` Chris Wilson
@ 2016-06-08 11:06       ` Chris Wilson
  2016-06-08 12:07         ` Joonas Lahtinen
  0 siblings, 1 reply; 87+ messages in thread
From: Chris Wilson @ 2016-06-08 11:06 UTC (permalink / raw)
  To: Joonas Lahtinen, intel-gfx

On Wed, Jun 08, 2016 at 11:53:15AM +0100, Chris Wilson wrote:
> On Tue, Jun 07, 2016 at 02:31:07PM +0300, Joonas Lahtinen wrote:
> > On Fri, 2016-06-03 at 17:36 +0100, Chris Wilson wrote:
> > >  i915_gem_idle_work_handler(struct work_struct *work)
> > >  {
> > >  	struct drm_i915_private *dev_priv =
> > > -		container_of(work, typeof(*dev_priv), mm.idle_work.work);
> > > +		container_of(work, typeof(*dev_priv), gt.idle_work.work);
> > >  	struct drm_device *dev = dev_priv->dev;
> > >  	struct intel_engine_cs *engine;
> > >  
> > > -	for_each_engine(engine, dev_priv)
> > > -		if (!list_empty(&engine->request_list))
> > > -			return;
> > > +	if (!READ_ONCE(dev_priv->gt.awake))
> > > +		return;
> > >  
> > > -	/* we probably should sync with hangcheck here, using cancel_work_sync.
> > > -	 * Also locking seems to be fubar here, engine->request_list is protected
> > > -	 * by dev->struct_mutex. */
> > > +	mutex_lock(&dev->struct_mutex);
> > > +	if (dev_priv->gt.active_engines)
> > > +		goto out;
> > >  
> > > -	intel_mark_idle(dev_priv);
> > > +	for_each_engine(engine, dev_priv)
> > > +		i915_gem_batch_pool_fini(&engine->batch_pool);
> > >  
> > > -	if (mutex_trylock(&dev->struct_mutex)) {
> > > -		for_each_engine(engine, dev_priv)
> > > -			i915_gem_batch_pool_fini(&engine->batch_pool);
> > > +	GEM_BUG_ON(!dev_priv->gt.awake);
> > > +	dev_priv->gt.awake = false;
> > >  
> > > -		mutex_unlock(&dev->struct_mutex);
> > > +	if (INTEL_INFO(dev_priv)->gen >= 6)
> > > +		gen6_rps_idle(dev_priv);
> > > +	intel_runtime_pm_put(dev_priv);
> > > +out:
> > > +	mutex_unlock(&dev->struct_mutex);
> > > +
> > > +	if (!dev_priv->gt.awake &&
> > 
> > No READ_ONCE here, even though we just unlocked the mutex. So it lacks
> > some consistency.
> > 
> > Also, this assumes we might be pre-empted between unlocking the mutex
> > and making this test, so I'm a little bit confused. Do you want to
> > optimize by avoiding calling cancel_delayed_work_sync?
> 
> General principle: never call work_sync functions with locks held. I
> had actually thought I had fixed this up (but realized that I just
> rewrote hangcheck later on instead ;)
> 
> Ok, what I think is safer here is
> 
> 	bool hangcheck = cancel_delayed_work_sync(hangcheck_work);
> 
> 	mutex_lock()
> 	if (actually_idle()) {
> 		awake = false;
> 		missed_irq_rings |= intel_kick_waiters();
> 	}
> 	mutex_unlock();
> 
> 	if (awake && hangcheck)
> 		queue_hangcheck()
> 	
> So always kick the hangcheck and re-enable it if we tried to idle too early.
> This will potentially delay hangcheck by one full hangcheck period if we
> do encounter that race. But we shouldn't be hitting this race that
> often, or hanging the GPU for that matter.

Actual delta:

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 406046f66e36..856da4036fb3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3066,10 +3066,15 @@ i915_gem_idle_work_handler(struct work_struct *work)
                container_of(work, typeof(*dev_priv), gt.idle_work.work);
        struct drm_device *dev = dev_priv->dev;
        struct intel_engine_cs *engine;
+       unsigned stuck_engines;
+       bool rearm_hangcheck;
 
        if (!READ_ONCE(dev_priv->gt.awake))
                return;
 
+       rearm_hangcheck =
+               cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
+
        mutex_lock(&dev->struct_mutex);
        if (dev_priv->gt.active_engines)
                goto out;
@@ -3079,6 +3084,13 @@ i915_gem_idle_work_handler(struct work_struct *work)
 
        GEM_BUG_ON(!dev_priv->gt.awake);
        dev_priv->gt.awake = false;
+       rearm_hangcheck = false;
+
+       stuck_engines = intel_kick_waiters(dev_priv);
+       if (unlikely(stuck_engines)) {
+               DRM_DEBUG_DRIVER("kicked stuck waiters...missed irq\n");
+               dev_priv->gpu_error.missed_irq_rings |= stuck_engines;
+       }
 
        if (INTEL_INFO(dev_priv)->gen >= 6)
                gen6_rps_idle(dev_priv);
@@ -3086,14 +3098,8 @@ i915_gem_idle_work_handler(struct work_struct *work)
 out:
        mutex_unlock(&dev->struct_mutex);
 
-       if (!dev_priv->gt.awake &&
-           cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work)) {
-               unsigned stuck = intel_kick_waiters(dev_priv);
-               if (unlikely(stuck)) {
-                       DRM_DEBUG_DRIVER("kicked stuck waiters...missed irq\n");
-                       dev_priv->gpu_error.missed_irq_rings |= stuck;
-               }
-       }
+       if (rearm_hangcheck)
+               i915_queue_hangcheck(dev_priv);
 }
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 87+ messages in thread

* Re: [PATCH 12/62] drm/i915: Skip capturing an error state if we already have one
  2016-06-03 16:36 ` [PATCH 12/62] drm/i915: Skip capturing an error state if we already have one Chris Wilson
@ 2016-06-08 11:14   ` Arun Siluvery
  2016-06-08 12:06     ` Chris Wilson
  0 siblings, 1 reply; 87+ messages in thread
From: Arun Siluvery @ 2016-06-08 11:14 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On 03/06/2016 22:06, Chris Wilson wrote:
> As we only ever keep the first error state around, we can avoid some
> work that can be quite intrusive if we don't record the error the second
> time around. This does move the race whereby the user could discard one
> error state as the second is being captured, but that race exists in the
> current code and we hope that recapturing error state is only done for
> debugging.
>
> Note that as we discard the error state for simulated errors, igt that
> exercise error capture continue to function.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---

The patch does more than what is described here; all of the
i915_gem_request changes are part of this patch, maybe accidentally
squashed.

regards
Arun

>   drivers/gpu/drm/i915/Makefile           |   1 +
>   drivers/gpu/drm/i915/i915_drv.h         | 210 +---------
>   drivers/gpu/drm/i915/i915_gem.c         | 653 +------------------------------
>   drivers/gpu/drm/i915/i915_gem_request.c | 659 ++++++++++++++++++++++++++++++++
>   drivers/gpu/drm/i915/i915_gem_request.h | 245 ++++++++++++
>   drivers/gpu/drm/i915/i915_gpu_error.c   |   3 +
>   6 files changed, 916 insertions(+), 855 deletions(-)
>   create mode 100644 drivers/gpu/drm/i915/i915_gem_request.c
>   create mode 100644 drivers/gpu/drm/i915/i915_gem_request.h
>
> diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
> index f20007440821..14cef1d2343c 100644
> --- a/drivers/gpu/drm/i915/Makefile
> +++ b/drivers/gpu/drm/i915/Makefile
> @@ -32,6 +32,7 @@ i915-y += i915_cmd_parser.o \
>   	  i915_gem_gtt.o \
>   	  i915_gem.o \
>   	  i915_gem_render_state.o \
> +	  i915_gem_request.o \
>   	  i915_gem_shrinker.o \
>   	  i915_gem_stolen.o \
>   	  i915_gem_tiling.o \
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 15a0c6bdf500..939cd45043c7 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -60,6 +60,7 @@
>   #include "i915_gem.h"
>   #include "i915_gem_gtt.h"
>   #include "i915_gem_render_state.h"
> +#include "i915_gem_request.h"
>
>   /* General customization:
>    */
> @@ -2339,172 +2340,6 @@ static inline struct scatterlist *__sg_next(struct scatterlist *sg)
>   	     (((__iter).curr += PAGE_SIZE) < (__iter).max) ||		\
>   	     ((__iter) = __sgt_iter(__sg_next((__iter).sgp), false), 0))
>
> -/**
> - * Request queue structure.
> - *
> - * The request queue allows us to note sequence numbers that have been emitted
> - * and may be associated with active buffers to be retired.
> - *
> - * By keeping this list, we can avoid having to do questionable sequence
> - * number comparisons on buffer last_read|write_seqno. It also allows an
> - * emission time to be associated with the request for tracking how far ahead
> - * of the GPU the submission is.
> - *
> - * The requests are reference counted, so upon creation they should have an
> - * initial reference taken using kref_init
> - */
> -struct drm_i915_gem_request {
> -	struct kref ref;
> -
> -	/** On Which ring this request was generated */
> -	struct drm_i915_private *i915;
> -	struct intel_engine_cs *engine;
> -	unsigned reset_counter;
> -	struct intel_signal_node signaling;
> -
> -	 /** GEM sequence number associated with the previous request,
> -	  * when the HWS breadcrumb is equal to this the GPU is processing
> -	  * this request.
> -	  */
> -	u32 previous_seqno;
> -
> -	 /** GEM sequence number associated with this request,
> -	  * when the HWS breadcrumb is equal or greater than this the GPU
> -	  * has finished processing this request.
> -	  */
> -	u32 seqno;
> -
> -	/** Position in the ringbuffer of the start of the request */
> -	u32 head;
> -
> -	/**
> -	 * Position in the ringbuffer of the start of the postfix.
> -	 * This is required to calculate the maximum available ringbuffer
> -	 * space without overwriting the postfix.
> -	 */
> -	 u32 postfix;
> -
> -	/** Position in the ringbuffer of the end of the whole request */
> -	u32 tail;
> -
> -	/** Preallocate space in the ringbuffer for the emitting the request */
> -	u32 reserved_space;
> -
> -	/**
> -	 * Context and ring buffer related to this request
> -	 * Contexts are refcounted, so when this request is associated with a
> -	 * context, we must increment the context's refcount, to guarantee that
> -	 * it persists while any request is linked to it. Requests themselves
> -	 * are also refcounted, so the request will only be freed when the last
> -	 * reference to it is dismissed, and the code in
> -	 * i915_gem_request_free() will then decrement the refcount on the
> -	 * context.
> -	 */
> -	struct i915_gem_context *ctx;
> -	struct intel_ringbuffer *ringbuf;
> -
> -	/**
> -	 * Context related to the previous request.
> -	 * As the contexts are accessed by the hardware until the switch is
> -	 * completed to a new context, the hardware may still be writing
> -	 * to the context object after the breadcrumb is visible. We must
> -	 * not unpin/unbind/prune that object whilst still active and so
> -	 * we keep the previous context pinned until the following (this)
> -	 * request is retired.
> -	 */
> -	struct i915_gem_context *previous_context;
> -
> -	/** Batch buffer related to this request if any (used for
> -	    error state dump only) */
> -	struct drm_i915_gem_object *batch_obj;
> -
> -	/** Time at which this request was emitted, in jiffies. */
> -	unsigned long emitted_jiffies;
> -
> -	/** global list entry for this request */
> -	struct list_head list;
> -
> -	struct drm_i915_file_private *file_priv;
> -	/** file_priv list entry for this request */
> -	struct list_head client_list;
> -
> -	/** process identifier submitting this request */
> -	struct pid *pid;
> -
> -	/**
> -	 * The ELSP only accepts two elements at a time, so we queue
> -	 * context/tail pairs on a given queue (ring->execlist_queue) until the
> -	 * hardware is available. The queue serves a double purpose: we also use
> -	 * it to keep track of the up to 2 contexts currently in the hardware
> -	 * (usually one in execution and the other queued up by the GPU): We
> -	 * only remove elements from the head of the queue when the hardware
> -	 * informs us that an element has been completed.
> -	 *
> -	 * All accesses to the queue are mediated by a spinlock
> -	 * (ring->execlist_lock).
> -	 */
> -
> -	/** Execlist link in the submission queue.*/
> -	struct list_head execlist_link;
> -
> -	/** Execlists no. of times this request has been sent to the ELSP */
> -	int elsp_submitted;
> -
> -	/** Execlists context hardware id. */
> -	unsigned ctx_hw_id;
> -};
> -
> -struct drm_i915_gem_request * __must_check
> -i915_gem_request_alloc(struct intel_engine_cs *engine,
> -		       struct i915_gem_context *ctx);
> -void i915_gem_request_free(struct kref *req_ref);
> -int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
> -				   struct drm_file *file);
> -
> -static inline uint32_t
> -i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
> -{
> -	return req ? req->seqno : 0;
> -}
> -
> -static inline struct intel_engine_cs *
> -i915_gem_request_get_engine(struct drm_i915_gem_request *req)
> -{
> -	return req ? req->engine : NULL;
> -}
> -
> -static inline struct drm_i915_gem_request *
> -i915_gem_request_reference(struct drm_i915_gem_request *req)
> -{
> -	if (req)
> -		kref_get(&req->ref);
> -	return req;
> -}
> -
> -static inline void
> -i915_gem_request_unreference(struct drm_i915_gem_request *req)
> -{
> -	kref_put(&req->ref, i915_gem_request_free);
> -}
> -
> -static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
> -					   struct drm_i915_gem_request *src)
> -{
> -	if (src)
> -		i915_gem_request_reference(src);
> -
> -	if (*pdst)
> -		i915_gem_request_unreference(*pdst);
> -
> -	*pdst = src;
> -}
> -
> -/*
> - * XXX: i915_gem_request_completed should be here but currently needs the
> - * definition of i915_seqno_passed() which is below. It will be moved in
> - * a later patch when the call to i915_seqno_passed() is obsoleted...
> - */
> -
>   /*
>    * A command that requires special handling by the command parser.
>    */
> @@ -3208,37 +3043,6 @@ void i915_gem_track_fb(struct drm_i915_gem_object *old,
>   		       struct drm_i915_gem_object *new,
>   		       unsigned frontbuffer_bits);
>
> -/**
> - * Returns true if seq1 is later than seq2.
> - */
> -static inline bool
> -i915_seqno_passed(uint32_t seq1, uint32_t seq2)
> -{
> -	return (int32_t)(seq1 - seq2) >= 0;
> -}
> -
> -static inline bool i915_gem_request_started(const struct drm_i915_gem_request *req)
> -{
> -	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
> -				 req->previous_seqno);
> -}
> -
> -static inline bool i915_gem_request_completed(const struct drm_i915_gem_request *req)
> -{
> -	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
> -				 req->seqno);
> -}
> -
> -bool __i915_spin_request(const struct drm_i915_gem_request *request,
> -			 int state, unsigned long timeout_us);
> -static inline bool i915_spin_request(const struct drm_i915_gem_request *request,
> -				     int state, unsigned long timeout_us)
> -{
> -	return (i915_gem_request_started(request) &&
> -		__i915_spin_request(request, state, timeout_us));
> -}
> -
> -int __must_check i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno);
>   int __must_check i915_gem_set_seqno(struct drm_device *dev, u32 seqno);
>
>   struct drm_i915_gem_request *
> @@ -3296,18 +3100,6 @@ void i915_gem_init_swizzling(struct drm_device *dev);
>   void i915_gem_cleanup_engines(struct drm_device *dev);
>   int __must_check i915_gem_wait_for_idle(struct drm_i915_private *dev_priv);
>   int __must_check i915_gem_suspend(struct drm_device *dev);
> -void __i915_add_request(struct drm_i915_gem_request *req,
> -			struct drm_i915_gem_object *batch_obj,
> -			bool flush_caches);
> -#define i915_add_request(req) \
> -	__i915_add_request(req, NULL, true)
> -#define i915_add_request_no_flush(req) \
> -	__i915_add_request(req, NULL, false)
> -int __i915_wait_request(struct drm_i915_gem_request *req,
> -			bool interruptible,
> -			s64 *timeout,
> -			struct intel_rps_client *rps);
> -int __must_check i915_wait_request(struct drm_i915_gem_request *req);
>   int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
>   int __must_check
>   i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index f48f54193972..95782cf85dcc 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1105,361 +1105,6 @@ put_rpm:
>   	return ret;
>   }
>
> -static int
> -i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
> -{
> -	if (__i915_terminally_wedged(reset_counter))
> -		return -EIO;
> -
> -	if (__i915_reset_in_progress(reset_counter)) {
> -		/* Non-interruptible callers can't handle -EAGAIN, hence return
> -		 * -EIO unconditionally for these. */
> -		if (!interruptible)
> -			return -EIO;
> -
> -		return -EAGAIN;
> -	}
> -
> -	return 0;
> -}
> -
> -static unsigned long local_clock_us(unsigned *cpu)
> -{
> -	unsigned long t;
> -
> -	/* Cheaply and approximately convert from nanoseconds to microseconds.
> -	 * The result and subsequent calculations are also defined in the same
> -	 * approximate microseconds units. The principal source of timing
> -	 * error here is from the simple truncation.
> -	 *
> -	 * Note that local_clock() is only defined wrt to the current CPU;
> -	 * the comparisons are no longer valid if we switch CPUs. Instead of
> -	 * blocking preemption for the entire busywait, we can detect the CPU
> -	 * switch and use that as indicator of system load and a reason to
> -	 * stop busywaiting, see busywait_stop().
> -	 */
> -	*cpu = get_cpu();
> -	t = local_clock() >> 10;
> -	put_cpu();
> -
> -	return t;
> -}
> -
> -static bool busywait_stop(unsigned long timeout, unsigned cpu)
> -{
> -	unsigned this_cpu;
> -
> -	if (time_after(local_clock_us(&this_cpu), timeout))
> -		return true;
> -
> -	return this_cpu != cpu;
> -}
> -
> -bool __i915_spin_request(const struct drm_i915_gem_request *req,
> -			 int state, unsigned long timeout_us)
> -{
> -	unsigned cpu;
> -
> -	/* When waiting for high frequency requests, e.g. during synchronous
> -	 * rendering split between the CPU and GPU, the finite amount of time
> -	 * required to set up the irq and wait upon it limits the response
> -	 * rate. By busywaiting on the request completion for a short while we
> -	 * can service the high frequency waits as quick as possible. However,
> -	 * if it is a slow request, we want to sleep as quickly as possible.
> -	 * The tradeoff between waiting and sleeping is roughly the time it
> -	 * takes to sleep on a request, on the order of a microsecond.
> -	 */
> -
> -	timeout_us += local_clock_us(&cpu);
> -	do {
> -		if (i915_gem_request_completed(req))
> -			return true;
> -
> -		if (signal_pending_state(state, current))
> -			break;
> -
> -		if (busywait_stop(timeout_us, cpu))
> -			break;
> -
> -		cpu_relax_lowlatency();
> -	} while (!need_resched());
> -
> -	return false;
> -}
> -
> -/**
> - * __i915_wait_request - wait until execution of request has finished
> - * @req: duh!
> - * @interruptible: do an interruptible wait (normally yes)
> - * @timeout: in - how long to wait (NULL forever); out - how much time remaining
> - *
> - * Note: It is of utmost importance that the passed in seqno and reset_counter
> - * values have been read by the caller in an smp safe manner. Where read-side
> - * locks are involved, it is sufficient to read the reset_counter before
> - * unlocking the lock that protects the seqno. For lockless tricks, the
> - * reset_counter _must_ be read before, and an appropriate smp_rmb must be
> - * inserted.
> - *
> - * Returns 0 if the request was found within the alloted time. Else returns the
> - * errno with remaining time filled in timeout argument.
> - */
> -int __i915_wait_request(struct drm_i915_gem_request *req,
> -			bool interruptible,
> -			s64 *timeout,
> -			struct intel_rps_client *rps)
> -{
> -	int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
> -	DEFINE_WAIT(reset);
> -	struct intel_wait wait;
> -	unsigned long timeout_remain;
> -	int ret = 0;
> -
> -	might_sleep();
> -
> -	if (list_empty(&req->list))
> -		return 0;
> -
> -	if (i915_gem_request_completed(req))
> -		return 0;
> -
> -	timeout_remain = MAX_SCHEDULE_TIMEOUT;
> -	if (timeout) {
> -		if (WARN_ON(*timeout < 0))
> -			return -EINVAL;
> -
> -		if (*timeout == 0)
> -			return -ETIME;
> -
> -		/* Record current time in case interrupted, or wedged */
> -		timeout_remain = nsecs_to_jiffies_timeout(*timeout);
> -		*timeout += ktime_get_raw_ns();
> -	}
> -
> -	trace_i915_gem_request_wait_begin(req);
> -
> -	/* This client is about to stall waiting for the GPU. In many cases
> -	 * this is undesirable and limits the throughput of the system, as
> -	 * many clients cannot continue processing user input/output whilst
> -	 * blocked. RPS autotuning may take tens of milliseconds to respond
> -	 * to the GPU load and thus incurs additional latency for the client.
> -	 * We can circumvent that by promoting the GPU frequency to maximum
> -	 * before we wait. This makes the GPU throttle up much more quickly
> -	 * (good for benchmarks and user experience, e.g. window animations),
> -	 * but at a cost of spending more power processing the workload
> -	 * (bad for battery). Not all clients even want their results
> -	 * immediately and for them we should just let the GPU select its own
> -	 * frequency to maximise efficiency. To prevent a single client from
> -	 * forcing the clocks too high for the whole system, we only allow
> -	 * each client to waitboost once in a busy period.
> -	 */
> -	if (INTEL_INFO(req->i915)->gen >= 6)
> -		gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
> -
> -	/* Optimistic spin for the next ~jiffie before touching IRQs */
> -	if (i915_spin_request(req, state, 5))
> -		goto complete;
> -
> -	intel_wait_init(&wait, req->seqno);
> -	set_current_state(state);
> -	if (intel_engine_add_wait(req->engine, &wait))
> -		/* In order to check that we haven't missed the interrupt
> -		 * as we enabled it, we need to kick ourselves to do a
> -		 * coherent check on the seqno before we sleep.
> -		 */
> -		goto wakeup;
> -
> -	add_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
> -	for (;;) {
> -		if (signal_pending_state(state, current)) {
> -			ret = -ERESTARTSYS;
> -			break;
> -		}
> -
> -		/* Ensure that even if the GPU hangs, we get woken up. */
> -		i915_queue_hangcheck(req->i915);
> -
> -		timeout_remain = io_schedule_timeout(timeout_remain);
> -		if (timeout_remain == 0) {
> -			ret = -ETIME;
> -			break;
> -		}
> -
> -		if (intel_wait_complete(&wait))
> -			break;
> -
> -wakeup:
> -		set_current_state(state);
> -
> -		/* Carefully check if the request is complete, giving time
> -		 * for the seqno to be visible following the interrupt.
> -		 * We also have to check in case we are kicked by the GPU
> -		 * reset in order to drop the struct_mutex.
> -		 */
> -		if (__i915_request_irq_complete(req))
> -			break;
> -
> -		/* Only spin if we know the GPU is processing this request */
> -		if (i915_spin_request(req, state, 2))
> -			break;
> -	}
> -	remove_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
> -
> -	intel_engine_remove_wait(req->engine, &wait);
> -	__set_current_state(TASK_RUNNING);
> -complete:
> -	trace_i915_gem_request_wait_end(req);
> -
> -	if (timeout) {
> -		*timeout -= ktime_get_raw_ns();
> -		if (*timeout < 0)
> -			*timeout = 0;
> -
> -		/*
> -		 * Apparently ktime isn't accurate enough and occasionally has a
> -		 * bit of mismatch in the jiffies<->nsecs<->ktime loop. So patch
> -		 * things up to make the test happy. We allow up to 1 jiffy.
> -		 *
> -		 * This is a regrssion from the timespec->ktime conversion.
> -		 */
> -		if (ret == -ETIME && *timeout < jiffies_to_usecs(1)*1000)
> -			*timeout = 0;
> -	}
> -
> -	if (rps && req->seqno == req->engine->last_submitted_seqno) {
> -		/* The GPU is now idle and this client has stalled.
> -		 * Since no other client has submitted a request in the
> -		 * meantime, assume that this client is the only one
> -		 * supplying work to the GPU but is unable to keep that
> -		 * work supplied because it is waiting. Since the GPU is
> -		 * then never kept fully busy, RPS autoclocking will
> -		 * keep the clocks relatively low, causing further delays.
> -		 * Compensate by giving the synchronous client credit for
> -		 * a waitboost next time.
> -		 */
> -		spin_lock(&req->i915->rps.client_lock);
> -		list_del_init(&rps->link);
> -		spin_unlock(&req->i915->rps.client_lock);
> -	}
> -
> -	return ret;
> -}
> -
> -int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
> -				   struct drm_file *file)
> -{
> -	struct drm_i915_file_private *file_priv;
> -
> -	WARN_ON(!req || !file || req->file_priv);
> -
> -	if (!req || !file)
> -		return -EINVAL;
> -
> -	if (req->file_priv)
> -		return -EINVAL;
> -
> -	file_priv = file->driver_priv;
> -
> -	spin_lock(&file_priv->mm.lock);
> -	req->file_priv = file_priv;
> -	list_add_tail(&req->client_list, &file_priv->mm.request_list);
> -	spin_unlock(&file_priv->mm.lock);
> -
> -	req->pid = get_pid(task_pid(current));
> -
> -	return 0;
> -}
> -
> -static inline void
> -i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
> -{
> -	struct drm_i915_file_private *file_priv = request->file_priv;
> -
> -	if (!file_priv)
> -		return;
> -
> -	spin_lock(&file_priv->mm.lock);
> -	list_del(&request->client_list);
> -	request->file_priv = NULL;
> -	spin_unlock(&file_priv->mm.lock);
> -
> -	put_pid(request->pid);
> -	request->pid = NULL;
> -}
> -
> -static void i915_gem_request_retire(struct drm_i915_gem_request *request)
> -{
> -	trace_i915_gem_request_retire(request);
> -
> -	/* We know the GPU must have read the request to have
> -	 * sent us the seqno + interrupt, so use the position
> -	 * of tail of the request to update the last known position
> -	 * of the GPU head.
> -	 *
> -	 * Note this requires that we are always called in request
> -	 * completion order.
> -	 */
> -	request->ringbuf->last_retired_head = request->postfix;
> -
> -	list_del_init(&request->list);
> -	i915_gem_request_remove_from_client(request);
> -
> -	if (request->previous_context) {
> -		if (i915.enable_execlists)
> -			intel_lr_context_unpin(request->previous_context,
> -					       request->engine);
> -	}
> -
> -	i915_gem_context_unreference(request->ctx);
> -	i915_gem_request_unreference(request);
> -}
> -
> -static void
> -__i915_gem_request_retire__upto(struct drm_i915_gem_request *req)
> -{
> -	struct intel_engine_cs *engine = req->engine;
> -	struct drm_i915_gem_request *tmp;
> -
> -	lockdep_assert_held(&engine->i915->dev->struct_mutex);
> -
> -	if (list_empty(&req->list))
> -		return;
> -
> -	do {
> -		tmp = list_first_entry(&engine->request_list,
> -				       typeof(*tmp), list);
> -
> -		i915_gem_request_retire(tmp);
> -	} while (tmp != req);
> -
> -	WARN_ON(i915_verify_lists(engine->dev));
> -}
> -
> -/**
> - * Waits for a request to be signaled, and cleans up the
> - * request and object lists appropriately for that event.
> - */
> -int
> -i915_wait_request(struct drm_i915_gem_request *req)
> -{
> -	struct drm_i915_private *dev_priv = req->i915;
> -	bool interruptible;
> -	int ret;
> -
> -	interruptible = dev_priv->mm.interruptible;
> -
> -	BUG_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex));
> -
> -	ret = __i915_wait_request(req, interruptible, NULL, NULL);
> -	if (ret)
> -		return ret;
> -
> -	/* If the GPU hung, we want to keep the requests to find the guilty. */
> -	if (req->reset_counter == i915_reset_counter(&dev_priv->gpu_error))
> -		__i915_gem_request_retire__upto(req);
> -
> -	return 0;
> -}
> -
>   /**
>    * Ensures that all rendering to the object has completed and the object is
>    * safe to unbind from the GTT or access from the CPU.
> @@ -1514,7 +1159,7 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
>   		i915_gem_object_retire__write(obj);
>
>   	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
> -		__i915_gem_request_retire__upto(req);
> +		i915_gem_request_retire_upto(req);
>   }
>
>   /* A nonblocking variant of the above wait. This is a highly dangerous routine
> @@ -2515,194 +2160,6 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
>   	drm_gem_object_unreference(&obj->base);
>   }
>
> -static int
> -i915_gem_init_seqno(struct drm_i915_private *dev_priv, u32 seqno)
> -{
> -	struct intel_engine_cs *engine;
> -	int ret;
> -
> -	/* Carefully retire all requests without writing to the rings */
> -	for_each_engine(engine, dev_priv) {
> -		ret = intel_engine_idle(engine);
> -		if (ret)
> -			return ret;
> -	}
> -	i915_gem_retire_requests(dev_priv);
> -
> -	/* If the seqno wraps around, we need to clear the breadcrumb rbtree */
> -	if (!i915_seqno_passed(seqno, dev_priv->next_seqno)) {
> -		while (intel_kick_waiters(dev_priv) ||
> -		       intel_kick_signalers(dev_priv))
> -			yield();
> -	}
> -
> -	/* Finally reset hw state */
> -	for_each_engine(engine, dev_priv)
> -		intel_ring_init_seqno(engine, seqno);
> -
> -	return 0;
> -}
> -
> -int i915_gem_set_seqno(struct drm_device *dev, u32 seqno)
> -{
> -	struct drm_i915_private *dev_priv = dev->dev_private;
> -	int ret;
> -
> -	if (seqno == 0)
> -		return -EINVAL;
> -
> -	/* HWS page needs to be set less than what we
> -	 * will inject to ring
> -	 */
> -	ret = i915_gem_init_seqno(dev_priv, seqno - 1);
> -	if (ret)
> -		return ret;
> -
> -	/* Carefully set the last_seqno value so that wrap
> -	 * detection still works
> -	 */
> -	dev_priv->next_seqno = seqno;
> -	dev_priv->last_seqno = seqno - 1;
> -	if (dev_priv->last_seqno == 0)
> -		dev_priv->last_seqno--;
> -
> -	return 0;
> -}
> -
> -int
> -i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
> -{
> -	/* reserve 0 for non-seqno */
> -	if (dev_priv->next_seqno == 0) {
> -		int ret = i915_gem_init_seqno(dev_priv, 0);
> -		if (ret)
> -			return ret;
> -
> -		dev_priv->next_seqno = 1;
> -	}
> -
> -	*seqno = dev_priv->last_seqno = dev_priv->next_seqno++;
> -	return 0;
> -}
> -
> -static void i915_gem_mark_busy(struct drm_i915_private *dev_priv,
> -			       const struct intel_engine_cs *engine)
> -{
> -	dev_priv->gt.active_engines |= intel_engine_flag(engine);
> -	if (dev_priv->gt.awake)
> -		return;
> -
> -	intel_runtime_pm_get_noresume(dev_priv);
> -	dev_priv->gt.awake = true;
> -
> -	intel_enable_gt_powersave(dev_priv);
> -	i915_update_gfx_val(dev_priv);
> -	if (INTEL_INFO(dev_priv)->gen >= 6)
> -		gen6_rps_busy(dev_priv);
> -
> -	queue_delayed_work(dev_priv->wq,
> -			   &dev_priv->gt.retire_work,
> -			   round_jiffies_up_relative(HZ));
> -}
> -
> -/*
> - * NB: This function is not allowed to fail. Doing so would mean the the
> - * request is not being tracked for completion but the work itself is
> - * going to happen on the hardware. This would be a Bad Thing(tm).
> - */
> -void __i915_add_request(struct drm_i915_gem_request *request,
> -			struct drm_i915_gem_object *obj,
> -			bool flush_caches)
> -{
> -	struct intel_engine_cs *engine;
> -	struct drm_i915_private *dev_priv;
> -	struct intel_ringbuffer *ringbuf;
> -	u32 request_start;
> -	u32 reserved_tail;
> -	int ret;
> -
> -	if (WARN_ON(request == NULL))
> -		return;
> -
> -	engine = request->engine;
> -	dev_priv = request->i915;
> -	ringbuf = request->ringbuf;
> -
> -	/*
> -	 * To ensure that this call will not fail, space for its emissions
> -	 * should already have been reserved in the ring buffer. Let the ring
> -	 * know that it is time to use that space up.
> -	 */
> -	request_start = intel_ring_get_tail(ringbuf);
> -	reserved_tail = request->reserved_space;
> -	request->reserved_space = 0;
> -
> -	/*
> -	 * Emit any outstanding flushes - execbuf can fail to emit the flush
> -	 * after having emitted the batchbuffer command. Hence we need to fix
> -	 * things up similar to emitting the lazy request. The difference here
> -	 * is that the flush _must_ happen before the next request, no matter
> -	 * what.
> -	 */
> -	if (flush_caches) {
> -		if (i915.enable_execlists)
> -			ret = logical_ring_flush_all_caches(request);
> -		else
> -			ret = intel_ring_flush_all_caches(request);
> -		/* Not allowed to fail! */
> -		WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
> -	}
> -
> -	trace_i915_gem_request_add(request);
> -
> -	request->head = request_start;
> -
> -	/* Whilst this request exists, batch_obj will be on the
> -	 * active_list, and so will hold the active reference. Only when this
> -	 * request is retired will the the batch_obj be moved onto the
> -	 * inactive_list and lose its active reference. Hence we do not need
> -	 * to explicitly hold another reference here.
> -	 */
> -	request->batch_obj = obj;
> -
> -	/* Seal the request and mark it as pending execution. Note that
> -	 * we may inspect this state, without holding any locks, during
> -	 * hangcheck. Hence we apply the barrier to ensure that we do not
> -	 * see a more recent value in the hws than we are tracking.
> -	 */
> -	request->emitted_jiffies = jiffies;
> -	request->previous_seqno = engine->last_submitted_seqno;
> -	smp_store_mb(engine->last_submitted_seqno, request->seqno);
> -	list_add_tail(&request->list, &engine->request_list);
> -
> -	/* Record the position of the start of the request so that
> -	 * should we detect the updated seqno part-way through the
> -	 * GPU processing the request, we never over-estimate the
> -	 * position of the head.
> -	 */
> -	request->postfix = intel_ring_get_tail(ringbuf);
> -
> -	if (i915.enable_execlists)
> -		ret = engine->emit_request(request);
> -	else {
> -		ret = engine->add_request(request);
> -
> -		request->tail = intel_ring_get_tail(ringbuf);
> -	}
> -	/* Not allowed to fail! */
> -	WARN(ret, "emit|add_request failed: %d!\n", ret);
> -	/* Sanity check that the reserved size was large enough. */
> -	ret = intel_ring_get_tail(ringbuf) - request_start;
> -	if (ret < 0)
> -		ret += ringbuf->size;
> -	WARN_ONCE(ret > reserved_tail,
> -		  "Not enough space reserved (%d bytes) "
> -		  "for adding the request (%d bytes)\n",
> -		  reserved_tail, ret);
> -
> -	i915_gem_mark_busy(dev_priv, engine);
> -}
> -
>   static bool i915_context_is_banned(const struct i915_gem_context *ctx)
>   {
>   	unsigned long elapsed;
> @@ -2734,102 +2191,6 @@ static void i915_set_reset_status(struct i915_gem_context *ctx,
>   	}
>   }
>
> -void i915_gem_request_free(struct kref *req_ref)
> -{
> -	struct drm_i915_gem_request *req = container_of(req_ref,
> -						 typeof(*req), ref);
> -	kmem_cache_free(req->i915->requests, req);
> -}
> -
> -static inline int
> -__i915_gem_request_alloc(struct intel_engine_cs *engine,
> -			 struct i915_gem_context *ctx,
> -			 struct drm_i915_gem_request **req_out)
> -{
> -	struct drm_i915_private *dev_priv = engine->i915;
> -	unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
> -	struct drm_i915_gem_request *req;
> -	int ret;
> -
> -	if (!req_out)
> -		return -EINVAL;
> -
> -	*req_out = NULL;
> -
> -	/* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
> -	 * EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
> -	 * and restart.
> -	 */
> -	ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
> -	if (ret)
> -		return ret;
> -
> -	req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
> -	if (req == NULL)
> -		return -ENOMEM;
> -
> -	ret = i915_gem_get_seqno(engine->i915, &req->seqno);
> -	if (ret)
> -		goto err;
> -
> -	kref_init(&req->ref);
> -	req->i915 = dev_priv;
> -	req->engine = engine;
> -	req->reset_counter = reset_counter;
> -	req->ctx  = ctx;
> -	i915_gem_context_reference(req->ctx);
> -
> -	/*
> -	 * Reserve space in the ring buffer for all the commands required to
> -	 * eventually emit this request. This is to guarantee that the
> -	 * i915_add_request() call can't fail. Note that the reserve may need
> -	 * to be redone if the request is not actually submitted straight
> -	 * away, e.g. because a GPU scheduler has deferred it.
> -	 */
> -	req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;
> -
> -	if (i915.enable_execlists)
> -		ret = intel_logical_ring_alloc_request_extras(req);
> -	else
> -		ret = intel_ring_alloc_request_extras(req);
> -	if (ret)
> -		goto err_ctx;
> -
> -	*req_out = req;
> -	return 0;
> -
> -err_ctx:
> -	i915_gem_context_unreference(ctx);
> -err:
> -	kmem_cache_free(dev_priv->requests, req);
> -	return ret;
> -}
> -
> -/**
> - * i915_gem_request_alloc - allocate a request structure
> - *
> - * @engine: engine that we wish to issue the request on.
> - * @ctx: context that the request will be associated with.
> - *       This can be NULL if the request is not directly related to
> - *       any specific user context, in which case this function will
> - *       choose an appropriate context to use.
> - *
> - * Returns a pointer to the allocated request if successful,
> - * or an error code if not.
> - */
> -struct drm_i915_gem_request *
> -i915_gem_request_alloc(struct intel_engine_cs *engine,
> -		       struct i915_gem_context *ctx)
> -{
> -	struct drm_i915_gem_request *req;
> -	int err;
> -
> -	if (ctx == NULL)
> -		ctx = engine->i915->kernel_context;
> -	err = __i915_gem_request_alloc(engine, ctx, &req);
> -	return err ? ERR_PTR(err) : req;
> -}
> -
>   struct drm_i915_gem_request *
>   i915_gem_find_active_request(struct intel_engine_cs *engine)
>   {
> @@ -2903,14 +2264,14 @@ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
>   	 * implicit references on things like e.g. ppgtt address spaces through
>   	 * the request.
>   	 */
> -	while (!list_empty(&engine->request_list)) {
> +	if (!list_empty(&engine->request_list)) {
>   		struct drm_i915_gem_request *request;
>
> -		request = list_first_entry(&engine->request_list,
> -					   struct drm_i915_gem_request,
> -					   list);
> +		request = list_last_entry(&engine->request_list,
> +					  struct drm_i915_gem_request,
> +					  list);
>
> -		i915_gem_request_retire(request);
> +		i915_gem_request_retire_upto(request);
>   	}
>
>   	/* Having flushed all requests from all queues, we know that all
> @@ -2974,7 +2335,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *engine)
>   		if (!i915_gem_request_completed(request))
>   			break;
>
> -		i915_gem_request_retire(request);
> +		i915_gem_request_retire_upto(request);
>   	}
>
>   	/* Move any buffers on the active list that are no longer referenced
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> new file mode 100644
> index 000000000000..34b2f151cdfc
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -0,0 +1,659 @@
> +/*
> + * Copyright © 2008-2015 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#include "i915_drv.h"
> +
> +static int i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
> +{
> +	if (__i915_terminally_wedged(reset_counter))
> +		return -EIO;
> +
> +	if (__i915_reset_in_progress(reset_counter)) {
> +		/* Non-interruptible callers can't handle -EAGAIN, hence return
> +		 * -EIO unconditionally for these. */
> +		if (!interruptible)
> +			return -EIO;
> +
> +		return -EAGAIN;
> +	}
> +
> +	return 0;
> +}
> +
> +static int i915_gem_init_seqno(struct drm_i915_private *dev_priv, u32 seqno)
> +{
> +	struct intel_engine_cs *engine;
> +	int ret;
> +
> +	/* Carefully retire all requests without writing to the rings */
> +	for_each_engine(engine, dev_priv) {
> +		ret = intel_engine_idle(engine);
> +		if (ret)
> +			return ret;
> +	}
> +	i915_gem_retire_requests(dev_priv);
> +
> +	/* If the seqno wraps around, we need to clear the breadcrumb rbtree */
> +	if (!i915_seqno_passed(seqno, dev_priv->next_seqno)) {
> +		while (intel_kick_waiters(dev_priv) ||
> +		       intel_kick_signalers(dev_priv))
> +			yield();
> +	}
> +
> +	/* Finally reset hw state */
> +	for_each_engine(engine, dev_priv)
> +		intel_ring_init_seqno(engine, seqno);
> +
> +	return 0;
> +}
> +
> +int i915_gem_set_seqno(struct drm_device *dev, u32 seqno)
> +{
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +	int ret;
> +
> +	if (seqno == 0)
> +		return -EINVAL;
> +
> +	/* The HWS page needs to be set less than what we
> +	 * will inject to the ring
> +	 */
> +	ret = i915_gem_init_seqno(dev_priv, seqno - 1);
> +	if (ret)
> +		return ret;
> +
> +	/* Carefully set the last_seqno value so that wrap
> +	 * detection still works
> +	 */
> +	dev_priv->next_seqno = seqno;
> +	dev_priv->last_seqno = seqno - 1;
> +	if (dev_priv->last_seqno == 0)
> +		dev_priv->last_seqno--;
> +
> +	return 0;
> +}
> +
> +static int i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
> +{
> +	/* reserve 0 for non-seqno */
> +	if (unlikely(dev_priv->next_seqno == 0)) {
> +		int ret = i915_gem_init_seqno(dev_priv, 0);
> +		if (ret)
> +			return ret;
> +
> +		dev_priv->next_seqno = 1;
> +	}
> +
> +	*seqno = dev_priv->last_seqno = dev_priv->next_seqno++;
> +	return 0;
> +}
> +
> +static inline int
> +__i915_gem_request_alloc(struct intel_engine_cs *engine,
> +			 struct i915_gem_context *ctx,
> +			 struct drm_i915_gem_request **req_out)
> +{
> +	struct drm_i915_private *dev_priv = engine->i915;
> +	unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
> +	struct drm_i915_gem_request *req;
> +	int ret;
> +
> +	if (!req_out)
> +		return -EINVAL;
> +
> +	*req_out = NULL;
> +
> +	/* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
> +	 * EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
> +	 * and restart.
> +	 */
> +	ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
> +	if (ret)
> +		return ret;
> +
> +	req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
> +	if (req == NULL)
> +		return -ENOMEM;
> +
> +	ret = i915_gem_get_seqno(dev_priv, &req->seqno);
> +	if (ret)
> +		goto err;
> +
> +	kref_init(&req->ref);
> +	req->i915 = dev_priv;
> +	req->engine = engine;
> +	req->reset_counter = reset_counter;
> +	req->ctx = ctx;
> +	i915_gem_context_reference(ctx);
> +
> +	/*
> +	 * Reserve space in the ring buffer for all the commands required to
> +	 * eventually emit this request. This is to guarantee that the
> +	 * i915_add_request() call can't fail. Note that the reserve may need
> +	 * to be redone if the request is not actually submitted straight
> +	 * away, e.g. because a GPU scheduler has deferred it.
> +	 */
> +	req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;
> +
> +	if (i915.enable_execlists)
> +		ret = intel_logical_ring_alloc_request_extras(req);
> +	else
> +		ret = intel_ring_alloc_request_extras(req);
> +	if (ret)
> +		goto err_ctx;
> +
> +	*req_out = req;
> +	return 0;
> +
> +err_ctx:
> +	i915_gem_context_unreference(ctx);
> +err:
> +	kmem_cache_free(dev_priv->requests, req);
> +	return ret;
> +}
> +
> +/**
> + * i915_gem_request_alloc - allocate a request structure
> + *
> + * @engine: engine that we wish to issue the request on.
> + * @ctx: context that the request will be associated with.
> + *       This can be NULL if the request is not directly related to
> + *       any specific user context, in which case this function will
> + *       choose an appropriate context to use.
> + *
> + * Returns a pointer to the allocated request if successful,
> + * or an error code if not.
> + */
> +struct drm_i915_gem_request *
> +i915_gem_request_alloc(struct intel_engine_cs *engine,
> +		       struct i915_gem_context *ctx)
> +{
> +	struct drm_i915_gem_request *req;
> +	int err;
> +
> +	if (ctx == NULL)
> +		ctx = engine->i915->kernel_context;
> +	err = __i915_gem_request_alloc(engine, ctx, &req);
> +	return err ? ERR_PTR(err) : req;
> +}
> +
> +int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
> +				   struct drm_file *file)
> +{
> +	struct drm_i915_private *dev_private;
> +	struct drm_i915_file_private *file_priv;
> +
> +	WARN_ON(!req || !file || req->file_priv);
> +
> +	if (!req || !file)
> +		return -EINVAL;
> +
> +	if (req->file_priv)
> +		return -EINVAL;
> +
> +	dev_private = req->i915;
> +	file_priv = file->driver_priv;
> +
> +	spin_lock(&file_priv->mm.lock);
> +	req->file_priv = file_priv;
> +	list_add_tail(&req->client_list, &file_priv->mm.request_list);
> +	spin_unlock(&file_priv->mm.lock);
> +
> +	req->pid = get_pid(task_pid(current));
> +
> +	return 0;
> +}
> +
> +static inline void
> +i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
> +{
> +	struct drm_i915_file_private *file_priv = request->file_priv;
> +
> +	if (!file_priv)
> +		return;
> +
> +	spin_lock(&file_priv->mm.lock);
> +	list_del(&request->client_list);
> +	request->file_priv = NULL;
> +	spin_unlock(&file_priv->mm.lock);
> +
> +	put_pid(request->pid);
> +	request->pid = NULL;
> +}
> +
> +static void i915_gem_request_retire(struct drm_i915_gem_request *request)
> +{
> +	trace_i915_gem_request_retire(request);
> +	list_del_init(&request->list);
> +
> +	/* We know the GPU must have read the request to have
> +	 * sent us the seqno + interrupt, so use the position
> +	 * of tail of the request to update the last known position
> +	 * of the GPU head.
> +	 *
> +	 * Note this requires that we are always called in request
> +	 * completion order.
> +	 */
> +	request->ringbuf->last_retired_head = request->postfix;
> +
> +	i915_gem_request_remove_from_client(request);
> +
> +	if (request->previous_context) {
> +		if (i915.enable_execlists)
> +			intel_lr_context_unpin(request->previous_context,
> +					       request->engine);
> +	}
> +
> +	i915_gem_context_unreference(request->ctx);
> +	i915_gem_request_unreference(request);
> +}
> +
> +void i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
> +{
> +	struct intel_engine_cs *engine = req->engine;
> +	struct drm_i915_gem_request *tmp;
> +
> +	lockdep_assert_held(&req->i915->dev->struct_mutex);
> +
> +	if (list_empty(&req->list))
> +		return;
> +
> +	do {
> +		tmp = list_first_entry(&engine->request_list,
> +				       typeof(*tmp), list);
> +
> +		i915_gem_request_retire(tmp);
> +	} while (tmp != req);
> +
> +	WARN_ON(i915_verify_lists(engine->dev));
> +}
> +
> +static void i915_gem_mark_busy(struct drm_i915_private *dev_priv,
> +			       const struct intel_engine_cs *engine)
> +{
> +	dev_priv->gt.active_engines |= intel_engine_flag(engine);
> +	if (dev_priv->gt.awake)
> +		return;
> +
> +	intel_runtime_pm_get_noresume(dev_priv);
> +	dev_priv->gt.awake = true;
> +
> +	intel_enable_gt_powersave(dev_priv);
> +	i915_update_gfx_val(dev_priv);
> +	if (INTEL_INFO(dev_priv)->gen >= 6)
> +		gen6_rps_busy(dev_priv);
> +
> +	queue_delayed_work(dev_priv->wq,
> +			   &dev_priv->gt.retire_work,
> +			   round_jiffies_up_relative(HZ));
> +}
> +
> +/*
> + * NB: This function is not allowed to fail. Doing so would mean that the
> + * request is not being tracked for completion but the work itself is
> + * going to happen on the hardware. This would be a Bad Thing(tm).
> + */
> +void __i915_add_request(struct drm_i915_gem_request *request,
> +			struct drm_i915_gem_object *obj,
> +			bool flush_caches)
> +{
> +	struct intel_engine_cs *engine;
> +	struct drm_i915_private *dev_priv;
> +	struct intel_ringbuffer *ringbuf;
> +	u32 request_start;
> +	u32 reserved_tail;
> +	int ret;
> +
> +	if (WARN_ON(request == NULL))
> +		return;
> +
> +	engine = request->engine;
> +	dev_priv = request->i915;
> +	ringbuf = request->ringbuf;
> +
> +	/*
> +	 * To ensure that this call will not fail, space for its emissions
> +	 * should already have been reserved in the ring buffer. Let the ring
> +	 * know that it is time to use that space up.
> +	 */
> +	request_start = intel_ring_get_tail(ringbuf);
> +	reserved_tail = request->reserved_space;
> +	request->reserved_space = 0;
> +
> +	/*
> +	 * Emit any outstanding flushes - execbuf can fail to emit the flush
> +	 * after having emitted the batchbuffer command. Hence we need to fix
> +	 * things up similar to emitting the lazy request. The difference here
> +	 * is that the flush _must_ happen before the next request, no matter
> +	 * what.
> +	 */
> +	if (flush_caches) {
> +		if (i915.enable_execlists)
> +			ret = logical_ring_flush_all_caches(request);
> +		else
> +			ret = intel_ring_flush_all_caches(request);
> +		/* Not allowed to fail! */
> +		WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
> +	}
> +
> +	trace_i915_gem_request_add(request);
> +
> +	request->head = request_start;
> +
> +	/* Whilst this request exists, batch_obj will be on the
> +	 * active_list, and so will hold the active reference. Only when this
> +	 * request is retired will the batch_obj be moved onto the
> +	 * inactive_list and lose its active reference. Hence we do not need
> +	 * to explicitly hold another reference here.
> +	 */
> +	request->batch_obj = obj;
> +
> +	/* Seal the request and mark it as pending execution. Note that
> +	 * we may inspect this state, without holding any locks, during
> +	 * hangcheck. Hence we apply the barrier to ensure that we do not
> +	 * see a more recent value in the hws than we are tracking.
> +	 */
> +	request->emitted_jiffies = jiffies;
> +	request->previous_seqno = engine->last_submitted_seqno;
> +	smp_store_mb(engine->last_submitted_seqno, request->seqno);
> +	list_add_tail(&request->list, &engine->request_list);
> +
> +	/* Record the position of the start of the request so that
> +	 * should we detect the updated seqno part-way through the
> +	 * GPU processing the request, we never over-estimate the
> +	 * position of the head.
> +	 */
> +	request->postfix = intel_ring_get_tail(ringbuf);
> +
> +	if (i915.enable_execlists)
> +		ret = engine->emit_request(request);
> +	else {
> +		ret = engine->add_request(request);
> +
> +		request->tail = intel_ring_get_tail(ringbuf);
> +	}
> +	/* Not allowed to fail! */
> +	WARN(ret, "emit|add_request failed: %d!\n", ret);
> +	/* Sanity check that the reserved size was large enough. */
> +	ret = intel_ring_get_tail(ringbuf) - request_start;
> +	if (ret < 0)
> +		ret += ringbuf->size;
> +	WARN_ONCE(ret > reserved_tail,
> +		  "Not enough space reserved (%d bytes) "
> +		  "for adding the request (%d bytes)\n",
> +		  reserved_tail, ret);
> +
> +	i915_gem_mark_busy(dev_priv, engine);
> +}
> +
> +static unsigned long local_clock_us(unsigned *cpu)
> +{
> +	unsigned long t;
> +
> +	/* Cheaply and approximately convert from nanoseconds to microseconds.
> +	 * The result and subsequent calculations are also defined in the same
> +	 * approximate microseconds units. The principal source of timing
> +	 * error here is from the simple truncation.
> +	 *
> +	 * Note that local_clock() is only defined wrt to the current CPU;
> +	 * the comparisons are no longer valid if we switch CPUs. Instead of
> +	 * blocking preemption for the entire busywait, we can detect the CPU
> +	 * switch and use that as indicator of system load and a reason to
> +	 * stop busywaiting, see busywait_stop().
> +	 */
> +	*cpu = get_cpu();
> +	t = local_clock() >> 10;
> +	put_cpu();
> +
> +	return t;
> +}
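
(Aside, not part of the patch: the >> 10 above divides by 1024 rather than
by 1000, so each "microsecond" unit here is really 1.024us. A hypothetical
worked example:

	unsigned long ns = 5000;	/* 5us of wall time */
	unsigned long t = ns >> 10;	/* t == 4, not 5 */

For the few-microsecond spin budgets used further down, the ~2.4% scaling
error plus up to one unit of truncation is well within the noise, which is
what the comment above alludes to.)
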
> +
> +static bool busywait_stop(unsigned long timeout, unsigned cpu)
> +{
> +	unsigned this_cpu;
> +
> +	if (time_after(local_clock_us(&this_cpu), timeout))
> +		return true;
> +
> +	return this_cpu != cpu;
> +}
> +
> +bool __i915_spin_request(const struct drm_i915_gem_request *req,
> +			 int state, unsigned long timeout_us)
> +{
> +	unsigned cpu;
> +
> +	/* When waiting for high frequency requests, e.g. during synchronous
> +	 * rendering split between the CPU and GPU, the finite amount of time
> +	 * required to set up the irq and wait upon it limits the response
> +	 * rate. By busywaiting on the request completion for a short while we
> +	 * can service the high frequency waits as quickly as possible. However,
> +	 * if it is a slow request, we want to sleep as quickly as possible.
> +	 * The tradeoff between waiting and sleeping is roughly the time it
> +	 * takes to sleep on a request, on the order of a microsecond.
> +	 */
> +
> +	timeout_us += local_clock_us(&cpu);
> +	do {
> +		if (i915_gem_request_completed(req))
> +			return true;
> +
> +		if (signal_pending_state(state, current))
> +			break;
> +
> +		if (busywait_stop(timeout_us, cpu))
> +			break;
> +
> +		cpu_relax_lowlatency();
> +	} while (!need_resched());
> +
> +	return false;
> +}
> +
> +/**
> + * __i915_wait_request - wait until execution of request has finished
> + * @req: duh!
> + * @interruptible: do an interruptible wait (normally yes)
> + * @timeout: in - how long to wait (NULL forever); out - how much time remaining
> + *
> + * Note: It is of utmost importance that the passed in seqno and reset_counter
> + * values have been read by the caller in an smp safe manner. Where read-side
> + * locks are involved, it is sufficient to read the reset_counter before
> + * unlocking the lock that protects the seqno. For lockless tricks, the
> + * reset_counter _must_ be read before, and an appropriate smp_rmb must be
> + * inserted.
> + *
> + * Returns 0 if the request was found within the allotted time. Else returns the
> + * errno with remaining time filled in timeout argument.
> + */
> +int __i915_wait_request(struct drm_i915_gem_request *req,
> +			bool interruptible,
> +			s64 *timeout,
> +			struct intel_rps_client *rps)
> +{
> +	int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
> +	DEFINE_WAIT(reset);
> +	struct intel_wait wait;
> +	unsigned long timeout_remain;
> +	int ret = 0;
> +
> +	might_sleep();
> +
> +	if (list_empty(&req->list))
> +		return 0;
> +
> +	if (i915_gem_request_completed(req))
> +		return 0;
> +
> +	timeout_remain = MAX_SCHEDULE_TIMEOUT;
> +	if (timeout) {
> +		if (WARN_ON(*timeout < 0))
> +			return -EINVAL;
> +
> +		if (*timeout == 0)
> +			return -ETIME;
> +
> +		/* Record current time in case interrupted, or wedged */
> +		timeout_remain = nsecs_to_jiffies_timeout(*timeout);
> +		*timeout += ktime_get_raw_ns();
> +	}
> +
> +	trace_i915_gem_request_wait_begin(req);
> +
> +	/* This client is about to stall waiting for the GPU. In many cases
> +	 * this is undesirable and limits the throughput of the system, as
> +	 * many clients cannot continue processing user input/output whilst
> +	 * blocked. RPS autotuning may take tens of milliseconds to respond
> +	 * to the GPU load and thus incurs additional latency for the client.
> +	 * We can circumvent that by promoting the GPU frequency to maximum
> +	 * before we wait. This makes the GPU throttle up much more quickly
> +	 * (good for benchmarks and user experience, e.g. window animations),
> +	 * but at a cost of spending more power processing the workload
> +	 * (bad for battery). Not all clients even want their results
> +	 * immediately and for them we should just let the GPU select its own
> +	 * frequency to maximise efficiency. To prevent a single client from
> +	 * forcing the clocks too high for the whole system, we only allow
> +	 * each client to waitboost once in a busy period.
> +	 */
> +	if (INTEL_INFO(req->i915)->gen >= 6)
> +		gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
> +
> +	/* Optimistic spin for the next ~jiffie before touching IRQs */
> +	if (i915_spin_request(req, state, 5))
> +		goto complete;
> +
> +	intel_wait_init(&wait, req->seqno);
> +	set_current_state(state);
> +	if (intel_engine_add_wait(req->engine, &wait))
> +		/* In order to check that we haven't missed the interrupt
> +		 * as we enabled it, we need to kick ourselves to do a
> +		 * coherent check on the seqno before we sleep.
> +		 */
> +		goto wakeup;
> +
> +	add_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
> +	for (;;) {
> +		if (signal_pending_state(state, current)) {
> +			ret = -ERESTARTSYS;
> +			break;
> +		}
> +
> +		/* Ensure that even if the GPU hangs, we get woken up. */
> +		i915_queue_hangcheck(req->i915);
> +
> +		timeout_remain = io_schedule_timeout(timeout_remain);
> +		if (timeout_remain == 0) {
> +			ret = -ETIME;
> +			break;
> +		}
> +
> +		if (intel_wait_complete(&wait))
> +			break;
> +
> +wakeup:
> +		set_current_state(state);
> +
> +		/* Carefully check if the request is complete, giving time
> +		 * for the seqno to be visible following the interrupt.
> +		 * We also have to check in case we are kicked by the GPU
> +		 * reset in order to drop the struct_mutex.
> +		 */
> +		if (__i915_request_irq_complete(req))
> +			break;
> +
> +		/* Only spin if we know the GPU is processing this request */
> +		if (i915_spin_request(req, state, 2))
> +			break;
> +	}
> +	remove_wait_queue(&req->i915->gpu_error.wait_queue, &reset);
> +
> +	intel_engine_remove_wait(req->engine, &wait);
> +	__set_current_state(TASK_RUNNING);
> +complete:
> +	trace_i915_gem_request_wait_end(req);
> +
> +	if (timeout) {
> +		*timeout -= ktime_get_raw_ns();
> +		if (*timeout < 0)
> +			*timeout = 0;
> +
> +		/*
> +		 * Apparently ktime isn't accurate enough and occasionally has a
> +		 * bit of mismatch in the jiffies<->nsecs<->ktime loop. So patch
> +		 * things up to make the test happy. We allow up to 1 jiffy.
> +		 *
> +		 * This is a regression from the timespec->ktime conversion.
> +		 */
> +		if (ret == -ETIME && *timeout < jiffies_to_usecs(1)*1000)
> +			*timeout = 0;
> +	}
> +
> +	if (rps && req->seqno == req->engine->last_submitted_seqno) {
> +		/* The GPU is now idle and this client has stalled.
> +		 * Since no other client has submitted a request in the
> +		 * meantime, assume that this client is the only one
> +		 * supplying work to the GPU but is unable to keep that
> +		 * work supplied because it is waiting. Since the GPU is
> +		 * then never kept fully busy, RPS autoclocking will
> +		 * keep the clocks relatively low, causing further delays.
> +		 * Compensate by giving the synchronous client credit for
> +		 * a waitboost next time.
> +		 */
> +		spin_lock(&req->i915->rps.client_lock);
> +		list_del_init(&rps->link);
> +		spin_unlock(&req->i915->rps.client_lock);
> +	}
> +
> +	return ret;
> +}
> +
> +/**
> + * Waits for a request to be signaled, and cleans up the
> + * request and object lists appropriately for that event.
> + */
> +int i915_wait_request(struct drm_i915_gem_request *req)
> +{
> +	int ret;
> +
> +	BUG_ON(req == NULL);
> +	BUG_ON(!mutex_is_locked(&req->i915->dev->struct_mutex));
> +
> +	ret = __i915_wait_request(req, req->i915->mm.interruptible,
> +				  NULL, NULL);
> +	if (ret)
> +		return ret;
> +
> +	/* If the GPU hung, we want to keep the requests to find the guilty. */
> +	if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
> +		i915_gem_request_retire_upto(req);
> +
> +	return 0;
> +}
> +
> +void i915_gem_request_free(struct kref *req_ref)
> +{
> +	struct drm_i915_gem_request *req =
> +		container_of(req_ref, typeof(*req), ref);
> +	kmem_cache_free(req->i915->requests, req);
> +}
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
> new file mode 100644
> index 000000000000..166e0733d2d8
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/i915_gem_request.h
> @@ -0,0 +1,245 @@
> +/*
> + * Copyright © 2008-2015 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef I915_GEM_REQUEST_H
> +#define I915_GEM_REQUEST_H
> +
> +/**
> + * Request queue structure.
> + *
> + * The request queue allows us to note sequence numbers that have been emitted
> + * and may be associated with active buffers to be retired.
> + *
> + * By keeping this list, we can avoid having to do questionable sequence
> + * number comparisons on buffer last_read|write_seqno. It also allows an
> + * emission time to be associated with the request for tracking how far ahead
> + * of the GPU the submission is.
> + *
> + * The requests are reference counted, so upon creation they should have an
> + * initial reference taken using kref_init
> + */
> +struct drm_i915_gem_request {
> +	struct kref ref;
> +
> +	/** On which ring this request was generated */
> +	struct drm_i915_private *i915;
> +
> +	/**
> +	 * Context and ring buffer related to this request
> +	 * Contexts are refcounted, so when this request is associated with a
> +	 * context, we must increment the context's refcount, to guarantee that
> +	 * it persists while any request is linked to it. Requests themselves
> +	 * are also refcounted, so the request will only be freed when the last
> +	 * reference to it is dismissed, and the code in
> +	 * i915_gem_request_free() will then decrement the refcount on the
> +	 * context.
> +	 */
> +	struct i915_gem_context *ctx;
> +	struct intel_engine_cs *engine;
> +	struct intel_ringbuffer *ringbuf;
> +	struct intel_signal_node signaling;
> +
> +	unsigned reset_counter;
> +
> +	/** GEM sequence number associated with the previous request,
> +	 * when the HWS breadcrumb is equal to this the GPU is processing
> +	 * this request.
> +	 */
> +	u32 previous_seqno;
> +
> +	/** GEM sequence number associated with this request,
> +	 * when the HWS breadcrumb is equal or greater than this the GPU
> +	 * has finished processing this request.
> +	 */
> +	u32 seqno;
> +
> +	/** Position in the ringbuffer of the start of the request */
> +	u32 head;
> +
> +	/**
> +	 * Position in the ringbuffer of the start of the postfix.
> +	 * This is required to calculate the maximum available ringbuffer
> +	 * space without overwriting the postfix.
> +	 */
> +	u32 postfix;
> +
> +	/** Position in the ringbuffer of the end of the whole request */
> +	u32 tail;
> +
> +	/** Preallocate space in the ringbuffer for emitting the request */
> +	u32 reserved_space;
> +
> +	/**
> +	 * Context related to the previous request.
> +	 * As the contexts are accessed by the hardware until the switch is
> +	 * completed to a new context, the hardware may still be writing
> +	 * to the context object after the breadcrumb is visible. We must
> +	 * not unpin/unbind/prune that object whilst still active and so
> +	 * we keep the previous context pinned until the following (this)
> +	 * request is retired.
> +	 */
> +	struct i915_gem_context *previous_context;
> +
> +	/** Batch buffer related to this request if any (used for
> +	 * error state dump only) */
> +	struct drm_i915_gem_object *batch_obj;
> +
> +	/** Time at which this request was emitted, in jiffies. */
> +	unsigned long emitted_jiffies;
> +
> +	/** global list entry for this request */
> +	struct list_head list;
> +
> +	struct drm_i915_file_private *file_priv;
> +	/** file_priv list entry for this request */
> +	struct list_head client_list;
> +
> +	/** process identifier submitting this request */
> +	struct pid *pid;
> +
> +	/**
> +	 * The ELSP only accepts two elements at a time, so we queue
> +	 * context/tail pairs on a given queue (ring->execlist_queue) until the
> +	 * hardware is available. The queue serves a double purpose: we also use
> +	 * it to keep track of the up to 2 contexts currently in the hardware
> +	 * (usually one in execution and the other queued up by the GPU): We
> +	 * only remove elements from the head of the queue when the hardware
> +	 * informs us that an element has been completed.
> +	 *
> +	 * All accesses to the queue are mediated by a spinlock
> +	 * (ring->execlist_lock).
> +	 */
> +
> +	/** Execlist link in the submission queue.*/
> +	struct list_head execlist_link;
> +
> +	/** Execlists no. of times this request has been sent to the ELSP */
> +	int elsp_submitted;
> +
> +	/** Execlists context hardware id. */
> +	unsigned ctx_hw_id;
> +};
> +
> +static inline struct drm_i915_private *
> +__request_to_i915(const struct drm_i915_gem_request *request)
> +{
> +	return request->i915;
> +}
> +
> +struct drm_i915_gem_request * __must_check
> +i915_gem_request_alloc(struct intel_engine_cs *engine,
> +		       struct i915_gem_context *ctx);
> +void i915_gem_request_free(struct kref *req_ref);
> +int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
> +				   struct drm_file *file);
> +void i915_gem_request_retire_upto(struct drm_i915_gem_request *req);
> +
> +static inline uint32_t
> +i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
> +{
> +	return req ? req->seqno : 0;
> +}
> +
> +static inline struct intel_engine_cs *
> +i915_gem_request_get_engine(struct drm_i915_gem_request *req)
> +{
> +	return req ? req->engine : NULL;
> +}
> +
> +static inline struct drm_i915_gem_request *
> +i915_gem_request_reference(struct drm_i915_gem_request *req)
> +{
> +	if (req)
> +		kref_get(&req->ref);
> +	return req;
> +}
> +
> +static inline void
> +i915_gem_request_unreference(struct drm_i915_gem_request *req)
> +{
> +	kref_put(&req->ref, i915_gem_request_free);
> +}
> +
> +static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
> +					   struct drm_i915_gem_request *src)
> +{
> +	if (src)
> +		i915_gem_request_reference(src);
> +
> +	if (*pdst)
> +		i915_gem_request_unreference(*pdst);
> +
> +	*pdst = src;
> +}
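
(A minimal usage sketch of the assign helper; the tracker struct below is
hypothetical, not from the patch:

	struct hypothetical_tracker {
		struct drm_i915_gem_request *last_req;
	};

	static void tracker_update(struct hypothetical_tracker *t,
				   struct drm_i915_gem_request *req)
	{
		/* takes a reference on req before dropping the old one,
		 * so this is safe even when req == t->last_req */
		i915_gem_request_assign(&t->last_req, req);
	}

	/* and to drop the tracked request entirely: */
	i915_gem_request_assign(&t->last_req, NULL);

Referencing the new request before unreferencing the old one is what makes
the self-assignment case safe.)
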
> +
> +void __i915_add_request(struct drm_i915_gem_request *req,
> +			struct drm_i915_gem_object *batch_obj,
> +			bool flush_caches);
> +#define i915_add_request(req) \
> +	__i915_add_request(req, NULL, true)
> +#define i915_add_request_no_flush(req) \
> +	__i915_add_request(req, NULL, false)
> +
> +struct intel_rps_client;
> +
> +int __i915_wait_request(struct drm_i915_gem_request *req,
> +			bool interruptible,
> +			s64 *timeout,
> +			struct intel_rps_client *rps);
> +int __must_check i915_wait_request(struct drm_i915_gem_request *req);
> +
> +static inline u32 intel_engine_get_seqno(struct intel_engine_cs *engine);
> +
> +/**
> + * Returns true if seq1 is later than, or equal to, seq2.
> + */
> +static inline bool
> +i915_seqno_passed(uint32_t seq1, uint32_t seq2)
> +{
> +	return (int32_t)(seq1 - seq2) >= 0;
> +}
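
(The cast to a signed difference is what makes this safe across u32
wraparound; a quick sketch with hypothetical seqno values:

	i915_seqno_passed(0x00000002, 0xfffffffe);	/* true:  (int32_t)4 >= 0 */
	i915_seqno_passed(0xfffffffe, 0x00000002);	/* false: (int32_t)-4 < 0 */

The comparison only stays valid while the two seqnos are within 2^31 of
each other, i.e. as long as requests are retired long before the seqno
space wraps.)
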
> +static inline bool i915_gem_request_started(const struct drm_i915_gem_request *req)
> +{
> +	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
> +				 req->previous_seqno);
> +}
> +
> +static inline bool i915_gem_request_completed(const struct drm_i915_gem_request *req)
> +{
> +	return i915_seqno_passed(intel_engine_get_seqno(req->engine),
> +				 req->seqno);
> +}
> +
> +bool __i915_spin_request(const struct drm_i915_gem_request *request,
> +			 int state, unsigned long timeout_us);
> +static inline bool i915_spin_request(const struct drm_i915_gem_request *request,
> +				     int state, unsigned long timeout_us)
> +{
> +	return (i915_gem_request_started(request) &&
> +		__i915_spin_request(request, state, timeout_us));
> +}
> +
> +#endif /* I915_GEM_REQUEST_H */
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index a066dcfcdd38..3ba5302ce19f 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -1400,6 +1400,9 @@ void i915_capture_error_state(struct drm_i915_private *dev_priv,
>   	struct drm_i915_error_state *error;
>   	unsigned long flags;
>
> +	if (READ_ONCE(dev_priv->gpu_error.first_error))
> +		return;
> +
>   	/* Account for pipe specific data like PIPE*STAT */
>   	error = kzalloc(sizeof(*error), GFP_ATOMIC);
>   	if (!error) {
>

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 08/62] drm/i915: Remove stop-rings debugfs interface
  2016-06-03 16:36 ` [PATCH 08/62] drm/i915: Remove stop-rings debugfs interface Chris Wilson
@ 2016-06-08 11:50   ` Arun Siluvery
  0 siblings, 0 replies; 87+ messages in thread
From: Arun Siluvery @ 2016-06-08 11:50 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On 03/06/2016 22:06, Chris Wilson wrote:
> Now that we have (near) universal GPU recovery code, we can inject a
> real hang from userspace and not need any fakery. Not only does this
> mean that the testing is far more realistic, but we can simplify the
> kernel in the process.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/i915_debugfs.c     | 35 --------------------------
>   drivers/gpu/drm/i915/i915_drv.c         | 17 ++-----------
>   drivers/gpu/drm/i915/i915_drv.h         | 19 --------------
>   drivers/gpu/drm/i915/i915_gem.c         | 44 ++++++++++-----------------------
>   drivers/gpu/drm/i915/intel_lrc.c        |  3 ---
>   drivers/gpu/drm/i915/intel_ringbuffer.c |  8 ------
>   drivers/gpu/drm/i915/intel_ringbuffer.h |  1 -
>   7 files changed, 15 insertions(+), 112 deletions(-)
>

looks good to me,
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>

regards
Arun

> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index dd6cf222e8f5..8f576b443ff6 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -4821,40 +4821,6 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_wedged_fops,
>   			"%llu\n");
>
>   static int
> -i915_ring_stop_get(void *data, u64 *val)
> -{
> -	struct drm_device *dev = data;
> -	struct drm_i915_private *dev_priv = dev->dev_private;
> -
> -	*val = dev_priv->gpu_error.stop_rings;
> -
> -	return 0;
> -}
> -
> -static int
> -i915_ring_stop_set(void *data, u64 val)
> -{
> -	struct drm_device *dev = data;
> -	struct drm_i915_private *dev_priv = dev->dev_private;
> -	int ret;
> -
> -	DRM_DEBUG_DRIVER("Stopping rings 0x%08llx\n", val);
> -
> -	ret = mutex_lock_interruptible(&dev->struct_mutex);
> -	if (ret)
> -		return ret;
> -
> -	dev_priv->gpu_error.stop_rings = val;
> -	mutex_unlock(&dev->struct_mutex);
> -
> -	return 0;
> -}
> -
> -DEFINE_SIMPLE_ATTRIBUTE(i915_ring_stop_fops,
> -			i915_ring_stop_get, i915_ring_stop_set,
> -			"0x%08llx\n");
> -
> -static int
>   i915_ring_missed_irq_get(void *data, u64 *val)
>   {
>   	struct drm_device *dev = data;
> @@ -5457,7 +5423,6 @@ static const struct i915_debugfs_files {
>   	{"i915_max_freq", &i915_max_freq_fops},
>   	{"i915_min_freq", &i915_min_freq_fops},
>   	{"i915_cache_sharing", &i915_cache_sharing_fops},
> -	{"i915_ring_stop", &i915_ring_stop_fops},
>   	{"i915_ring_missed_irq", &i915_ring_missed_irq_fops},
>   	{"i915_ring_test_irq", &i915_ring_test_irq_fops},
>   	{"i915_gem_drop_caches", &i915_drop_caches_fops},
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index 7ba040141722..f2ac0cae929b 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -2125,24 +2125,11 @@ int i915_reset(struct drm_i915_private *dev_priv)
>   		goto error;
>   	}
>
> +	pr_notice("drm/i915: Resetting chip after gpu hang\n");
> +
>   	i915_gem_reset(dev);
>
>   	ret = intel_gpu_reset(dev_priv, ALL_ENGINES);
> -
> -	/* Also reset the gpu hangman. */
> -	if (error->stop_rings != 0) {
> -		DRM_INFO("Simulated gpu hang, resetting stop_rings\n");
> -		error->stop_rings = 0;
> -		if (ret == -ENODEV) {
> -			DRM_INFO("Reset not implemented, but ignoring "
> -				 "error for simulated gpu hangs\n");
> -			ret = 0;
> -		}
> -	}
> -
> -	if (i915_stop_ring_allow_warn(dev_priv))
> -		pr_notice("drm/i915: Resetting chip after gpu hang\n");
> -
>   	if (ret) {
>   		if (ret != -ENODEV)
>   			DRM_ERROR("Failed to reset chip: %i\n", ret);
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 3f075adf9e84..a48c0f4e1d42 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1393,13 +1393,6 @@ struct i915_gpu_error {
>   	 */
>   	wait_queue_head_t reset_queue;
>
> -	/* Userspace knobs for gpu hang simulation;
> -	 * combines both a ring mask, and extra flags
> -	 */
> -	u32 stop_rings;
> -#define I915_STOP_RING_ALLOW_BAN       (1 << 31)
> -#define I915_STOP_RING_ALLOW_WARN      (1 << 30)
> -
>   	/* For missed irq/seqno simulation. */
>   	unsigned long test_irq_rings;
>   };
> @@ -3292,18 +3285,6 @@ static inline u32 i915_reset_count(struct i915_gpu_error *error)
>   	return ((i915_reset_counter(error) & ~I915_WEDGED) + 1) / 2;
>   }
>
> -static inline bool i915_stop_ring_allow_ban(struct drm_i915_private *dev_priv)
> -{
> -	return dev_priv->gpu_error.stop_rings == 0 ||
> -		dev_priv->gpu_error.stop_rings & I915_STOP_RING_ALLOW_BAN;
> -}
> -
> -static inline bool i915_stop_ring_allow_warn(struct drm_i915_private *dev_priv)
> -{
> -	return dev_priv->gpu_error.stop_rings == 0 ||
> -		dev_priv->gpu_error.stop_rings & I915_STOP_RING_ALLOW_WARN;
> -}
> -
>   void i915_gem_reset(struct drm_device *dev);
>   bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, bool force);
>   int __must_check i915_gem_init(struct drm_device *dev);
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 0f487e3b920c..f48f54193972 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2703,44 +2703,30 @@ void __i915_add_request(struct drm_i915_gem_request *request,
>   	i915_gem_mark_busy(dev_priv, engine);
>   }
>
> -static bool i915_context_is_banned(struct drm_i915_private *dev_priv,
> -				   const struct i915_gem_context *ctx)
> +static bool i915_context_is_banned(const struct i915_gem_context *ctx)
>   {
>   	unsigned long elapsed;
>
> -	elapsed = get_seconds() - ctx->hang_stats.guilty_ts;
> -
>   	if (ctx->hang_stats.banned)
>   		return true;
>
> +	elapsed = get_seconds() - ctx->hang_stats.guilty_ts;
>   	if (ctx->hang_stats.ban_period_seconds &&
>   	    elapsed <= ctx->hang_stats.ban_period_seconds) {
> -		if (!i915_gem_context_is_default(ctx)) {
> -			DRM_DEBUG("context hanging too fast, banning!\n");
> -			return true;
> -		} else if (i915_stop_ring_allow_ban(dev_priv)) {
> -			if (i915_stop_ring_allow_warn(dev_priv))
> -				DRM_ERROR("gpu hanging too fast, banning!\n");
> -			return true;
> -		}
> +		DRM_DEBUG("context hanging too fast, banning!\n");
> +		return true;
>   	}
>
>   	return false;
>   }
>
> -static void i915_set_reset_status(struct drm_i915_private *dev_priv,
> -				  struct i915_gem_context *ctx,
> +static void i915_set_reset_status(struct i915_gem_context *ctx,
>   				  const bool guilty)
>   {
> -	struct i915_ctx_hang_stats *hs;
> -
> -	if (WARN_ON(!ctx))
> -		return;
> -
> -	hs = &ctx->hang_stats;
> +	struct i915_ctx_hang_stats *hs = &ctx->hang_stats;
>
>   	if (guilty) {
> -		hs->banned = i915_context_is_banned(dev_priv, ctx);
> +		hs->banned = i915_context_is_banned(ctx);
>   		hs->batch_active++;
>   		hs->guilty_ts = get_seconds();
>   	} else {
> @@ -2867,27 +2853,23 @@ i915_gem_find_active_request(struct intel_engine_cs *engine)
>   	return NULL;
>   }
>
> -static void i915_gem_reset_engine_status(struct drm_i915_private *dev_priv,
> -				       struct intel_engine_cs *engine)
> +static void i915_gem_reset_engine_status(struct intel_engine_cs *engine)
>   {
>   	struct drm_i915_gem_request *request;
>   	bool ring_hung;
>
>   	request = i915_gem_find_active_request(engine);
> -
>   	if (request == NULL)
>   		return;
>
>   	ring_hung = engine->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG;
>
> -	i915_set_reset_status(dev_priv, request->ctx, ring_hung);
> -
> +	i915_set_reset_status(request->ctx, ring_hung);
>   	list_for_each_entry_continue(request, &engine->request_list, list)
> -		i915_set_reset_status(dev_priv, request->ctx, false);
> +		i915_set_reset_status(request->ctx, false);
>   }
>
> -static void i915_gem_reset_engine_cleanup(struct drm_i915_private *dev_priv,
> -					struct intel_engine_cs *engine)
> +static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
>   {
>   	struct intel_ringbuffer *buffer;
>
> @@ -2957,10 +2939,10 @@ void i915_gem_reset(struct drm_device *dev)
>   	 * their reference to the objects, the inspection must be done first.
>   	 */
>   	for_each_engine(engine, dev_priv)
> -		i915_gem_reset_engine_status(dev_priv, engine);
> +		i915_gem_reset_engine_status(engine);
>
>   	for_each_engine(engine, dev_priv)
> -		i915_gem_reset_engine_cleanup(dev_priv, engine);
> +		i915_gem_reset_engine_cleanup(engine);
>
>   	i915_gem_context_reset(dev);
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 9e19b2c5b3ae..0742a849acce 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -764,9 +764,6 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
>   	intel_logical_ring_emit(ringbuf, MI_NOOP);
>   	intel_logical_ring_advance(ringbuf);
>
> -	if (intel_engine_stopped(engine))
> -		return 0;
> -
>   	/* We keep the previous context alive until we retire the following
>   	 * request. This ensures that the context object is still pinned
>   	 * for any residual writes the HW makes into it on the context switch
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index 161c0792b1bf..327ad7fdf118 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -58,18 +58,10 @@ void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
>   					    ringbuf->tail, ringbuf->size);
>   }
>
> -bool intel_engine_stopped(struct intel_engine_cs *engine)
> -{
> -	struct drm_i915_private *dev_priv = engine->i915;
> -	return dev_priv->gpu_error.stop_rings & intel_engine_flag(engine);
> -}
> -
>   static void __intel_ring_advance(struct intel_engine_cs *engine)
>   {
>   	struct intel_ringbuffer *ringbuf = engine->buffer;
>   	ringbuf->tail &= ringbuf->size - 1;
> -	if (intel_engine_stopped(engine))
> -		return;
>   	engine->write_tail(engine, ringbuf->tail);
>   }
>
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index d0cd9a1aa80e..6017367e94fb 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -480,7 +480,6 @@ static inline void intel_ring_advance(struct intel_engine_cs *engine)
>   }
>   int __intel_ring_space(int head, int tail, int size);
>   void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
> -bool intel_engine_stopped(struct intel_engine_cs *engine);
>
>   int __must_check intel_engine_idle(struct intel_engine_cs *engine);
>   void intel_ring_init_seqno(struct intel_engine_cs *engine, u32 seqno);
>

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 12/62] drm/i915: Skip capturing an error state if we already have one
  2016-06-08 11:14   ` Arun Siluvery
@ 2016-06-08 12:06     ` Chris Wilson
  0 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-08 12:06 UTC (permalink / raw)
  To: Arun Siluvery; +Cc: intel-gfx

On Wed, Jun 08, 2016 at 04:44:45PM +0530, Arun Siluvery wrote:
> On 03/06/2016 22:06, Chris Wilson wrote:
> >As we only ever keep the first error state around, we can avoid some
> >work that can be quite intrusive if we don't record the error the second
> >time around. This does move the race whereby the user could discard one
> >error state as the second is being captured, but that race exists in the
> >current code and we hope that recapturing error state is only done for
> >debugging.
> >
> >Note that as we discard the error state for simulated errors, igt that
> >exercise error capture continue to function.
> >
> >Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >---
> 
The patch does more than what is described here; all of the i915_gem_request
changes are part of this patch, perhaps squashed in accidentally.

Thanks, I spotted that as I posted it, and was able to recover the 2
patches from reflog.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 01/62] drm/i915: Only start retire worker when idle
  2016-06-08 11:06       ` Chris Wilson
@ 2016-06-08 12:07         ` Joonas Lahtinen
  0 siblings, 0 replies; 87+ messages in thread
From: Joonas Lahtinen @ 2016-06-08 12:07 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On ke, 2016-06-08 at 12:06 +0100, Chris Wilson wrote:
> On Wed, Jun 08, 2016 at 11:53:15AM +0100, Chris Wilson wrote:
> > 
> > On Tue, Jun 07, 2016 at 02:31:07PM +0300, Joonas Lahtinen wrote:
> > > 
> > > On pe, 2016-06-03 at 17:36 +0100, Chris Wilson wrote:
> > > > 
> > > >  i915_gem_idle_work_handler(struct work_struct *work)
> > > >  {
> > > >  	struct drm_i915_private *dev_priv =
> > > > -		container_of(work, typeof(*dev_priv), mm.idle_work.work);
> > > > +		container_of(work, typeof(*dev_priv), gt.idle_work.work);
> > > >  	struct drm_device *dev = dev_priv->dev;
> > > >  	struct intel_engine_cs *engine;
> > > >  
> > > > -	for_each_engine(engine, dev_priv)
> > > > -		if (!list_empty(&engine->request_list))
> > > > -			return;
> > > > +	if (!READ_ONCE(dev_priv->gt.awake))
> > > > +		return;
> > > >  
> > > > -	/* we probably should sync with hangcheck here, using cancel_work_sync.
> > > > -	 * Also locking seems to be fubar here, engine->request_list is protected
> > > > -	 * by dev->struct_mutex. */
> > > > +	mutex_lock(&dev->struct_mutex);
> > > > +	if (dev_priv->gt.active_engines)
> > > > +		goto out;
> > > >  
> > > > -	intel_mark_idle(dev_priv);
> > > > +	for_each_engine(engine, dev_priv)
> > > > +		i915_gem_batch_pool_fini(&engine->batch_pool);
> > > >  
> > > > -	if (mutex_trylock(&dev->struct_mutex)) {
> > > > -		for_each_engine(engine, dev_priv)
> > > > -			i915_gem_batch_pool_fini(&engine->batch_pool);
> > > > +	GEM_BUG_ON(!dev_priv->gt.awake);
> > > > +	dev_priv->gt.awake = false;
> > > >  
> > > > -		mutex_unlock(&dev->struct_mutex);
> > > > +	if (INTEL_INFO(dev_priv)->gen >= 6)
> > > > +		gen6_rps_idle(dev_priv);
> > > > +	intel_runtime_pm_put(dev_priv);
> > > > +out:
> > > > +	mutex_unlock(&dev->struct_mutex);
> > > > +
> > > > +	if (!dev_priv->gt.awake &&
> > > No READ_ONCE here, even though we just unlocked the mutex. So it lacks some
> > > consistency.
> > > 
> > > Also, this assumes we might be pre-empted between unlocking the mutex and
> > > making this test, so I'm a little bit confused. Do you want to optimize
> > > by avoiding calling cancel_delayed_work_sync?
> > General principle: never call work_sync functions with locks held. I
> > had actually thought I had fixed this up (but realized that I just
> > rewrote hangcheck later on instead ;)
> > 
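
(A generic sketch of the inversion that principle guards against;
illustrative pseudocode only, not i915 code:

	mutex_lock(&lock);
	cancel_work_sync(&some_work);	/* waits for the handler... */
	mutex_unlock(&lock);

	static void some_work_fn(struct work_struct *work)
	{
		mutex_lock(&lock);	/* ...which blocks here: deadlock */
		mutex_unlock(&lock);
	}

Hence the rework below performs the synchronous cancel before taking
struct_mutex and only re-arms hangcheck after dropping it.)
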
> > Ok, what I think is safer here is
> > 
> > 	bool hangcheck = cancel_delay_work_sync(hangcheck_work)
> > 
> > 	mutex_lock()
> > 	if (actually_idle()) {
> > 		awake = false;
> > 		missed_irq_rings |= intel_kick_waiters();
> > 	}
> > 	mutex_unlock();
> > 
> > 	if (awake && hangcheck)
> > 		queue_hangcheck()
> > 	
> > So always kick the hangcheck and re-enable it if we tried to idle too early.
> > This will potentially delay hangcheck by one full hangcheck period if we
> > do encounter that race. But we shouldn't be hitting this race that
> > often, or hanging the GPU for that matter.
> Actual delta:
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 406046f66e36..856da4036fb3 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -3066,10 +3066,15 @@ i915_gem_idle_work_handler(struct work_struct *work)
>                 container_of(work, typeof(*dev_priv), gt.idle_work.work);
>         struct drm_device *dev = dev_priv->dev;
>         struct intel_engine_cs *engine;
> +       unsigned stuck_engines;
> +       bool rearm_hangcheck;
>  
>         if (!READ_ONCE(dev_priv->gt.awake))
>                 return;
>  
> +       rearm_hangcheck =
> +               cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
> +
>         mutex_lock(&dev->struct_mutex);
>         if (dev_priv->gt.active_engines)
>                 goto out;
> @@ -3079,6 +3084,13 @@ i915_gem_idle_work_handler(struct work_struct *work)
>  
>         GEM_BUG_ON(!dev_priv->gt.awake);
>         dev_priv->gt.awake = false;
> +       rearm_hangcheck = false;
> +
> +       stuck_engines = intel_kick_waiters(dev_priv);
> +       if (unlikely(stuck_engines)) {
> +               DRM_DEBUG_DRIVER("kicked stuck waiters...missed irq\n");
> +               dev_priv->gpu_error.missed_irq_rings |= stuck_engines;
> +       }
>  
>         if (INTEL_INFO(dev_priv)->gen >= 6)
>                 gen6_rps_idle(dev_priv);
> @@ -3086,14 +3098,8 @@ i915_gem_idle_work_handler(struct work_struct *work)
>  out:
>         mutex_unlock(&dev->struct_mutex);
>  
> -       if (!dev_priv->gt.awake &&
> -           cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work)) {
> -               unsigned stuck = intel_kick_waiters(dev_priv);
> -               if (unlikely(stuck)) {
> -                       DRM_DEBUG_DRIVER("kicked stuck waiters...missed irq\n");
> -                       dev_priv->gpu_error.missed_irq_rings |= stuck;
> -               }
> -       }
> +       if (rearm_hangcheck)
> +               i915_queue_hangcheck(dev_priv);

As discussed on IRC, this should not race, so with the above hunk:

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

>  }
> -Chris
> 
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 11/62] drm/i915: Clean up GPU hang message
  2016-06-03 16:36 ` [PATCH 11/62] drm/i915: Clean up GPU hang message Chris Wilson
@ 2016-06-14  8:13   ` Mika Kuoppala
  0 siblings, 0 replies; 87+ messages in thread
From: Mika Kuoppala @ 2016-06-14  8:13 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Remove some redundant kernel messages as we deduce a hung GPU and
> capture the error state.
>
> v2: Fix "hang" vs "no progress" message whilst I was there
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_irq.c | 41 ++++++++++++++++++++++++++---------------
>  1 file changed, 26 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 34e25fc2b90a..860235d1e0bf 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -3083,9 +3083,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>  		container_of(work, typeof(*dev_priv),
>  			     gpu_error.hangcheck_work.work);
>  	struct intel_engine_cs *engine;
> -	enum intel_engine_id id;
> -	int busy_count = 0, rings_hung = 0;
> -	bool stuck[I915_NUM_ENGINES] = { 0 };
> +	unsigned hung = 0, stuck = 0;
> +	int busy_count = 0;
>  #define BUSY 1
>  #define KICK 5
>  #define HUNG 20
> @@ -3103,7 +3102,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>  	 */
>  	intel_uncore_arm_unclaimed_mmio_detection(dev_priv);
>  
> -	for_each_engine_id(engine, dev_priv, id) {
> +	for_each_engine(engine, dev_priv) {
>  		bool busy = intel_engine_has_waiter(engine);
>  		u64 acthd;
>  		u32 seqno;
> @@ -3166,10 +3165,15 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>  					break;
>  				case HANGCHECK_HUNG:
>  					engine->hangcheck.score += HUNG;
> -					stuck[id] = true;
>  					break;
>  				}
>  			}
> +
> +			if (engine->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG) {
> +				hung |= intel_engine_flag(engine);
> +				if (engine->hangcheck.action != HANGCHECK_HUNG)
> +					stuck |= intel_engine_flag(engine);
> +			}
>  		} else {
>  			engine->hangcheck.action = HANGCHECK_ACTIVE;
>  
> @@ -3194,17 +3198,24 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>  		busy_count += busy;
>  	}
>  
> -	for_each_engine_id(engine, dev_priv, id) {
> -		if (engine->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG) {
> -			DRM_INFO("%s on %s\n",
> -				 stuck[id] ? "stuck" : "no progress",
> -				 engine->name);
> -			rings_hung |= intel_engine_flag(engine);
> -		}
> -	}
> +	if (hung) {
> +		char msg[80];
> +		int len;
>  
> -	if (rings_hung)
> -		i915_handle_error(dev_priv, rings_hung, "Engine(s) hung");
> +		/* If some rings hung but others were still busy, only
> +		 * blame the hanging rings in the synopsis.
> +		 */
> +		if (stuck != hung)
> +			hung &= ~stuck;
> +		len = snprintf(msg, sizeof(msg),
> +			       "%s on ", stuck == hung ? "No progress" : "Hang");
> +		for_each_engine_masked(engine, dev_priv, hung)
> +			len += snprintf(msg + len, sizeof(msg) - len,
> +					"%s, ", engine->name);
> +		msg[len-2] = '\0';
> +

msg[len-1] ?

snprintf returns the number of bytes that would have been written, so there
is a possibility of writing past the end of the on-stack buffer here. Safer
to use scnprintf.
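
A minimal illustration of the difference (hypothetical buffer, not the
driver code):

	char msg[8];
	int len;

	len = snprintf(msg, sizeof(msg), "%s", "rcs0, bcs0, vcs0");
	/* len == 16: the length that *would* have been written, so
	 * msg[len-2] indexes past the 8-byte buffer */

	len = scnprintf(msg, sizeof(msg), "%s", "rcs0, bcs0, vcs0");
	/* len == 7: bytes actually stored (excluding the '\0'), always
	 * a valid offset into msg */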

-Mika



> +		return i915_handle_error(dev_priv, hung, msg);
> +	}
>  
>  	/* Reset timer in case GPU hangs without another request being added */
>  	if (busy_count)
> -- 
> 2.8.1
>

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 06/62] drm/i915: Flush the RPS bottom-half when the GPU idles
  2016-06-03 16:36 ` [PATCH 06/62] drm/i915: Flush the RPS bottom-half when the GPU idles Chris Wilson
@ 2016-06-16  8:49   ` Michał Winiarski
  2016-06-16 11:09     ` Chris Wilson
  0 siblings, 1 reply; 87+ messages in thread
From: Michał Winiarski @ 2016-06-16  8:49 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx, Jesse Barnes

On Fri, Jun 03, 2016 at 05:36:31PM +0100, Chris Wilson wrote:
> Make sure that the RPS bottom-half is flushed before we set the idle
> frequency when we decide the GPU is idle. This should prevent any races
> with the bottom-half and setting the idle frequency, and ensures that
> the bottom-half is bounded by the GPU's rpm reference taken for when it
> is active (i.e. between gen6_rps_busy() and gen6_rps_idle()).
> 
> v2: Avoid recursively using the i915->wq - RPS does not touch the
> struct_mutex so has no place being on the ordered i915->wq.
> v3: Enable/disable interrupts for RPS busy/idle in order to prevent
> further HW access from RPS outside of the wakeref.

The race can be easily observed since:

commit aed242ff7ebb697e4dff912bd4dc7ec7192f7581
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed Mar 18 09:48:21 2015 +0000

    drm/i915: Relax RPS contraints to allows setting minfreq on idle

Because idle_freq != min_freq_softlimit on BDW and HSW, we see a failure in
the pm_rps test. Flushing the RPS bottom-half partially fixes that. We need to
either modify the test to match the current behaviour, or switch back to
min_freq_softlimit as soon as we transition idle->active.
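
For reference, the ordering described in the patch notes above might look
roughly like this (a hedged sketch of the idle path, not the actual diff;
the symbol names are taken from the driver but the body is approximate):

	/* sketch: bound the RPS bottom-half by the GPU's rpm reference */
	void gen6_rps_idle(struct drm_i915_private *dev_priv)
	{
		/* stop new interrupts from queueing the bottom-half ... */
		gen6_disable_rps_interrupts(dev_priv);

		/* ... and wait for any queued bottom-half to complete, so
		 * no RPS work can touch the hardware after this point */
		flush_work(&dev_priv->rps.work);

		/* now safe to drop to dev_priv->rps.idle_freq and later
		 * release the rpm reference */
	}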

References: https://bugs.freedesktop.org/show_bug.cgi?id=89728 
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>

-Michał

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Imre Deak <imre.deak@intel.com>
> Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
> ---
>  drivers/gpu/drm/i915/i915_drv.c |  3 ---
>  drivers/gpu/drm/i915/i915_irq.c | 32 ++++++++++++--------------------
>  drivers/gpu/drm/i915/intel_pm.c | 14 ++++++++++----
>  3 files changed, 22 insertions(+), 27 deletions(-)

^ permalink raw reply	[flat|nested] 87+ messages in thread

* Re: [PATCH 06/62] drm/i915: Flush the RPS bottom-half when the GPU idles
  2016-06-16  8:49   ` Michał Winiarski
@ 2016-06-16 11:09     ` Chris Wilson
  0 siblings, 0 replies; 87+ messages in thread
From: Chris Wilson @ 2016-06-16 11:09 UTC (permalink / raw)
  To: Michał Winiarski; +Cc: intel-gfx, Jesse Barnes

On Thu, Jun 16, 2016 at 10:49:17AM +0200, Michał Winiarski wrote:
> On Fri, Jun 03, 2016 at 05:36:31PM +0100, Chris Wilson wrote:
> > Make sure that the RPS bottom-half is flushed before we set the idle
> > frequency when we decide the GPU is idle. This should prevent any races
> > with the bottom-half and setting the idle frequency, and ensures that
> > the bottom-half is bounded by the GPU's rpm reference taken for when it
> > is active (i.e. between gen6_rps_busy() and gen6_rps_idle()).
> > 
> > v2: Avoid recursively using the i915->wq - RPS does not touch the
> > struct_mutex so has no place being on the ordered i915->wq.
> > v3: Enable/disable interrupts for RPS busy/idle in order to prevent
> > further HW access from RPS outside of the wakeref.
> 
> The race can be easily observed since:
> 
> commit aed242ff7ebb697e4dff912bd4dc7ec7192f7581
> Author: Chris Wilson <chris@chris-wilson.co.uk>
> Date:   Wed Mar 18 09:48:21 2015 +0000
> 
>     drm/i915: Relax RPS contraints to allows setting minfreq on idle
> 
> Because idle_freq != min_freq_softlimit for BDW and HSW - we see a failure in
> pm_rps. Flushing RPS bottom-half partially fixes that. We need to either modify
> the test to match the current behaviour, or switch back to min_freq_softlimit
> as soon as we transition idle->active.

Ensuring we are at or above min_freq_softlimit from gen6_rps_busy() is
not a bad plan; it matches the user's expectations.
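
Something along those lines, perhaps (a rough sketch only; the
intel_set_rps() call and the locking are approximate, not final code):

	/* sketch: on idle->busy, come back up to at least the user's
	 * soft minimum instead of staying at the idle frequency */
	void gen6_rps_busy(struct drm_i915_private *dev_priv)
	{
		mutex_lock(&dev_priv->rps.hw_lock);
		if (dev_priv->rps.enabled &&
		    dev_priv->rps.cur_freq < dev_priv->rps.min_freq_softlimit)
			intel_set_rps(dev_priv,
				      dev_priv->rps.min_freq_softlimit);
		mutex_unlock(&dev_priv->rps.hw_lock);
	}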
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 87+ messages in thread

end of thread, other threads:[~2016-06-16 11:09 UTC | newest]

Thread overview: 87+ messages
-- links below jump to the message on this page --
2016-06-03 16:36 The vma leak fix from yonder Chris Wilson
2016-06-03 16:36 ` [PATCH 01/62] drm/i915: Only start retire worker when idle Chris Wilson
2016-06-07 11:31   ` Joonas Lahtinen
2016-06-08 10:53     ` Chris Wilson
2016-06-08 11:06       ` Chris Wilson
2016-06-08 12:07         ` Joonas Lahtinen
2016-06-03 16:36 ` [PATCH 02/62] drm/i915: Do not keep postponing the idle-work Chris Wilson
2016-06-07 11:34   ` Joonas Lahtinen
2016-06-03 16:36 ` [PATCH 03/62] drm/i915: Remove redundant queue_delayed_work() from throttle ioctl Chris Wilson
2016-06-07 11:39   ` Joonas Lahtinen
2016-06-03 16:36 ` [PATCH 04/62] drm/i915: Restore waitboost credit to the synchronous waiter Chris Wilson
2016-06-08  9:04   ` Daniel Vetter
2016-06-08 10:38     ` Chris Wilson
2016-06-03 16:36 ` [PATCH 05/62] drm/i915: Add background commentary to "waitboosting" Chris Wilson
2016-06-03 16:36 ` [PATCH 06/62] drm/i915: Flush the RPS bottom-half when the GPU idles Chris Wilson
2016-06-16  8:49   ` Michał Winiarski
2016-06-16 11:09     ` Chris Wilson
2016-06-03 16:36 ` [PATCH 07/62] drm/i915: Remove temporary RPM wakeref assert disables Chris Wilson
2016-06-03 16:36 ` [PATCH 08/62] drm/i915: Remove stop-rings debugfs interface Chris Wilson
2016-06-08 11:50   ` Arun Siluvery
2016-06-03 16:36 ` [PATCH 09/62] drm/i915: Record the ringbuffer associated with the request Chris Wilson
2016-06-03 16:36 ` [PATCH 10/62] drm/i915: Allow userspace to request no-error-capture upon GPU hangs Chris Wilson
2016-06-03 16:36 ` [PATCH 11/62] drm/i915: Clean up GPU hang message Chris Wilson
2016-06-14  8:13   ` Mika Kuoppala
2016-06-03 16:36 ` [PATCH 12/62] drm/i915: Skip capturing an error state if we already have one Chris Wilson
2016-06-08 11:14   ` Arun Siluvery
2016-06-08 12:06     ` Chris Wilson
2016-06-03 16:36 ` [PATCH 13/62] drm/i915: Derive GEM requests from dma-fence Chris Wilson
2016-06-08  9:14   ` Daniel Vetter
2016-06-08 10:33     ` Chris Wilson
2016-06-03 16:36 ` [PATCH 14/62] drm/i915: Rename request reference/unreference to get/put Chris Wilson
2016-06-08  9:15   ` Daniel Vetter
2016-06-03 16:36 ` [PATCH 15/62] drm/i915: Rename i915_gem_context_reference/unreference() Chris Wilson
2016-06-06 12:12   ` Joonas Lahtinen
2016-06-03 16:36 ` [PATCH 16/62] drm/i915: Wrap drm_gem_object_lookup in i915_gem_object_lookup Chris Wilson
2016-06-03 16:36 ` [PATCH 17/62] drm/i915: Wrap drm_gem_object_reference in i915_gem_object_get Chris Wilson
2016-06-03 16:36 ` [PATCH 18/62] drm/i915: Rename drm_gem_object_unreference in preparation for lockless free Chris Wilson
2016-06-03 16:36 ` [PATCH 19/62] drm/i915: Rename drm_gem_object_unreference_unlocked " Chris Wilson
2016-06-03 16:36 ` [PATCH 20/62] drm/i915: Disable waitboosting for fence_wait() Chris Wilson
2016-06-03 16:36 ` [PATCH 21/62] drm/i915: Disable waitboosting for mmioflips/semaphores Chris Wilson
2016-06-03 16:36 ` [PATCH 22/62] drm/i915: Treat ringbuffer writes as write to normal memory Chris Wilson
2016-06-03 16:36 ` [PATCH 23/62] drm/i915: Rename ring->virtual_start as ring->vaddr Chris Wilson
2016-06-03 16:36 ` [PATCH 24/62] drm/i915: Convert i915_semaphores_is_enabled over to early sanitize Chris Wilson
2016-06-03 16:36 ` [PATCH 25/62] drm/i915: Unify intel_logical_ring_emit and intel_ring_emit Chris Wilson
2016-06-03 16:36 ` [PATCH 26/62] drm/i915: Rename request->ring to request->engine Chris Wilson
2016-06-06 13:42   ` Tvrtko Ursulin
2016-06-03 16:36 ` [PATCH 27/62] drm/i915: Rename request->ringbuf to request->ring Chris Wilson
2016-06-06 13:44   ` Tvrtko Ursulin
2016-06-08  9:18     ` Daniel Vetter
2016-06-03 16:36 ` [PATCH 28/62] drm/i915: Rename backpointer from intel_ringbuffer to intel_engine_cs Chris Wilson
2016-06-06 13:45   ` Tvrtko Ursulin
2016-06-03 16:36 ` [PATCH 29/62] drm/i915: Rename intel_context[engine].ringbuf Chris Wilson
2016-06-03 16:36 ` [PATCH 30/62] drm/i915: Rename struct intel_ringbuffer to struct intel_ring Chris Wilson
2016-06-03 16:36 ` [PATCH 31/62] drm/i915: Rename residual ringbuf parameters Chris Wilson
2016-06-03 16:36 ` [PATCH 32/62] drm/i915: Rename intel_pin_and_map_ring() Chris Wilson
2016-06-03 16:36 ` [PATCH 33/62] drm/i915: Remove obsolete engine->gpu_caches_dirty Chris Wilson
2016-06-03 16:36 ` [PATCH 34/62] drm/i915: Simplify request_alloc by returning the allocated request Chris Wilson
2016-06-03 16:37 ` [PATCH 35/62] drm/i915: Unify legacy/execlists emission of MI_BATCHBUFFER_START Chris Wilson
2016-06-03 16:37 ` [PATCH 36/62] drm/i915: Convert engine->write_tail to operate on a request Chris Wilson
2016-06-03 16:37 ` [PATCH 37/62] drm/i915: Unify request submission Chris Wilson
2016-06-03 16:37 ` [PATCH 38/62] drm/i915: Stop passing caller's num_dwords to engine->semaphore.signal() Chris Wilson
2016-06-03 16:37 ` [PATCH 39/62] drm/i915: Reuse legacy breadcrumbs + tail emission Chris Wilson
2016-06-03 16:37 ` [PATCH 40/62] drm/i915: Remove duplicate golden render state init from execlists Chris Wilson
2016-06-03 16:37 ` [PATCH 41/62] drm/i915: Unify legacy/execlists submit_execbuf callbacks Chris Wilson
2016-06-03 16:37 ` [PATCH 42/62] drm/i915: Simplify calling engine->sync_to Chris Wilson
2016-06-03 16:37 ` [PATCH 43/62] drm/i915: Introduce i915_gem_active for request tracking Chris Wilson
2016-06-03 16:37 ` [PATCH 44/62] drm/i915: Prepare i915_gem_active for annotations Chris Wilson
2016-06-03 16:37 ` [PATCH 45/62] drm/i915: Mark up i915_gem_active for locking annotation Chris Wilson
2016-06-03 16:37 ` [PATCH 46/62] drm/i915: Refactor blocking waits Chris Wilson
2016-06-03 16:37 ` [PATCH 47/62] drm/i915: Rename request->list to link for consistency Chris Wilson
2016-06-03 16:37 ` [PATCH 48/62] drm/i915: Remove obsolete i915_gem_object_flush_active() Chris Wilson
2016-06-03 16:37 ` [PATCH 49/62] drm/i915: Refactor activity tracking for requests Chris Wilson
2016-06-03 16:37 ` [PATCH 50/62] drm/i915: Double check activity before relocations Chris Wilson
2016-06-03 16:37 ` [PATCH 51/62] drm/i915: Move request list retirement to i915_gem_request.c Chris Wilson
2016-06-03 16:37 ` [PATCH 52/62] drm/i915: Amalgamate GGTT/ppGTT vma debug list walkers Chris Wilson
2016-06-03 16:37 ` [PATCH 53/62] drm/i915: Split early global GTT initialisation Chris Wilson
2016-06-03 16:37 ` [PATCH 54/62] drm/i915: Store owning file on the i915_address_space Chris Wilson
2016-06-03 16:37 ` [PATCH 55/62] drm/i915: i915_vma_move_to_active prep patch Chris Wilson
2016-06-03 16:37 ` [PATCH 56/62] drm/i915: Count how many VMA are bound for an object Chris Wilson
2016-06-03 16:37 ` [PATCH 57/62] drm/i915: Be more careful when unbinding vma Chris Wilson
2016-06-03 16:37 ` [PATCH 58/62] drm/i915: Kill drop_pages() Chris Wilson
2016-06-03 16:37 ` [PATCH 59/62] drm/i915: Track active vma requests Chris Wilson
2016-06-03 16:37 ` [PATCH 60/62] drm/i915: Release vma when the handle is closed Chris Wilson
2016-06-03 16:37 ` [PATCH 61/62] drm/i915: Mark the context and address space as closed Chris Wilson
2016-06-03 16:37 ` [PATCH 62/62] Revert "drm/i915: Clean up associated VMAs on context destruction" Chris Wilson
2016-06-05  5:24 ` ✗ Ro.CI.BAT: failure for series starting with [01/62] drm/i915: Only start retire worker when idle Patchwork
2016-06-08  9:30 ` The vma leak fix from yonder Daniel Vetter
