* [PATCH 01/26] drm/i915: Keep engine alive as we retire the context
@ 2019-06-18  7:41 Chris Wilson
  2019-06-18  7:41 ` [PATCH 02/26] drm/i915: Skip shrinking already freed pages Chris Wilson
                   ` (29 more replies)
  0 siblings, 30 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Though we pin the context before taking the pm wakeref, during retire
we need to unpin before dropping the pm wakeref (breaking the "natural"
onion). During the unpin, we may need to attach a cleanup operation
onto the engine wakeref, ergo we want to keep the engine awake until
after the unpin.

v2: Push the engine wakeref into the barrier so we keep the onion unwind
ordering in the request itself
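
A rough userspace sketch of the ordering at stake, using toy counters
in place of the real intel_context pin and intel_engine_pm helpers
(purely illustrative, not the driver code):

#include <assert.h>
#include <stdio.h>

/* Toy counters standing in for the context pin and the engine wakeref. */
static int context_pin_count;
static int engine_wakeref_count;

static void context_pin(void)   { context_pin_count++; }
static void engine_pm_get(void) { engine_wakeref_count++; }

static void engine_pm_put(void)
{
	assert(engine_wakeref_count > 0);
	engine_wakeref_count--;
}

/* Unpinning may queue cleanup work that needs the engine to stay awake. */
static void context_unpin(void)
{
	assert(engine_wakeref_count > 0);
	assert(context_pin_count > 0);
	context_pin_count--;
}

/*
 * Retire breaks the usual "onion" (acquire A then B, release B then A):
 * the unpin must happen before the pm wakeref is dropped, so an extra
 * engine wakeref is held across the unpin to keep the engine alive.
 */
static void retire(void)
{
	engine_pm_get();
	context_unpin();
	engine_pm_put();
}

int main(void)
{
	context_pin();
	retire();
	printf("pins=%d, engine wakerefs=%d\n",
	       context_pin_count, engine_wakeref_count);
	return 0;
}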

Fixes: ce476c80b8bf ("drm/i915: Keep contexts pinned until after the next kernel context switch")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_active.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 2d019ac6db20..41ed2798687d 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -4,6 +4,8 @@
  * Copyright © 2019 Intel Corporation
  */
 
+#include "gt/intel_engine_pm.h"
+
 #include "i915_drv.h"
 #include "i915_active.h"
 #include "i915_globals.h"
@@ -268,8 +270,9 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 					    struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *i915 = engine->i915;
+	struct llist_node *pos, *next;
 	unsigned long tmp;
-	int err = 0;
+	int err;
 
 	GEM_BUG_ON(!engine->mask);
 	for_each_engine_masked(engine, i915, engine->mask, tmp) {
@@ -279,7 +282,7 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 		node = kmem_cache_alloc(global.slab_cache, GFP_KERNEL);
 		if (unlikely(!node)) {
 			err = -ENOMEM;
-			break;
+			goto unwind;
 		}
 
 		i915_active_request_init(&node->base,
@@ -288,10 +291,24 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 		node->ref = ref;
 		ref->count++;
 
+		intel_engine_pm_get(engine);
 		llist_add((struct llist_node *)&node->base.link,
 			  &ref->barriers);
 	}
 
+	return 0;
+
+unwind:
+	llist_for_each_safe(pos, next, llist_del_all(&ref->barriers)) {
+		struct active_node *node;
+
+		node = container_of((struct list_head *)pos,
+				    typeof(*node), base.link);
+		engine = (void *)rcu_access_pointer(node->base.request);
+
+		intel_engine_pm_put(engine);
+		kfree(node);
+	}
 	return err;
 }
 
@@ -328,6 +345,7 @@ void i915_active_acquire_barrier(struct i915_active *ref)
 
 		llist_add((struct llist_node *)&node->base.link,
 			  &engine->barrier_tasks);
+		intel_engine_pm_put(engine);
 	}
 	i915_active_release(ref);
 }
-- 
2.20.1


* [PATCH 02/26] drm/i915: Skip shrinking already freed pages
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18 11:59   ` Chris Wilson
  2019-06-18 16:06   ` Mika Kuoppala
  2019-06-18  7:41 ` [PATCH 03/26] drm/i915: Stop passing I915_WAIT_LOCKED to i915_request_wait() Chris Wilson
                   ` (28 subsequent siblings)
  29 siblings, 2 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Previously, we wanted to shrink the pages of freed objects before they
were RCU collected. However, by removing the struct_mutex serialisation
around the active reference, we now need to acquire an extra reference
around the wait. Unfortunately, this means that we have to skip objects
that are awaiting RCU collection.
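
The extra reference here comes from kref_get_unless_zero(); a rough
userspace equivalent of that pattern (toy atomics, not the i915 code
itself) looks like:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct object {
	atomic_int refcount;	/* 0 means the object is already being freed */
};

/* Take a reference only if the object is still alive. */
static bool get_unless_zero(struct object *obj)
{
	int old = atomic_load(&obj->refcount);

	do {
		if (old == 0)
			return false;	/* already awaiting RCU free: skip it */
	} while (!atomic_compare_exchange_weak(&obj->refcount, &old, old + 1));

	return true;
}

static void put(struct object *obj)
{
	atomic_fetch_sub(&obj->refcount, 1);
}

int main(void)
{
	struct object live  = { .refcount = 1 };
	struct object dying = { .refcount = 0 };

	/* The shrinker walks objects and skips any it cannot pin. */
	if (get_unless_zero(&live)) {
		printf("live object: shrink\n");
		put(&live);	/* drop the temporary reference */
	}
	printf("dying object: %s\n",
	       get_unless_zero(&dying) ? "shrink" : "skip");
	return 0;
}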

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c   | 47 +-------------------
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c |  5 +++
 2 files changed, 6 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 272ce30ce1d3..1b571fd26ed4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -149,33 +149,6 @@ void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)
 	}
 }
 
-static bool discard_backing_storage(struct drm_i915_gem_object *obj)
-{
-	/*
-	 * If we are the last user of the backing storage (be it shmemfs
-	 * pages or stolen etc), we know that the pages are going to be
-	 * immediately released. In this case, we can then skip copying
-	 * back the contents from the GPU.
-	 */
-	if (!i915_gem_object_is_shrinkable(obj))
-		return false;
-
-	if (obj->mm.madv != I915_MADV_WILLNEED)
-		return false;
-
-	if (!obj->base.filp)
-		return true;
-
-	/* At first glance, this looks racy, but then again so would be
-	 * userspace racing mmap against close. However, the first external
-	 * reference to the filp can only be obtained through the
-	 * i915_gem_mmap_ioctl() which safeguards us against the user
-	 * acquiring such a reference whilst we are in the middle of
-	 * freeing the object.
-	 */
-	return file_count(obj->base.filp) == 1;
-}
-
 static void __i915_gem_free_objects(struct drm_i915_private *i915,
 				    struct llist_node *freed)
 {
@@ -225,8 +198,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 		if (obj->ops->release)
 			obj->ops->release(obj);
 
-		if (WARN_ON(i915_gem_object_has_pinned_pages(obj)))
-			atomic_set(&obj->mm.pages_pin_count, 0);
+		atomic_set(&obj->mm.pages_pin_count, 0);
 		__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
 		GEM_BUG_ON(i915_gem_object_has_pages(obj));
 
@@ -324,23 +296,6 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
 {
 	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
 
-	if (obj->mm.quirked)
-		__i915_gem_object_unpin_pages(obj);
-
-	if (discard_backing_storage(obj)) {
-		struct drm_i915_private *i915 = to_i915(obj->base.dev);
-
-		obj->mm.madv = I915_MADV_DONTNEED;
-
-		if (i915_gem_object_has_pages(obj)) {
-			unsigned long flags;
-
-			spin_lock_irqsave(&i915->mm.obj_lock, flags);
-			list_move_tail(&obj->mm.link, &i915->mm.purge_list);
-			spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
-		}
-	}
-
 	/*
 	 * Before we free the object, make sure any pure RCU-only
 	 * read-side critical sections are complete, e.g.
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index c851c4029597..3a926a8755c6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -241,6 +241,9 @@ i915_gem_shrink(struct drm_i915_private *i915,
 			if (!can_release_pages(obj))
 				continue;
 
+			if (!kref_get_unless_zero(&obj->base.refcount))
+				continue;
+
 			spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
 
 			if (unsafe_drop_pages(obj)) {
@@ -253,7 +256,9 @@ i915_gem_shrink(struct drm_i915_private *i915,
 				}
 				mutex_unlock(&obj->mm.lock);
 			}
+
 			scanned += obj->base.size >> PAGE_SHIFT;
+			i915_gem_object_put(obj);
 
 			spin_lock_irqsave(&i915->mm.obj_lock, flags);
 		}
-- 
2.20.1


* [PATCH 03/26] drm/i915: Stop passing I915_WAIT_LOCKED to i915_request_wait()
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
  2019-06-18  7:41 ` [PATCH 02/26] drm/i915: Skip shrinking already freed pages Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-19 11:44   ` Mika Kuoppala
  2019-06-18  7:41 ` [PATCH 04/26] drm/i915: Flush the execution-callbacks on retiring Chris Wilson
                   ` (27 subsequent siblings)
  29 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Since commit eb8d0f5af4ec ("drm/i915: Remove GPU reset dependence on
struct_mutex"), the I915_WAIT_LOCKED flag passed to i915_request_wait()
has been defunct. Now go ahead and remove it from all callers.

References: eb8d0f5af4ec ("drm/i915: Remove GPU reset dependence on struct_mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 .../drm/i915/gem/selftests/i915_gem_context.c | 14 +++------
 drivers/gpu/drm/i915/gt/intel_ringbuffer.c    |  2 +-
 drivers/gpu/drm/i915/gt/intel_workarounds.c   |  2 +-
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |  9 ++----
 drivers/gpu/drm/i915/gt/selftest_lrc.c        | 24 ++++++---------
 .../gpu/drm/i915/gt/selftest_workarounds.c    |  6 ++--
 drivers/gpu/drm/i915/i915_active.h            |  2 +-
 drivers/gpu/drm/i915/i915_request.c           |  4 ---
 drivers/gpu/drm/i915/i915_trace.h             |  3 +-
 drivers/gpu/drm/i915/selftests/i915_request.c | 30 +++++++------------
 .../gpu/drm/i915/selftests/i915_timeline.c    |  6 ++--
 11 files changed, 36 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index 03ac5003abf1..eaa2b16574c7 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -83,9 +83,7 @@ static int live_nop_switch(void *arg)
 			}
 			i915_request_add(rq);
 		}
-		if (i915_request_wait(rq,
-				      I915_WAIT_LOCKED,
-				      HZ / 5) < 0) {
+		if (i915_request_wait(rq, 0, HZ / 5) < 0) {
 			pr_err("Failed to populated %d contexts\n", nctx);
 			i915_gem_set_wedged(i915);
 			err = -EIO;
@@ -128,9 +126,7 @@ static int live_nop_switch(void *arg)
 
 				i915_request_add(rq);
 			}
-			if (i915_request_wait(rq,
-					      I915_WAIT_LOCKED,
-					      HZ / 5) < 0) {
+			if (i915_request_wait(rq, 0, HZ / 5) < 0) {
 				pr_err("Switching between %ld contexts timed out\n",
 				       prime);
 				i915_gem_set_wedged(i915);
@@ -893,7 +889,7 @@ __read_slice_count(struct drm_i915_private *i915,
 	if (spin)
 		igt_spinner_end(spin);
 
-	ret = i915_request_wait(rq, I915_WAIT_LOCKED, MAX_SCHEDULE_TIMEOUT);
+	ret = i915_request_wait(rq, 0, MAX_SCHEDULE_TIMEOUT);
 	i915_request_put(rq);
 	if (ret < 0)
 		return ret;
@@ -980,9 +976,7 @@ __sseu_finish(struct drm_i915_private *i915,
 		igt_spinner_end(spin);
 
 	if ((flags & TEST_IDLE) && ret == 0) {
-		ret = i915_gem_wait_for_idle(i915,
-					     I915_WAIT_LOCKED,
-					     MAX_SCHEDULE_TIMEOUT);
+		ret = i915_gem_wait_for_idle(i915, 0, MAX_SCHEDULE_TIMEOUT);
 		if (ret)
 			return ret;
 
diff --git a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
index bd35ad202459..c6023bc9452d 100644
--- a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
@@ -1815,7 +1815,7 @@ static noinline int wait_for_space(struct intel_ring *ring, unsigned int bytes)
 		return -ENOSPC;
 
 	timeout = i915_request_wait(target,
-				    I915_WAIT_INTERRUPTIBLE | I915_WAIT_LOCKED,
+				    I915_WAIT_INTERRUPTIBLE,
 				    MAX_SCHEDULE_TIMEOUT);
 	if (timeout < 0)
 		return timeout;
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 165b0a45e009..9624d9e776e3 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -1384,7 +1384,7 @@ static int engine_wa_list_verify(struct intel_context *ce,
 		goto err_vma;
 
 	i915_request_add(rq);
-	if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
+	if (i915_request_wait(rq, 0, HZ / 5) < 0) {
 		err = -ETIME;
 		goto err_vma;
 	}
diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index b0b2998e56b8..1ee4c923044f 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -339,8 +339,7 @@ static int igt_hang_sanitycheck(void *arg)
 
 		timeout = 0;
 		igt_wedge_on_timeout(&w, i915, HZ / 10 /* 100ms timeout*/)
-			timeout = i915_request_wait(rq,
-						    I915_WAIT_LOCKED,
+			timeout = i915_request_wait(rq, 0,
 						    MAX_SCHEDULE_TIMEOUT);
 		if (i915_reset_failed(i915))
 			timeout = -EIO;
@@ -1098,7 +1097,7 @@ static int igt_reset_wait(void *arg)
 
 	reset_count = fake_hangcheck(i915, ALL_ENGINES);
 
-	timeout = i915_request_wait(rq, I915_WAIT_LOCKED, 10);
+	timeout = i915_request_wait(rq, 0, 10);
 	if (timeout < 0) {
 		pr_err("i915_request_wait failed on a stuck request: err=%ld\n",
 		       timeout);
@@ -1666,9 +1665,7 @@ static int igt_atomic_reset_engine(struct intel_engine_cs *engine,
 		struct igt_wedge_me w;
 
 		igt_wedge_on_timeout(&w, i915, HZ / 20 /* 50ms timeout*/)
-			i915_request_wait(rq,
-					  I915_WAIT_LOCKED,
-					  MAX_SCHEDULE_TIMEOUT);
+			i915_request_wait(rq, 0, MAX_SCHEDULE_TIMEOUT);
 		if (i915_reset_failed(i915))
 			err = -EIO;
 	}
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index d84d31e3da19..401e8b539297 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -192,7 +192,7 @@ static int live_busywait_preempt(void *arg)
 		}
 
 		/* Low priority request should be busywaiting now */
-		if (i915_request_wait(lo, I915_WAIT_LOCKED, 1) != -ETIME) {
+		if (i915_request_wait(lo, 0, 1) != -ETIME) {
 			pr_err("%s: Busywaiting request did not!\n",
 			       engine->name);
 			err = -EIO;
@@ -220,7 +220,7 @@ static int live_busywait_preempt(void *arg)
 		intel_ring_advance(hi, cs);
 		i915_request_add(hi);
 
-		if (i915_request_wait(lo, I915_WAIT_LOCKED, HZ / 5) < 0) {
+		if (i915_request_wait(lo, 0, HZ / 5) < 0) {
 			struct drm_printer p = drm_info_printer(i915->drm.dev);
 
 			pr_err("%s: Failed to preempt semaphore busywait!\n",
@@ -739,7 +739,6 @@ static int live_suppress_wait_preempt(void *arg)
 			GEM_BUG_ON(!i915_request_started(rq[0]));
 
 			if (i915_request_wait(rq[depth],
-					      I915_WAIT_LOCKED |
 					      I915_WAIT_PRIORITY,
 					      1) != -ETIME) {
 				pr_err("%s: Waiter depth:%d completed!\n",
@@ -841,7 +840,7 @@ static int live_chain_preempt(void *arg)
 			 __func__, engine->name, ring_size);
 
 		igt_spinner_end(&lo.spin);
-		if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 2) < 0) {
+		if (i915_request_wait(rq, 0, HZ / 2) < 0) {
 			pr_err("Timed out waiting to flush %s\n", engine->name);
 			goto err_wedged;
 		}
@@ -882,7 +881,7 @@ static int live_chain_preempt(void *arg)
 			engine->schedule(rq, &attr);
 
 			igt_spinner_end(&hi.spin);
-			if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
+			if (i915_request_wait(rq, 0, HZ / 5) < 0) {
 				struct drm_printer p =
 					drm_info_printer(i915->drm.dev);
 
@@ -898,7 +897,7 @@ static int live_chain_preempt(void *arg)
 			if (IS_ERR(rq))
 				goto err_wedged;
 			i915_request_add(rq);
-			if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
+			if (i915_request_wait(rq, 0, HZ / 5) < 0) {
 				struct drm_printer p =
 					drm_info_printer(i915->drm.dev);
 
@@ -1396,9 +1395,7 @@ static int nop_virtual_engine(struct drm_i915_private *i915,
 		}
 
 		for (nc = 0; nc < nctx; nc++) {
-			if (i915_request_wait(request[nc],
-					      I915_WAIT_LOCKED,
-					      HZ / 10) < 0) {
+			if (i915_request_wait(request[nc], 0, HZ / 10) < 0) {
 				pr_err("%s(%s): wait for %llx:%lld timed out\n",
 				       __func__, ve[0]->engine->name,
 				       request[nc]->fence.context,
@@ -1545,7 +1542,7 @@ static int mask_virtual_engine(struct drm_i915_private *i915,
 	}
 
 	for (n = 0; n < nsibling; n++) {
-		if (i915_request_wait(request[n], I915_WAIT_LOCKED, HZ / 10) < 0) {
+		if (i915_request_wait(request[n], 0, HZ / 10) < 0) {
 			pr_err("%s(%s): wait for %llx:%lld timed out\n",
 			       __func__, ve->engine->name,
 			       request[n]->fence.context,
@@ -1720,9 +1717,7 @@ static int bond_virtual_engine(struct drm_i915_private *i915,
 		}
 		onstack_fence_fini(&fence);
 
-		if (i915_request_wait(rq[0],
-				      I915_WAIT_LOCKED,
-				      HZ / 10) < 0) {
+		if (i915_request_wait(rq[0], 0, HZ / 10) < 0) {
 			pr_err("Master request did not execute (on %s)!\n",
 			       rq[0]->engine->name);
 			err = -EIO;
@@ -1730,8 +1725,7 @@ static int bond_virtual_engine(struct drm_i915_private *i915,
 		}
 
 		for (n = 0; n < nsibling; n++) {
-			if (i915_request_wait(rq[n + 1],
-					      I915_WAIT_LOCKED,
+			if (i915_request_wait(rq[n + 1], 0,
 					      MAX_SCHEDULE_TIMEOUT) < 0) {
 				err = -EIO;
 				goto out;
diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
index 91449d5157f6..87426f9183cd 100644
--- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
@@ -541,7 +541,7 @@ static int check_dirty_whitelist(struct i915_gem_context *ctx,
 		if (err)
 			goto out_batch;
 
-		if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
+		if (i915_request_wait(rq, 0, HZ / 5) < 0) {
 			pr_err("%s: Futzing %x timedout; cancelling test\n",
 			       engine->name, reg);
 			i915_gem_set_wedged(ctx->i915);
@@ -739,7 +739,7 @@ static int read_whitelisted_registers(struct i915_gem_context *ctx,
 err_req:
 	i915_request_add(rq);
 
-	if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0)
+	if (i915_request_wait(rq, 0, HZ / 5) < 0)
 		err = -EIO;
 
 	return err;
@@ -790,7 +790,7 @@ static int scrub_whitelisted_registers(struct i915_gem_context *ctx,
 
 err_request:
 	i915_request_add(rq);
-	if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0)
+	if (i915_request_wait(rq, 0, HZ / 5) < 0)
 		err = -EIO;
 
 err_unpin:
diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
index d55d37673944..c14eebf6d074 100644
--- a/drivers/gpu/drm/i915/i915_active.h
+++ b/drivers/gpu/drm/i915/i915_active.h
@@ -330,7 +330,7 @@ i915_active_request_retire(struct i915_active_request *active,
 		return 0;
 
 	ret = i915_request_wait(request,
-				I915_WAIT_INTERRUPTIBLE | I915_WAIT_LOCKED,
+				I915_WAIT_INTERRUPTIBLE,
 				MAX_SCHEDULE_TIMEOUT);
 	if (ret < 0)
 		return ret;
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 9819483d1b5d..d7fd77e8a789 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1360,10 +1360,6 @@ static void request_wait_wake(struct dma_fence *fence, struct dma_fence_cb *cb)
  * maximum of @timeout jiffies (with MAX_SCHEDULE_TIMEOUT implying an
  * unbounded wait).
  *
- * If the caller holds the struct_mutex, the caller must pass I915_WAIT_LOCKED
- * in via the flags, and vice versa if the struct_mutex is not held, the caller
- * must not specify that the wait is locked.
- *
  * Returns the remaining time (in jiffies) if the request completed, which may
  * be zero or -ETIME if the request is unfinished after the timeout expires.
  * May return -EINTR is called with I915_WAIT_INTERRUPTIBLE and a signal is
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 5c8cfaa70d72..f4ce643b3bc3 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -863,10 +863,9 @@ TRACE_EVENT(i915_request_wait_begin,
 			   __entry->flags = flags;
 			   ),
 
-	    TP_printk("dev=%u, engine=%u:%u, hw_id=%u, ctx=%llu, seqno=%u, blocking=%u, flags=0x%x",
+	    TP_printk("dev=%u, engine=%u:%u, hw_id=%u, ctx=%llu, seqno=%u, flags=0x%x",
 		      __entry->dev, __entry->class, __entry->instance,
 		      __entry->hw_id, __entry->ctx, __entry->seqno,
-		      !!(__entry->flags & I915_WAIT_LOCKED),
 		      __entry->flags)
 );
 
diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
index 3de24f3d4ed5..298bb7116c51 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -74,12 +74,12 @@ static int igt_wait_request(void *arg)
 		goto out_unlock;
 	}
 
-	if (i915_request_wait(request, I915_WAIT_LOCKED, 0) != -ETIME) {
+	if (i915_request_wait(request, 0, 0) != -ETIME) {
 		pr_err("request wait (busy query) succeeded (expected timeout before submit!)\n");
 		goto out_unlock;
 	}
 
-	if (i915_request_wait(request, I915_WAIT_LOCKED, T) != -ETIME) {
+	if (i915_request_wait(request, 0, T) != -ETIME) {
 		pr_err("request wait succeeded (expected timeout before submit!)\n");
 		goto out_unlock;
 	}
@@ -91,7 +91,7 @@ static int igt_wait_request(void *arg)
 
 	i915_request_add(request);
 
-	if (i915_request_wait(request, I915_WAIT_LOCKED, 0) != -ETIME) {
+	if (i915_request_wait(request, 0, 0) != -ETIME) {
 		pr_err("request wait (busy query) succeeded (expected timeout after submit!)\n");
 		goto out_unlock;
 	}
@@ -101,12 +101,12 @@ static int igt_wait_request(void *arg)
 		goto out_unlock;
 	}
 
-	if (i915_request_wait(request, I915_WAIT_LOCKED, T / 2) != -ETIME) {
+	if (i915_request_wait(request, 0, T / 2) != -ETIME) {
 		pr_err("request wait succeeded (expected timeout!)\n");
 		goto out_unlock;
 	}
 
-	if (i915_request_wait(request, I915_WAIT_LOCKED, T) == -ETIME) {
+	if (i915_request_wait(request, 0, T) == -ETIME) {
 		pr_err("request wait timed out!\n");
 		goto out_unlock;
 	}
@@ -116,7 +116,7 @@ static int igt_wait_request(void *arg)
 		goto out_unlock;
 	}
 
-	if (i915_request_wait(request, I915_WAIT_LOCKED, T) == -ETIME) {
+	if (i915_request_wait(request, 0, T) == -ETIME) {
 		pr_err("request wait timed out when already complete!\n");
 		goto out_unlock;
 	}
@@ -574,9 +574,7 @@ static int live_nop_request(void *arg)
 
 				i915_request_add(request);
 			}
-			i915_request_wait(request,
-					  I915_WAIT_LOCKED,
-					  MAX_SCHEDULE_TIMEOUT);
+			i915_request_wait(request, 0, MAX_SCHEDULE_TIMEOUT);
 
 			times[1] = ktime_sub(ktime_get_raw(), times[1]);
 			if (prime == 1)
@@ -706,9 +704,7 @@ static int live_empty_request(void *arg)
 			err = PTR_ERR(request);
 			goto out_batch;
 		}
-		i915_request_wait(request,
-				  I915_WAIT_LOCKED,
-				  MAX_SCHEDULE_TIMEOUT);
+		i915_request_wait(request, 0, MAX_SCHEDULE_TIMEOUT);
 
 		for_each_prime_number_from(prime, 1, 8192) {
 			times[1] = ktime_get_raw();
@@ -720,9 +716,7 @@ static int live_empty_request(void *arg)
 					goto out_batch;
 				}
 			}
-			i915_request_wait(request,
-					  I915_WAIT_LOCKED,
-					  MAX_SCHEDULE_TIMEOUT);
+			i915_request_wait(request, 0, MAX_SCHEDULE_TIMEOUT);
 
 			times[1] = ktime_sub(ktime_get_raw(), times[1]);
 			if (prime == 1)
@@ -895,8 +889,7 @@ static int live_all_engines(void *arg)
 	for_each_engine(engine, i915, id) {
 		long timeout;
 
-		timeout = i915_request_wait(request[id],
-					    I915_WAIT_LOCKED,
+		timeout = i915_request_wait(request[id], 0,
 					    MAX_SCHEDULE_TIMEOUT);
 		if (timeout < 0) {
 			err = timeout;
@@ -1013,8 +1006,7 @@ static int live_sequential_engines(void *arg)
 			goto out_request;
 		}
 
-		timeout = i915_request_wait(request[id],
-					    I915_WAIT_LOCKED,
+		timeout = i915_request_wait(request[id], 0,
 					    MAX_SCHEDULE_TIMEOUT);
 		if (timeout < 0) {
 			err = timeout;
diff --git a/drivers/gpu/drm/i915/selftests/i915_timeline.c b/drivers/gpu/drm/i915/selftests/i915_timeline.c
index 724bf3650b3e..76d3977f1d4b 100644
--- a/drivers/gpu/drm/i915/selftests/i915_timeline.c
+++ b/drivers/gpu/drm/i915/selftests/i915_timeline.c
@@ -724,7 +724,7 @@ static int live_hwsp_wrap(void *arg)
 
 		i915_request_add(rq);
 
-		if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
+		if (i915_request_wait(rq, 0, HZ / 5) < 0) {
 			pr_err("Wait for timeline writes timed out!\n");
 			err = -EIO;
 			goto out;
@@ -797,9 +797,7 @@ static int live_hwsp_recycle(void *arg)
 				goto out;
 			}
 
-			if (i915_request_wait(rq,
-					      I915_WAIT_LOCKED,
-					      HZ / 5) < 0) {
+			if (i915_request_wait(rq, 0, HZ / 5) < 0) {
 				pr_err("Wait for timeline writes timed out!\n");
 				i915_timeline_put(tl);
 				err = -EIO;
-- 
2.20.1


* [PATCH 04/26] drm/i915: Flush the execution-callbacks on retiring
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
  2019-06-18  7:41 ` [PATCH 02/26] drm/i915: Skip shrinking already freed pages Chris Wilson
  2019-06-18  7:41 ` [PATCH 03/26] drm/i915: Stop passing I915_WAIT_LOCKED to i915_request_wait() Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-19 13:12   ` Mika Kuoppala
  2019-06-18  7:41 ` [PATCH 05/26] drm/i915/execlists: Preempt-to-busy Chris Wilson
                   ` (26 subsequent siblings)
  29 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

In the unlikely case the request completes while we regard it as not even
executing on the GPU (see the next patch!), we have to flush any pending
execution callbacks at retirement and ensure that we do not add any
more.
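
A rough sketch of the idea with toy types (not the driver's own
i915_request): if the request retires without ever having been marked
as executing, drain the execute callbacks then and forbid further
additions:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct callback {
	void (*fn)(void);
	struct callback *next;
};

struct request {
	bool active;			/* set once execution is observed */
	struct callback *execute_cb;	/* callbacks to run on execution */
};

/* Run and drain every pending execute callback. */
static void notify_execute_cb(struct request *rq)
{
	while (rq->execute_cb) {
		struct callback *cb = rq->execute_cb;

		rq->execute_cb = cb->next;
		cb->fn();
	}
}

static void retire(struct request *rq)
{
	if (!rq->active) {
		/* Completed without ever being marked as executing. */
		rq->active = true;	/* block any further additions */
		notify_execute_cb(rq);
	}
	assert(!rq->execute_cb);	/* nothing may be left dangling */
}

static void hello(void) { printf("execute callback ran\n"); }

int main(void)
{
	struct callback cb = { .fn = hello };
	struct request rq = { .execute_cb = &cb };

	retire(&rq);
	return 0;
}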

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_request.c | 93 +++++++++++++++--------------
 1 file changed, 49 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index d7fd77e8a789..51b068a57193 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -119,6 +119,50 @@ const struct dma_fence_ops i915_fence_ops = {
 	.release = i915_fence_release,
 };
 
+static void irq_execute_cb(struct irq_work *wrk)
+{
+	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
+
+	i915_sw_fence_complete(cb->fence);
+	kmem_cache_free(global.slab_execute_cbs, cb);
+}
+
+static void irq_execute_cb_hook(struct irq_work *wrk)
+{
+	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
+
+	cb->hook(container_of(cb->fence, struct i915_request, submit),
+		 &cb->signal->fence);
+	i915_request_put(cb->signal);
+
+	irq_execute_cb(wrk);
+}
+
+static void __notify_execute_cb(struct i915_request *rq)
+{
+	struct execute_cb *cb;
+
+	lockdep_assert_held(&rq->lock);
+
+	if (list_empty(&rq->execute_cb))
+		return;
+
+	list_for_each_entry(cb, &rq->execute_cb, link)
+		irq_work_queue(&cb->work);
+
+	/*
+	 * XXX Rollback on __i915_request_unsubmit()
+	 *
+	 * In the future, perhaps when we have an active time-slicing scheduler,
+	 * it will be interesting to unsubmit parallel execution and remove
+	 * busywaits from the GPU until their master is restarted. This is
+	 * quite hairy, we have to carefully rollback the fence and do a
+	 * preempt-to-idle cycle on the target engine, all the while the
+	 * master execute_cb may refire.
+	 */
+	INIT_LIST_HEAD(&rq->execute_cb);
+}
+
 static inline void
 i915_request_remove_from_client(struct i915_request *request)
 {
@@ -246,6 +290,11 @@ static bool i915_request_retire(struct i915_request *rq)
 		GEM_BUG_ON(!atomic_read(&rq->i915->gt_pm.rps.num_waiters));
 		atomic_dec(&rq->i915->gt_pm.rps.num_waiters);
 	}
+	if (!test_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags)) {
+		set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags);
+		__notify_execute_cb(rq);
+	}
+	GEM_BUG_ON(!list_empty(&rq->execute_cb));
 	spin_unlock(&rq->lock);
 
 	local_irq_enable();
@@ -285,50 +334,6 @@ void i915_request_retire_upto(struct i915_request *rq)
 	} while (i915_request_retire(tmp) && tmp != rq);
 }
 
-static void irq_execute_cb(struct irq_work *wrk)
-{
-	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
-
-	i915_sw_fence_complete(cb->fence);
-	kmem_cache_free(global.slab_execute_cbs, cb);
-}
-
-static void irq_execute_cb_hook(struct irq_work *wrk)
-{
-	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
-
-	cb->hook(container_of(cb->fence, struct i915_request, submit),
-		 &cb->signal->fence);
-	i915_request_put(cb->signal);
-
-	irq_execute_cb(wrk);
-}
-
-static void __notify_execute_cb(struct i915_request *rq)
-{
-	struct execute_cb *cb;
-
-	lockdep_assert_held(&rq->lock);
-
-	if (list_empty(&rq->execute_cb))
-		return;
-
-	list_for_each_entry(cb, &rq->execute_cb, link)
-		irq_work_queue(&cb->work);
-
-	/*
-	 * XXX Rollback on __i915_request_unsubmit()
-	 *
-	 * In the future, perhaps when we have an active time-slicing scheduler,
-	 * it will be interesting to unsubmit parallel execution and remove
-	 * busywaits from the GPU until their master is restarted. This is
-	 * quite hairy, we have to carefully rollback the fence and do a
-	 * preempt-to-idle cycle on the target engine, all the while the
-	 * master execute_cb may refire.
-	 */
-	INIT_LIST_HEAD(&rq->execute_cb);
-}
-
 static int
 __i915_request_await_execution(struct i915_request *rq,
 			       struct i915_request *signal,
-- 
2.20.1


* [PATCH 05/26] drm/i915/execlists: Preempt-to-busy
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (2 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 04/26] drm/i915: Flush the execution-callbacks on retiring Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 06/26] drm/i915/execlists: Minimalistic timeslicing Chris Wilson
                   ` (25 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

When using a global seqno, we required a precise stop-the-world event to
handle preemption and unwind the global seqno counter. To accomplish
this, we would preempt to a special out-of-band context and wait for the
machine to report that it was idle. Given an idle machine, we could very
precisely see which requests had completed and which we needed to feed
back into the run queue.

However, now that we have scrapped the global seqno, we no longer need
to precisely unwind the global counter and only track requests by their
per-context seqno. This allows us to loosely unwind inflight requests
while scheduling a preemption, with the enormous caveat that the
requests we put back on the run queue are still _inflight_ (until the
preemption request is complete). This makes request tracking much more
messy, as at any point we can see a completed request that we
believe is not currently scheduled for execution. We also have to be
careful not to rewind RING_TAIL past RING_HEAD on preempting to the
running context, and for this we use a semaphore to prevent completion
of the request before continuing.

To accomplish this feat, we change how we track requests scheduled to
the HW. Instead of appending our requests onto a single list as we
submit, we track each submission to ELSP as its own block. Then upon
receiving the CS preemption event, we promote the pending block to the
inflight block (discarding what was previously being tracked). As normal
CS completion events arrive, we then remove stale entries from the
inflight tracker.
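
A rough userspace sketch of that pending/inflight promotion, with plain
arrays of request pointers standing in for the execlists state (the
names are illustrative, not the driver's):

#include <stdio.h>
#include <string.h>

#define MAX_PORTS 2

struct request { int seqno; };

struct execlists {
	/* NULL-terminated blocks, hence the extra sentinel slot. */
	struct request *inflight[MAX_PORTS + 1];
	struct request *pending[MAX_PORTS + 1];
	struct request **active;	/* cursor into inflight[] */
};

/* Write a new block of requests to the (pretend) ELSP. */
static void submit(struct execlists *el, struct request **rqs, int count)
{
	memset(el->pending, 0, sizeof(el->pending));
	memcpy(el->pending, rqs, count * sizeof(*rqs));
}

/* On the CS event acking the submission, promote pending to inflight. */
static void promote(struct execlists *el)
{
	memcpy(el->inflight, el->pending, sizeof(el->inflight));
	memset(el->pending, 0, sizeof(el->pending));
	el->active = el->inflight;
}

/* As completion events arrive, step past the finished request. */
static void complete(struct execlists *el)
{
	if (*el->active)
		el->active++;
}

int main(void)
{
	struct execlists el = { 0 };
	struct request a = { .seqno = 1 }, b = { .seqno = 2 };
	struct request *block[] = { &a, &b };

	submit(&el, block, 2);
	promote(&el);
	printf("active: seqno %d\n", (*el.active)->seqno);
	complete(&el);
	printf("active: seqno %d\n", (*el.active)->seqno);
	return 0;
}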

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |   2 +-
 drivers/gpu/drm/i915/gt/intel_context_types.h |   5 +
 drivers/gpu/drm/i915/gt/intel_engine.h        |  61 +-
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  61 +-
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  52 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 671 ++++++++----------
 drivers/gpu/drm/i915/i915_gpu_error.c         |  19 +-
 drivers/gpu/drm/i915/i915_request.c           |   6 +
 drivers/gpu/drm/i915/i915_request.h           |   1 +
 drivers/gpu/drm/i915/i915_scheduler.c         |   3 +-
 drivers/gpu/drm/i915/i915_utils.h             |  12 +
 drivers/gpu/drm/i915/intel_guc_submission.c   | 175 ++---
 drivers/gpu/drm/i915/selftests/i915_request.c |   8 +-
 13 files changed, 465 insertions(+), 611 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 0f2c22a3bcb6..35871c8a42a6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -646,7 +646,7 @@ static void init_contexts(struct drm_i915_private *i915)
 
 static bool needs_preempt_context(struct drm_i915_private *i915)
 {
-	return HAS_EXECLISTS(i915);
+	return USES_GUC_SUBMISSION(i915);
 }
 
 int i915_gem_contexts_init(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index e95be4be9612..b565c3ff4378 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -13,6 +13,7 @@
 #include <linux/types.h>
 
 #include "i915_active_types.h"
+#include "i915_utils.h"
 #include "intel_engine_types.h"
 #include "intel_sseu.h"
 
@@ -38,6 +39,10 @@ struct intel_context {
 	struct i915_gem_context *gem_context;
 	struct intel_engine_cs *engine;
 	struct intel_engine_cs *inflight;
+#define intel_context_inflight(ce) ptr_mask_bits((ce)->inflight, 2)
+#define intel_context_inflight_count(ce)  ptr_unmask_bits((ce)->inflight, 2)
+#define intel_context_inflight_inc(ce) ptr_count_inc(&(ce)->inflight)
+#define intel_context_inflight_dec(ce) ptr_count_dec(&(ce)->inflight)
 
 	struct list_head signal_link;
 	struct list_head signals;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index 2f1c6871ee95..9bb6ff76680e 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -125,71 +125,26 @@ hangcheck_action_to_str(const enum intel_engine_hangcheck_action a)
 
 void intel_engines_set_scheduler_caps(struct drm_i915_private *i915);
 
-static inline void
-execlists_set_active(struct intel_engine_execlists *execlists,
-		     unsigned int bit)
-{
-	__set_bit(bit, (unsigned long *)&execlists->active);
-}
-
-static inline bool
-execlists_set_active_once(struct intel_engine_execlists *execlists,
-			  unsigned int bit)
-{
-	return !__test_and_set_bit(bit, (unsigned long *)&execlists->active);
-}
-
-static inline void
-execlists_clear_active(struct intel_engine_execlists *execlists,
-		       unsigned int bit)
-{
-	__clear_bit(bit, (unsigned long *)&execlists->active);
-}
-
-static inline void
-execlists_clear_all_active(struct intel_engine_execlists *execlists)
+static inline unsigned int
+execlists_num_ports(const struct intel_engine_execlists * const execlists)
 {
-	execlists->active = 0;
+	return execlists->port_mask + 1;
 }
 
-static inline bool
-execlists_is_active(const struct intel_engine_execlists *execlists,
-		    unsigned int bit)
+static inline struct i915_request *
+execlists_active(const struct intel_engine_execlists *execlists)
 {
-	return test_bit(bit, (unsigned long *)&execlists->active);
+	GEM_BUG_ON(execlists->active - execlists->inflight >
+		   execlists_num_ports(execlists));
+	return READ_ONCE(*execlists->active);
 }
 
-void execlists_user_begin(struct intel_engine_execlists *execlists,
-			  const struct execlist_port *port);
-void execlists_user_end(struct intel_engine_execlists *execlists);
-
 void
 execlists_cancel_port_requests(struct intel_engine_execlists * const execlists);
 
 struct i915_request *
 execlists_unwind_incomplete_requests(struct intel_engine_execlists *execlists);
 
-static inline unsigned int
-execlists_num_ports(const struct intel_engine_execlists * const execlists)
-{
-	return execlists->port_mask + 1;
-}
-
-static inline struct execlist_port *
-execlists_port_complete(struct intel_engine_execlists * const execlists,
-			struct execlist_port * const port)
-{
-	const unsigned int m = execlists->port_mask;
-
-	GEM_BUG_ON(port_index(port, execlists) != 0);
-	GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_USER));
-
-	memmove(port, port + 1, m * sizeof(struct execlist_port));
-	memset(port + m, 0, sizeof(struct execlist_port));
-
-	return port;
-}
-
 static inline u32
 intel_read_status_page(const struct intel_engine_cs *engine, int reg)
 {
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 898692989313..65cf08a8b209 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -508,6 +508,10 @@ void intel_engine_init_execlists(struct intel_engine_cs *engine)
 	GEM_BUG_ON(!is_power_of_2(execlists_num_ports(execlists)));
 	GEM_BUG_ON(execlists_num_ports(execlists) > EXECLIST_MAX_PORTS);
 
+	memset(execlists->pending, 0, sizeof(execlists->pending));
+	execlists->active =
+		memset(execlists->inflight, 0, sizeof(execlists->inflight));
+
 	execlists->queue_priority_hint = INT_MIN;
 	execlists->queue = RB_ROOT_CACHED;
 }
@@ -1152,7 +1156,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
 		return true;
 
 	/* Waiting to drain ELSP? */
-	if (READ_ONCE(engine->execlists.active)) {
+	if (execlists_active(&engine->execlists)) {
 		struct tasklet_struct *t = &engine->execlists.tasklet;
 
 		synchronize_hardirq(engine->i915->drm.irq);
@@ -1169,7 +1173,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
 		/* Otherwise flush the tasklet if it was on another cpu */
 		tasklet_unlock_wait(t);
 
-		if (READ_ONCE(engine->execlists.active))
+		if (execlists_active(&engine->execlists))
 			return false;
 	}
 
@@ -1366,6 +1370,7 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
 	}
 
 	if (HAS_EXECLISTS(dev_priv)) {
+		struct i915_request * const *port, *rq;
 		const u32 *hws =
 			&engine->status_page.addr[I915_HWS_CSB_BUF0_INDEX];
 		const u8 num_entries = execlists->csb_size;
@@ -1398,26 +1403,28 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
 		}
 
 		rcu_read_lock();
-		for (idx = 0; idx < execlists_num_ports(execlists); idx++) {
-			struct i915_request *rq;
-			unsigned int count;
-
-			rq = port_unpack(&execlists->port[idx], &count);
-			if (rq) {
-				char hdr[80];
-
-				snprintf(hdr, sizeof(hdr),
-					 "\t\tELSP[%d] count=%d, ring:{start:%08x, hwsp:%08x, seqno:%08x}, rq: ",
-					 idx, count,
-					 i915_ggtt_offset(rq->ring->vma),
-					 rq->timeline->hwsp_offset,
-					 hwsp_seqno(rq));
-				print_request(m, rq, hdr);
-			} else {
-				drm_printf(m, "\t\tELSP[%d] idle\n", idx);
-			}
+		for (port = execlists->active; (rq = *port); port++) {
+			char hdr[80];
+
+			snprintf(hdr, sizeof(hdr),
+				 "\t\tActive[%d] ring:{start:%08x, hwsp:%08x, seqno:%08x}, rq: ",
+				 (int)(port - execlists->active),
+				 i915_ggtt_offset(rq->ring->vma),
+				 rq->timeline->hwsp_offset,
+				 hwsp_seqno(rq));
+			print_request(m, rq, hdr);
+		}
+		for (port = execlists->pending; (rq = *port); port++) {
+			char hdr[80];
+
+			snprintf(hdr, sizeof(hdr),
+				 "\t\tPending[%d] ring:{start:%08x, hwsp:%08x, seqno:%08x}, rq: ",
+				 (int)(port - execlists->pending),
+				 i915_ggtt_offset(rq->ring->vma),
+				 rq->timeline->hwsp_offset,
+				 hwsp_seqno(rq));
+			print_request(m, rq, hdr);
 		}
-		drm_printf(m, "\t\tHW active? 0x%x\n", execlists->active);
 		rcu_read_unlock();
 	} else if (INTEL_GEN(dev_priv) > 6) {
 		drm_printf(m, "\tPP_DIR_BASE: 0x%08x\n",
@@ -1581,15 +1588,19 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)
 	}
 
 	if (engine->stats.enabled++ == 0) {
-		const struct execlist_port *port = execlists->port;
-		unsigned int num_ports = execlists_num_ports(execlists);
+		struct i915_request * const *port;
+		struct i915_request *rq;
 
 		engine->stats.enabled_at = ktime_get();
 
 		/* XXX submission method oblivious? */
-		while (num_ports-- && port_isset(port)) {
+		for (port = execlists->active; (rq = *port); port++)
 			engine->stats.active++;
-			port++;
+
+		for (port = execlists->pending; (rq = *port); port++) {
+			/* Exclude any contexts already counted in active */
+			if (intel_context_inflight_count(rq->hw_context) == 1)
+				engine->stats.active++;
 		}
 
 		if (engine->stats.active)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index b2faca8e5dec..dd0082df42cc 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -160,51 +160,10 @@ struct intel_engine_execlists {
 	 */
 	u32 __iomem *ctrl_reg;
 
-	/**
-	 * @port: execlist port states
-	 *
-	 * For each hardware ELSP (ExecList Submission Port) we keep
-	 * track of the last request and the number of times we submitted
-	 * that port to hw. We then count the number of times the hw reports
-	 * a context completion or preemption. As only one context can
-	 * be active on hw, we limit resubmission of context to port[0]. This
-	 * is called Lite Restore, of the context.
-	 */
-	struct execlist_port {
-		/**
-		 * @request_count: combined request and submission count
-		 */
-		struct i915_request *request_count;
-#define EXECLIST_COUNT_BITS 2
-#define port_request(p) ptr_mask_bits((p)->request_count, EXECLIST_COUNT_BITS)
-#define port_count(p) ptr_unmask_bits((p)->request_count, EXECLIST_COUNT_BITS)
-#define port_pack(rq, count) ptr_pack_bits(rq, count, EXECLIST_COUNT_BITS)
-#define port_unpack(p, count) ptr_unpack_bits((p)->request_count, count, EXECLIST_COUNT_BITS)
-#define port_set(p, packed) ((p)->request_count = (packed))
-#define port_isset(p) ((p)->request_count)
-#define port_index(p, execlists) ((p) - (execlists)->port)
-
-		/**
-		 * @context_id: context ID for port
-		 */
-		GEM_DEBUG_DECL(u32 context_id);
-
 #define EXECLIST_MAX_PORTS 2
-	} port[EXECLIST_MAX_PORTS];
-
-	/**
-	 * @active: is the HW active? We consider the HW as active after
-	 * submitting any context for execution and until we have seen the
-	 * last context completion event. After that, we do not expect any
-	 * more events until we submit, and so can park the HW.
-	 *
-	 * As we have a small number of different sources from which we feed
-	 * the HW, we track the state of each inside a single bitfield.
-	 */
-	unsigned int active;
-#define EXECLISTS_ACTIVE_USER 0
-#define EXECLISTS_ACTIVE_PREEMPT 1
-#define EXECLISTS_ACTIVE_HWACK 2
+	struct i915_request * const *active;
+	struct i915_request *inflight[EXECLIST_MAX_PORTS + 1 /* sentinel */];
+	struct i915_request *pending[EXECLIST_MAX_PORTS + 1];
 
 	/**
 	 * @port_mask: number of execlist ports - 1
@@ -245,11 +204,6 @@ struct intel_engine_execlists {
 	 */
 	u32 *csb_status;
 
-	/**
-	 * @preempt_complete_status: expected CSB upon completing preemption
-	 */
-	u32 preempt_complete_status;
-
 	/**
 	 * @csb_size: context status buffer FIFO size
 	 */
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index c400c66d0ee5..65c91b7db59d 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -161,6 +161,8 @@
 #define GEN8_CTX_STATUS_COMPLETED_MASK \
 	 (GEN8_CTX_STATUS_COMPLETE | GEN8_CTX_STATUS_PREEMPTED)
 
+#define CTX_DESC_FORCE_RESTORE BIT_ULL(2)
+
 /* Typical size of the average request (2 pipecontrols and a MI_BB) */
 #define EXECLISTS_REQUEST_SIZE 64 /* bytes */
 #define WA_TAIL_DWORDS 2
@@ -221,6 +223,14 @@ static void execlists_init_reg_state(u32 *reg_state,
 				     struct intel_engine_cs *engine,
 				     struct intel_ring *ring);
 
+static inline u32 intel_hws_preempt_address(struct intel_engine_cs *engine)
+{
+	return (i915_ggtt_offset(engine->status_page.vma) +
+		I915_GEM_HWS_PREEMPT_ADDR);
+}
+
+#define ring_pause(E) ((E)->status_page.addr[I915_GEM_HWS_PREEMPT])
+
 static inline struct i915_priolist *to_priolist(struct rb_node *rb)
 {
 	return rb_entry(rb, struct i915_priolist, node);
@@ -271,12 +281,6 @@ static inline bool need_preempt(const struct intel_engine_cs *engine,
 {
 	int last_prio;
 
-	if (!engine->preempt_context)
-		return false;
-
-	if (i915_request_completed(rq))
-		return false;
-
 	/*
 	 * Check if the current priority hint merits a preemption attempt.
 	 *
@@ -338,9 +342,6 @@ __maybe_unused static inline bool
 assert_priority_queue(const struct i915_request *prev,
 		      const struct i915_request *next)
 {
-	const struct intel_engine_execlists *execlists =
-		&prev->engine->execlists;
-
 	/*
 	 * Without preemption, the prev may refer to the still active element
 	 * which we refuse to let go.
@@ -348,7 +349,7 @@ assert_priority_queue(const struct i915_request *prev,
 	 * Even with preemption, there are times when we think it is better not
 	 * to preempt and leave an ostensibly lower priority request in flight.
 	 */
-	if (port_request(execlists->port) == prev)
+	if (i915_request_is_active(prev))
 		return true;
 
 	return rq_prio(prev) >= rq_prio(next);
@@ -442,13 +443,11 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
 		struct intel_engine_cs *owner;
 
 		if (i915_request_completed(rq))
-			break;
+			continue; /* XXX */
 
 		__i915_request_unsubmit(rq);
 		unwind_wa_tail(rq);
 
-		GEM_BUG_ON(rq->hw_context->inflight);
-
 		/*
 		 * Push the request back into the queue for later resubmission.
 		 * If this request is not native to this physical engine (i.e.
@@ -500,32 +499,32 @@ execlists_context_status_change(struct i915_request *rq, unsigned long status)
 				   status, rq);
 }
 
-inline void
-execlists_user_begin(struct intel_engine_execlists *execlists,
-		     const struct execlist_port *port)
+static inline struct i915_request *
+execlists_schedule_in(struct i915_request *rq, int idx)
 {
-	execlists_set_active_once(execlists, EXECLISTS_ACTIVE_USER);
-}
+	struct intel_context *ce = rq->hw_context;
+	int count;
 
-inline void
-execlists_user_end(struct intel_engine_execlists *execlists)
-{
-	execlists_clear_active(execlists, EXECLISTS_ACTIVE_USER);
-}
+	trace_i915_request_in(rq, idx);
 
-static inline void
-execlists_context_schedule_in(struct i915_request *rq)
-{
-	GEM_BUG_ON(rq->hw_context->inflight);
+	count = intel_context_inflight_count(ce);
+	if (!count) {
+		intel_context_get(ce);
+		ce->inflight = rq->engine;
+
+		execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
+		intel_engine_context_in(ce->inflight);
+	}
+
+	intel_context_inflight_inc(ce);
+	GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
 
-	execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN);
-	intel_engine_context_in(rq->engine);
-	rq->hw_context->inflight = rq->engine;
+	return i915_request_get(rq);
 }
 
-static void kick_siblings(struct i915_request *rq)
+static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 {
-	struct virtual_engine *ve = to_virtual_engine(rq->hw_context->engine);
+	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
 	struct i915_request *next = READ_ONCE(ve->request);
 
 	if (next && next->execution_mask & ~rq->execution_mask)
@@ -533,29 +532,42 @@ static void kick_siblings(struct i915_request *rq)
 }
 
 static inline void
-execlists_context_schedule_out(struct i915_request *rq, unsigned long status)
+execlists_schedule_out(struct i915_request *rq)
 {
-	rq->hw_context->inflight = NULL;
-	intel_engine_context_out(rq->engine);
-	execlists_context_status_change(rq, status);
+	struct intel_context *ce = rq->hw_context;
+
+	GEM_BUG_ON(!intel_context_inflight_count(ce));
+
 	trace_i915_request_out(rq);
 
-	/*
-	 * If this is part of a virtual engine, its next request may have
-	 * been blocked waiting for access to the active context. We have
-	 * to kick all the siblings again in case we need to switch (e.g.
-	 * the next request is not runnable on this engine). Hopefully,
-	 * we will already have submitted the next request before the
-	 * tasklet runs and do not need to rebuild each virtual tree
-	 * and kick everyone again.
-	 */
-	if (rq->engine != rq->hw_context->engine)
-		kick_siblings(rq);
+	intel_context_inflight_dec(ce);
+	if (!intel_context_inflight_count(ce)) {
+		intel_engine_context_out(ce->inflight);
+		execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
+
+		ce->inflight = NULL;
+		intel_context_put(ce);
+
+		/*
+		 * If this is part of a virtual engine, its next request may
+		 * have been blocked waiting for access to the active context.
+		 * We have to kick all the siblings again in case we need to
+		 * switch (e.g. the next request is not runnable on this
+		 * engine). Hopefully, we will already have submitted the next
+		 * request before the tasklet runs and do not need to rebuild
+		 * each virtual tree and kick everyone again.
+		 */
+		if (rq->engine != ce->engine)
+			kick_siblings(rq, ce);
+	}
+
+	i915_request_put(rq);
 }
 
-static u64 execlists_update_context(struct i915_request *rq)
+static u64 execlists_update_context(const struct i915_request *rq)
 {
 	struct intel_context *ce = rq->hw_context;
+	u64 desc;
 
 	ce->lrc_reg_state[CTX_RING_TAIL + 1] =
 		intel_ring_set_tail(rq->ring, rq->tail);
@@ -576,7 +588,11 @@ static u64 execlists_update_context(struct i915_request *rq)
 	 * wmb).
 	 */
 	mb();
-	return ce->lrc_desc;
+
+	desc = ce->lrc_desc;
+	ce->lrc_desc &= ~CTX_DESC_FORCE_RESTORE;
+
+	return desc;
 }
 
 static inline void write_desc(struct intel_engine_execlists *execlists, u64 desc, u32 port)
@@ -590,12 +606,54 @@ static inline void write_desc(struct intel_engine_execlists *execlists, u64 desc
 	}
 }
 
+static __maybe_unused void
+trace_ports(const struct intel_engine_execlists *execlists,
+	    const char *msg,
+	    struct i915_request * const *ports)
+{
+	const struct intel_engine_cs *engine =
+		container_of(execlists, typeof(*engine), execlists);
+
+	GEM_TRACE("%s: %s { %llx:%lld%s, %llx:%lld }\n",
+		  engine->name, msg,
+		  ports[0]->fence.context,
+		  ports[0]->fence.seqno,
+		  i915_request_completed(ports[0]) ? "!" :
+		  i915_request_started(ports[0]) ? "*" :
+		  "",
+		  ports[1] ? ports[1]->fence.context : 0,
+		  ports[1] ? ports[1]->fence.seqno : 0);
+}
+
+static __maybe_unused bool
+assert_pending_valid(const struct intel_engine_execlists *execlists,
+		     const char *msg)
+{
+	struct i915_request * const *port, *rq;
+	struct intel_context *ce = NULL;
+
+	trace_ports(execlists, msg, execlists->pending);
+
+	if (execlists->pending[execlists_num_ports(execlists)])
+		return false;
+
+	for (port = execlists->pending; (rq = *port); port++) {
+		if (ce == rq->hw_context)
+			return false;
+
+		ce = rq->hw_context;
+	}
+
+	return ce;
+}
+
 static void execlists_submit_ports(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists *execlists = &engine->execlists;
-	struct execlist_port *port = execlists->port;
 	unsigned int n;
 
+	GEM_BUG_ON(!assert_pending_valid(execlists, "submit"));
+
 	/*
 	 * We can skip acquiring intel_runtime_pm_get() here as it was taken
 	 * on our behalf by the request (see i915_gem_mark_busy()) and it will
@@ -613,38 +671,16 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
 	 * of elsq entries, keep this in mind before changing the loop below.
 	 */
 	for (n = execlists_num_ports(execlists); n--; ) {
-		struct i915_request *rq;
-		unsigned int count;
-		u64 desc;
+		struct i915_request *rq = execlists->pending[n];
 
-		rq = port_unpack(&port[n], &count);
-		if (rq) {
-			GEM_BUG_ON(count > !n);
-			if (!count++)
-				execlists_context_schedule_in(rq);
-			port_set(&port[n], port_pack(rq, count));
-			desc = execlists_update_context(rq);
-			GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
-
-			GEM_TRACE("%s in[%d]:  ctx=%d.%d, fence %llx:%lld (current %d), prio=%d\n",
-				  engine->name, n,
-				  port[n].context_id, count,
-				  rq->fence.context, rq->fence.seqno,
-				  hwsp_seqno(rq),
-				  rq_prio(rq));
-		} else {
-			GEM_BUG_ON(!n);
-			desc = 0;
-		}
-
-		write_desc(execlists, desc, n);
+		write_desc(execlists,
+			   rq ? execlists_update_context(rq) : 0,
+			   n);
 	}
 
 	/* we need to manually load the submit queue */
 	if (execlists->ctrl_reg)
 		writel(EL_CTRL_LOAD, execlists->ctrl_reg);
-
-	execlists_clear_active(execlists, EXECLISTS_ACTIVE_HWACK);
 }
 
 static bool ctx_single_port_submission(const struct intel_context *ce)
@@ -668,6 +704,7 @@ static bool can_merge_ctx(const struct intel_context *prev,
 static bool can_merge_rq(const struct i915_request *prev,
 			 const struct i915_request *next)
 {
+	GEM_BUG_ON(prev == next);
 	GEM_BUG_ON(!assert_priority_queue(prev, next));
 
 	if (!can_merge_ctx(prev->hw_context, next->hw_context))
@@ -676,58 +713,6 @@ static bool can_merge_rq(const struct i915_request *prev,
 	return true;
 }
 
-static void port_assign(struct execlist_port *port, struct i915_request *rq)
-{
-	GEM_BUG_ON(rq == port_request(port));
-
-	if (port_isset(port))
-		i915_request_put(port_request(port));
-
-	port_set(port, port_pack(i915_request_get(rq), port_count(port)));
-}
-
-static void inject_preempt_context(struct intel_engine_cs *engine)
-{
-	struct intel_engine_execlists *execlists = &engine->execlists;
-	struct intel_context *ce = engine->preempt_context;
-	unsigned int n;
-
-	GEM_BUG_ON(execlists->preempt_complete_status !=
-		   upper_32_bits(ce->lrc_desc));
-
-	/*
-	 * Switch to our empty preempt context so
-	 * the state of the GPU is known (idle).
-	 */
-	GEM_TRACE("%s\n", engine->name);
-	for (n = execlists_num_ports(execlists); --n; )
-		write_desc(execlists, 0, n);
-
-	write_desc(execlists, ce->lrc_desc, n);
-
-	/* we need to manually load the submit queue */
-	if (execlists->ctrl_reg)
-		writel(EL_CTRL_LOAD, execlists->ctrl_reg);
-
-	execlists_clear_active(execlists, EXECLISTS_ACTIVE_HWACK);
-	execlists_set_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
-
-	(void)I915_SELFTEST_ONLY(execlists->preempt_hang.count++);
-}
-
-static void complete_preempt_context(struct intel_engine_execlists *execlists)
-{
-	GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
-
-	if (inject_preempt_hang(execlists))
-		return;
-
-	execlists_cancel_port_requests(execlists);
-	__unwind_incomplete_requests(container_of(execlists,
-						  struct intel_engine_cs,
-						  execlists));
-}
-
 static void virtual_update_register_offsets(u32 *regs,
 					    struct intel_engine_cs *engine)
 {
@@ -792,7 +777,7 @@ static bool virtual_matches(const struct virtual_engine *ve,
 	 * we reuse the register offsets). This is a very small
 	 * hystersis on the greedy seelction algorithm.
 	 */
-	inflight = READ_ONCE(ve->context.inflight);
+	inflight = intel_context_inflight(&ve->context);
 	if (inflight && inflight != engine)
 		return false;
 
@@ -815,13 +800,23 @@ static void virtual_xfer_breadcrumbs(struct virtual_engine *ve,
 	spin_unlock(&old->breadcrumbs.irq_lock);
 }
 
+static struct i915_request *
+last_active(const struct intel_engine_execlists *execlists)
+{
+	struct i915_request * const *last = execlists->active;
+
+	while (*last && i915_request_completed(*last))
+		last++;
+
+	return *last;
+}
+
 static void execlists_dequeue(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct execlist_port *port = execlists->port;
-	const struct execlist_port * const last_port =
-		&execlists->port[execlists->port_mask];
-	struct i915_request *last = port_request(port);
+	struct i915_request **port = execlists->pending;
+	struct i915_request ** const last_port = port + execlists->port_mask;
+	struct i915_request *last;
 	struct rb_node *rb;
 	bool submit = false;
 
@@ -867,65 +862,72 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 		break;
 	}
 
+	/*
+	 * If the queue is higher priority than the last
+	 * request in the currently active context, submit afresh.
+	 * We will resubmit again afterwards in case we need to split
+	 * the active context to interject the preemption request,
+	 * i.e. we will retrigger preemption following the ack in case
+	 * of trouble.
+	 */
+	last = last_active(execlists);
 	if (last) {
-		/*
-		 * Don't resubmit or switch until all outstanding
-		 * preemptions (lite-restore) are seen. Then we
-		 * know the next preemption status we see corresponds
-		 * to this ELSP update.
-		 */
-		GEM_BUG_ON(!execlists_is_active(execlists,
-						EXECLISTS_ACTIVE_USER));
-		GEM_BUG_ON(!port_count(&port[0]));
-
-		/*
-		 * If we write to ELSP a second time before the HW has had
-		 * a chance to respond to the previous write, we can confuse
-		 * the HW and hit "undefined behaviour". After writing to ELSP,
-		 * we must then wait until we see a context-switch event from
-		 * the HW to indicate that it has had a chance to respond.
-		 */
-		if (!execlists_is_active(execlists, EXECLISTS_ACTIVE_HWACK))
-			return;
-
 		if (need_preempt(engine, last, rb)) {
-			inject_preempt_context(engine);
-			return;
-		}
+			GEM_TRACE("%s: preempting last=%llx:%lld, prio=%d, hint=%d\n",
+				  engine->name,
+				  last->fence.context,
+				  last->fence.seqno,
+				  last->sched.attr.priority,
+				  execlists->queue_priority_hint);
+			/*
+			 * Don't let the RING_HEAD advance past the breadcrumb
+			 * as we unwind (and until we resubmit) so that we do
+			 * not accidentally tell it to go backwards.
+			 */
+			ring_pause(engine) = 1;
 
-		/*
-		 * In theory, we could coalesce more requests onto
-		 * the second port (the first port is active, with
-		 * no preemptions pending). However, that means we
-		 * then have to deal with the possible lite-restore
-		 * of the second port (as we submit the ELSP, there
-		 * may be a context-switch) but also we may complete
-		 * the resubmission before the context-switch. Ergo,
-		 * coalescing onto the second port will cause a
-		 * preemption event, but we cannot predict whether
-		 * that will affect port[0] or port[1].
-		 *
-		 * If the second port is already active, we can wait
-		 * until the next context-switch before contemplating
-		 * new requests. The GPU will be busy and we should be
-		 * able to resubmit the new ELSP before it idles,
-		 * avoiding pipeline bubbles (momentary pauses where
-		 * the driver is unable to keep up the supply of new
-		 * work). However, we have to double check that the
-		 * priorities of the ports haven't been switch.
-		 */
-		if (port_count(&port[1]))
-			return;
+			/*
+			 * Note that we have not stopped the GPU at this point,
+			 * so we are unwinding the incomplete requests as they
+			 * remain inflight and so by the time we do complete
+			 * the preemption, some of the unwound requests may
+			 * complete!
+			 */
+			__unwind_incomplete_requests(engine);
 
-		/*
-		 * WaIdleLiteRestore:bdw,skl
-		 * Apply the wa NOOPs to prevent
-		 * ring:HEAD == rq:TAIL as we resubmit the
-		 * request. See gen8_emit_fini_breadcrumb() for
-		 * where we prepare the padding after the
-		 * end of the request.
-		 */
-		last->tail = last->wa_tail;
+			/*
+			 * If we need to return to the preempted context, we
+			 * need to skip the lite-restore and force it to
+			 * reload the RING_TAIL. Otherwise, the HW has a
+			 * tendency to ignore us rewinding the TAIL to the
+			 * end of an earlier request.
+			 */
+			last->hw_context->lrc_desc |= CTX_DESC_FORCE_RESTORE;
+			last = NULL;
+		} else {
+			/*
+			 * Otherwise if we already have a request pending
+			 * for execution after the current one, we can
+			 * just wait until the next CS event before
+			 * queuing more. In either case we will force a
+			 * lite-restore preemption event, but if we wait
+			 * we hopefully coalesce several updates into a single
+			 * submission.
+			 */
+			if (!list_is_last(&last->sched.link,
+					  &engine->active.requests))
+				return;
+
+			/*
+			 * WaIdleLiteRestore:bdw,skl
+			 * Apply the wa NOOPs to prevent
+			 * ring:HEAD == rq:TAIL as we resubmit the
+			 * request. See gen8_emit_fini_breadcrumb() for
+			 * where we prepare the padding after the
+			 * end of the request.
+			 */
+			last->tail = last->wa_tail;
+		}
 	}
 
 	while (rb) { /* XXX virtual is always taking precedence */
@@ -955,9 +957,24 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 				continue;
 			}
 
+			if (i915_request_completed(rq)) {
+				ve->request = NULL;
+				ve->base.execlists.queue_priority_hint = INT_MIN;
+				rb_erase_cached(rb, &execlists->virtual);
+				RB_CLEAR_NODE(rb);
+
+				rq->engine = engine;
+				__i915_request_submit(rq);
+
+				spin_unlock(&ve->base.active.lock);
+
+				rb = rb_first_cached(&execlists->virtual);
+				continue;
+			}
+
 			if (last && !can_merge_rq(last, rq)) {
 				spin_unlock(&ve->base.active.lock);
-				return; /* leave this rq for another engine */
+				return; /* leave this for another */
 			}
 
 			GEM_TRACE("%s: virtual rq=%llx:%lld%s, new engine? %s\n",
@@ -1006,9 +1023,10 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			}
 
 			__i915_request_submit(rq);
-			trace_i915_request_in(rq, port_index(port, execlists));
-			submit = true;
-			last = rq;
+			if (!i915_request_completed(rq)) {
+				submit = true;
+				last = rq;
+			}
 		}
 
 		spin_unlock(&ve->base.active.lock);
@@ -1021,6 +1039,9 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 		int i;
 
 		priolist_for_each_request_consume(rq, rn, p, i) {
+			if (i915_request_completed(rq))
+				goto skip;
+
 			/*
 			 * Can we combine this request with the current port?
 			 * It has to be the same context/ringbuffer and not
@@ -1060,19 +1081,14 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 				    ctx_single_port_submission(rq->hw_context))
 					goto done;
 
-
-				if (submit)
-					port_assign(port, last);
+				*port = execlists_schedule_in(last, port - execlists->pending);
 				port++;
-
-				GEM_BUG_ON(port_isset(port));
 			}
 
-			__i915_request_submit(rq);
-			trace_i915_request_in(rq, port_index(port, execlists));
-
 			last = rq;
 			submit = true;
+skip:
+			__i915_request_submit(rq);
 		}
 
 		rb_erase_cached(&p->node, &execlists->queue);
@@ -1097,54 +1113,30 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 	 * interrupt for secondary ports).
 	 */
 	execlists->queue_priority_hint = queue_prio(execlists);
+	GEM_TRACE("%s: queue_priority_hint:%d, submit:%s\n",
+		  engine->name, execlists->queue_priority_hint,
+		  yesno(submit));
 
 	if (submit) {
-		port_assign(port, last);
+		*port = execlists_schedule_in(last, port - execlists->pending);
+		memset(port + 1, 0, (last_port - port) * sizeof(*port));
 		execlists_submit_ports(engine);
 	}
-
-	/* We must always keep the beast fed if we have work piled up */
-	GEM_BUG_ON(rb_first_cached(&execlists->queue) &&
-		   !port_isset(execlists->port));
-
-	/* Re-evaluate the executing context setup after each preemptive kick */
-	if (last)
-		execlists_user_begin(execlists, execlists->port);
-
-	/* If the engine is now idle, so should be the flag; and vice versa. */
-	GEM_BUG_ON(execlists_is_active(&engine->execlists,
-				       EXECLISTS_ACTIVE_USER) ==
-		   !port_isset(engine->execlists.port));
 }
 
 void
 execlists_cancel_port_requests(struct intel_engine_execlists * const execlists)
 {
-	struct execlist_port *port = execlists->port;
-	unsigned int num_ports = execlists_num_ports(execlists);
-
-	while (num_ports-- && port_isset(port)) {
-		struct i915_request *rq = port_request(port);
-
-		GEM_TRACE("%s:port%u fence %llx:%lld, (current %d)\n",
-			  rq->engine->name,
-			  (unsigned int)(port - execlists->port),
-			  rq->fence.context, rq->fence.seqno,
-			  hwsp_seqno(rq));
+	struct i915_request * const *port, *rq;
 
-		GEM_BUG_ON(!execlists->active);
-		execlists_context_schedule_out(rq,
-					       i915_request_completed(rq) ?
-					       INTEL_CONTEXT_SCHEDULE_OUT :
-					       INTEL_CONTEXT_SCHEDULE_PREEMPTED);
+	for (port = execlists->pending; (rq = *port); port++)
+		execlists_schedule_out(rq);
+	memset(execlists->pending, 0, sizeof(execlists->pending));
 
-		i915_request_put(rq);
-
-		memset(port, 0, sizeof(*port));
-		port++;
-	}
-
-	execlists_clear_all_active(execlists);
+	for (port = execlists->active; (rq = *port); port++)
+		execlists_schedule_out(rq);
+	execlists->active =
+		memset(execlists->inflight, 0, sizeof(execlists->inflight));
 }
 
 static inline void
@@ -1163,7 +1155,6 @@ reset_in_progress(const struct intel_engine_execlists *execlists)
 static void process_csb(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct execlist_port *port = execlists->port;
 	const u32 * const buf = execlists->csb_status;
 	const u8 num_entries = execlists->csb_size;
 	u8 head, tail;
@@ -1197,9 +1188,7 @@ static void process_csb(struct intel_engine_cs *engine)
 	rmb();
 
 	do {
-		struct i915_request *rq;
 		unsigned int status;
-		unsigned int count;
 
 		if (++head == num_entries)
 			head = 0;
@@ -1222,68 +1211,37 @@ static void process_csb(struct intel_engine_cs *engine)
 		 * status notifier.
 		 */
 
-		GEM_TRACE("%s csb[%d]: status=0x%08x:0x%08x, active=0x%x\n",
+		GEM_TRACE("%s csb[%d]: status=0x%08x:0x%08x\n",
 			  engine->name, head,
-			  buf[2 * head + 0], buf[2 * head + 1],
-			  execlists->active);
+			  buf[2 * head + 0], buf[2 * head + 1]);
 
 		status = buf[2 * head];
-		if (status & (GEN8_CTX_STATUS_IDLE_ACTIVE |
-			      GEN8_CTX_STATUS_PREEMPTED))
-			execlists_set_active(execlists,
-					     EXECLISTS_ACTIVE_HWACK);
-		if (status & GEN8_CTX_STATUS_ACTIVE_IDLE)
-			execlists_clear_active(execlists,
-					       EXECLISTS_ACTIVE_HWACK);
-
-		if (!(status & GEN8_CTX_STATUS_COMPLETED_MASK))
-			continue;
-
-		/* We should never get a COMPLETED | IDLE_ACTIVE! */
-		GEM_BUG_ON(status & GEN8_CTX_STATUS_IDLE_ACTIVE);
+		if (status & GEN8_CTX_STATUS_IDLE_ACTIVE) {
+promote:
+			GEM_BUG_ON(!assert_pending_valid(execlists, "promote"));
+			execlists->active =
+				memcpy(execlists->inflight,
+				       execlists->pending,
+				       execlists_num_ports(execlists) *
+				       sizeof(*execlists->pending));
+			execlists->pending[0] = NULL;
 
-		if (status & GEN8_CTX_STATUS_COMPLETE &&
-		    buf[2*head + 1] == execlists->preempt_complete_status) {
-			GEM_TRACE("%s preempt-idle\n", engine->name);
-			complete_preempt_context(execlists);
-			continue;
-		}
+			if (!inject_preempt_hang(execlists))
+				ring_pause(engine) = 0;
+		} else if (status & GEN8_CTX_STATUS_PREEMPTED) {
+			struct i915_request * const *port = execlists->active;
 
-		if (status & GEN8_CTX_STATUS_PREEMPTED &&
-		    execlists_is_active(execlists,
-					EXECLISTS_ACTIVE_PREEMPT))
-			continue;
+			trace_ports(execlists, "preempted", execlists->active);
 
-		GEM_BUG_ON(!execlists_is_active(execlists,
-						EXECLISTS_ACTIVE_USER));
+			while (*port)
+				execlists_schedule_out(*port++);
 
-		rq = port_unpack(port, &count);
-		GEM_TRACE("%s out[0]: ctx=%d.%d, fence %llx:%lld (current %d), prio=%d\n",
-			  engine->name,
-			  port->context_id, count,
-			  rq ? rq->fence.context : 0,
-			  rq ? rq->fence.seqno : 0,
-			  rq ? hwsp_seqno(rq) : 0,
-			  rq ? rq_prio(rq) : 0);
+			goto promote;
+		} else if (*execlists->active) {
+			struct i915_request *rq = *execlists->active++;
 
-		/* Check the context/desc id for this event matches */
-		GEM_DEBUG_BUG_ON(buf[2 * head + 1] != port->context_id);
-
-		GEM_BUG_ON(count == 0);
-		if (--count == 0) {
-			/*
-			 * On the final event corresponding to the
-			 * submission of this context, we expect either
-			 * an element-switch event or a completion
-			 * event (and on completion, the active-idle
-			 * marker). No more preemptions, lite-restore
-			 * or otherwise.
-			 */
-			GEM_BUG_ON(status & GEN8_CTX_STATUS_PREEMPTED);
-			GEM_BUG_ON(port_isset(&port[1]) &&
-				   !(status & GEN8_CTX_STATUS_ELEMENT_SWITCH));
-			GEM_BUG_ON(!port_isset(&port[1]) &&
-				   !(status & GEN8_CTX_STATUS_ACTIVE_IDLE));
+			trace_ports(execlists, "completed",
+				    execlists->active - 1);
 
 			/*
 			 * We rely on the hardware being strongly
@@ -1292,21 +1250,10 @@ static void process_csb(struct intel_engine_cs *engine)
 			 * user interrupt and CSB is processed.
 			 */
 			GEM_BUG_ON(!i915_request_completed(rq));
+			execlists_schedule_out(rq);
 
-			execlists_context_schedule_out(rq,
-						       INTEL_CONTEXT_SCHEDULE_OUT);
-			i915_request_put(rq);
-
-			GEM_TRACE("%s completed ctx=%d\n",
-				  engine->name, port->context_id);
-
-			port = execlists_port_complete(execlists, port);
-			if (port_isset(port))
-				execlists_user_begin(execlists, port);
-			else
-				execlists_user_end(execlists);
-		} else {
-			port_set(port, port_pack(rq, count));
+			GEM_BUG_ON(execlists->active - execlists->inflight >
+				   execlists_num_ports(execlists));
 		}
 	} while (head != tail);
 
@@ -1331,7 +1278,7 @@ static void __execlists_submission_tasklet(struct intel_engine_cs *const engine)
 	lockdep_assert_held(&engine->active.lock);
 
 	process_csb(engine);
-	if (!execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
+	if (!engine->execlists.pending[0])
 		execlists_dequeue(engine);
 }
 
@@ -1344,11 +1291,6 @@ static void execlists_submission_tasklet(unsigned long data)
 	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
 	unsigned long flags;
 
-	GEM_TRACE("%s awake?=%d, active=%x\n",
-		  engine->name,
-		  !!intel_wakeref_active(&engine->wakeref),
-		  engine->execlists.active);
-
 	spin_lock_irqsave(&engine->active.lock, flags);
 	__execlists_submission_tasklet(engine);
 	spin_unlock_irqrestore(&engine->active.lock, flags);
@@ -1375,12 +1317,16 @@ static void __submit_queue_imm(struct intel_engine_cs *engine)
 		tasklet_hi_schedule(&execlists->tasklet);
 }
 
-static void submit_queue(struct intel_engine_cs *engine, int prio)
+static void submit_queue(struct intel_engine_cs *engine,
+			 const struct i915_request *rq)
 {
-	if (prio > engine->execlists.queue_priority_hint) {
-		engine->execlists.queue_priority_hint = prio;
-		__submit_queue_imm(engine);
-	}
+	struct intel_engine_execlists *execlists = &engine->execlists;
+
+	if (rq_prio(rq) <= execlists->queue_priority_hint)
+		return;
+
+	execlists->queue_priority_hint = rq_prio(rq);
+	__submit_queue_imm(engine);
 }
 
 static void execlists_submit_request(struct i915_request *request)
@@ -1396,7 +1342,7 @@ static void execlists_submit_request(struct i915_request *request)
 	GEM_BUG_ON(RB_EMPTY_ROOT(&engine->execlists.queue.rb_root));
 	GEM_BUG_ON(list_empty(&request->sched.link));
 
-	submit_queue(engine, rq_prio(request));
+	submit_queue(engine, request);
 
 	spin_unlock_irqrestore(&engine->active.lock, flags);
 }
@@ -2053,27 +1999,13 @@ static void execlists_reset_prepare(struct intel_engine_cs *engine)
 	spin_unlock_irqrestore(&engine->active.lock, flags);
 }
 
-static bool lrc_regs_ok(const struct i915_request *rq)
-{
-	const struct intel_ring *ring = rq->ring;
-	const u32 *regs = rq->hw_context->lrc_reg_state;
-
-	/* Quick spot check for the common signs of context corruption */
-
-	if (regs[CTX_RING_BUFFER_CONTROL + 1] !=
-	    (RING_CTL_SIZE(ring->size) | RING_VALID))
-		return false;
-
-	if (regs[CTX_RING_BUFFER_START + 1] != i915_ggtt_offset(ring->vma))
-		return false;
-
-	return true;
-}
-
-static void reset_csb_pointers(struct intel_engine_execlists *execlists)
+static void reset_csb_pointers(struct intel_engine_cs *engine)
 {
+	struct intel_engine_execlists * const execlists = &engine->execlists;
 	const unsigned int reset_value = execlists->csb_size - 1;
 
+	ring_pause(engine) = 0;
+
 	/*
 	 * After a reset, the HW starts writing into CSB entry [0]. We
 	 * therefore have to set our HEAD pointer back one entry so that
@@ -2120,18 +2052,19 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
 	process_csb(engine); /* drain preemption events */
 
 	/* Following the reset, we need to reload the CSB read/write pointers */
-	reset_csb_pointers(&engine->execlists);
+	reset_csb_pointers(engine);
 
 	/*
 	 * Save the currently executing context, even if we completed
 	 * its request, it was still running at the time of the
 	 * reset and will have been clobbered.
 	 */
-	if (!port_isset(execlists->port))
-		goto out_clear;
+	rq = execlists_active(execlists);
+	if (!rq)
+		return;
 
-	rq = port_request(execlists->port);
 	ce = rq->hw_context;
+	rq = active_request(rq);
 
 	/*
 	 * Catch up with any missed context-switch interrupts.
@@ -2144,9 +2077,12 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
 	 */
 	execlists_cancel_port_requests(execlists);
 
-	rq = active_request(rq);
-	if (!rq)
+	if (!rq) {
+		ce->ring->head = ce->ring->tail;
 		goto out_replay;
+	}
+
+	ce->ring->head = intel_ring_wrap(ce->ring, rq->head);
 
 	/*
 	 * If this request hasn't started yet, e.g. it is waiting on a
@@ -2160,7 +2096,7 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
 	 * Otherwise, if we have not started yet, the request should replay
 	 * perfectly and we do not need to flag the result as being erroneous.
 	 */
-	if (!i915_request_started(rq) && lrc_regs_ok(rq))
+	if (!i915_request_started(rq))
 		goto out_replay;
 
 	/*
@@ -2175,7 +2111,7 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
 	 * image back to the expected values to skip over the guilty request.
 	 */
 	i915_reset_request(rq, stalled);
-	if (!stalled && lrc_regs_ok(rq))
+	if (!stalled)
 		goto out_replay;
 
 	/*
@@ -2195,17 +2131,13 @@ static void __execlists_reset(struct intel_engine_cs *engine, bool stalled)
 	execlists_init_reg_state(regs, ce, engine, ce->ring);
 
 out_replay:
-	/* Rerun the request; its payload has been neutered (if guilty). */
-	ce->ring->head =
-		rq ? intel_ring_wrap(ce->ring, rq->head) : ce->ring->tail;
+	GEM_TRACE("%s replay {head:%04x, tail:%04x}\n",
+		  engine->name, ce->ring->head, ce->ring->tail);
 	intel_ring_update_space(ce->ring);
 	__execlists_update_reg_state(ce, engine);
 
 	/* Push back any incomplete requests for replay after the reset. */
 	__unwind_incomplete_requests(engine);
-
-out_clear:
-	execlists_clear_all_active(execlists);
 }
 
 static void execlists_reset(struct intel_engine_cs *engine, bool stalled)
@@ -2301,7 +2233,6 @@ static void execlists_cancel_requests(struct intel_engine_cs *engine)
 
 	execlists->queue_priority_hint = INT_MIN;
 	execlists->queue = RB_ROOT_CACHED;
-	GEM_BUG_ON(port_isset(execlists->port));
 
 	GEM_BUG_ON(__tasklet_is_enabled(&execlists->tasklet));
 	execlists->tasklet.func = nop_submission_tasklet;
@@ -2519,15 +2450,29 @@ static u32 *gen8_emit_wa_tail(struct i915_request *request, u32 *cs)
 	return cs;
 }
 
+static u32 *emit_preempt_busywait(struct i915_request *request, u32 *cs)
+{
+	*cs++ = MI_SEMAPHORE_WAIT |
+		MI_SEMAPHORE_GLOBAL_GTT |
+		MI_SEMAPHORE_POLL |
+		MI_SEMAPHORE_SAD_EQ_SDD;
+	*cs++ = 0;
+	*cs++ = intel_hws_preempt_address(request->engine);
+	*cs++ = 0;
+
+	return cs;
+}
+
 static u32 *gen8_emit_fini_breadcrumb(struct i915_request *request, u32 *cs)
 {
 	cs = gen8_emit_ggtt_write(cs,
 				  request->fence.seqno,
 				  request->timeline->hwsp_offset,
 				  0);
-
 	*cs++ = MI_USER_INTERRUPT;
+
 	*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
+	cs = emit_preempt_busywait(request, cs);
 
 	request->tail = intel_ring_offset(request, cs);
 	assert_ring_tail_valid(request->ring, request->tail);
@@ -2548,9 +2493,10 @@ static u32 *gen8_emit_fini_breadcrumb_rcs(struct i915_request *request, u32 *cs)
 				    PIPE_CONTROL_FLUSH_ENABLE |
 				    PIPE_CONTROL_CS_STALL,
 				    0);
-
 	*cs++ = MI_USER_INTERRUPT;
+
 	*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
+	cs = emit_preempt_busywait(request, cs);
 
 	request->tail = intel_ring_offset(request, cs);
 	assert_ring_tail_valid(request->ring, request->tail);
@@ -2599,8 +2545,7 @@ void intel_execlists_set_default_submission(struct intel_engine_cs *engine)
 	engine->flags |= I915_ENGINE_SUPPORTS_STATS;
 	if (!intel_vgpu_active(engine->i915))
 		engine->flags |= I915_ENGINE_HAS_SEMAPHORES;
-	if (engine->preempt_context &&
-	    HAS_LOGICAL_RING_PREEMPTION(engine->i915))
+	if (HAS_LOGICAL_RING_PREEMPTION(engine->i915))
 		engine->flags |= I915_ENGINE_HAS_PREEMPTION;
 }
 
@@ -2723,11 +2668,6 @@ int intel_execlists_submission_init(struct intel_engine_cs *engine)
 			i915_mmio_reg_offset(RING_ELSP(base));
 	}
 
-	execlists->preempt_complete_status = ~0u;
-	if (engine->preempt_context)
-		execlists->preempt_complete_status =
-			upper_32_bits(engine->preempt_context->lrc_desc);
-
 	execlists->csb_status =
 		&engine->status_page.addr[I915_HWS_CSB_BUF0_INDEX];
 
@@ -2739,7 +2679,7 @@ int intel_execlists_submission_init(struct intel_engine_cs *engine)
 	else
 		execlists->csb_size = GEN11_CSB_ENTRIES;
 
-	reset_csb_pointers(execlists);
+	reset_csb_pointers(engine);
 
 	return 0;
 }
@@ -2922,11 +2862,6 @@ populate_lr_context(struct intel_context *ce,
 	if (!engine->default_state)
 		regs[CTX_CONTEXT_CONTROL + 1] |=
 			_MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT);
-	if (ce->gem_context == engine->i915->preempt_context &&
-	    INTEL_GEN(engine->i915) < 11)
-		regs[CTX_CONTEXT_CONTROL + 1] |=
-			_MASKED_BIT_ENABLE(CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT |
-					   CTX_CTRL_ENGINE_CTX_SAVE_INHIBIT);
 
 	ret = 0;
 err_unpin_ctx:
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index b7e9fddef270..a497cf7acb6a 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1248,10 +1248,10 @@ static void error_record_engine_registers(struct i915_gpu_state *error,
 	}
 }
 
-static void record_request(struct i915_request *request,
+static void record_request(const struct i915_request *request,
 			   struct drm_i915_error_request *erq)
 {
-	struct i915_gem_context *ctx = request->gem_context;
+	const struct i915_gem_context *ctx = request->gem_context;
 
 	erq->flags = request->fence.flags;
 	erq->context = request->fence.context;
@@ -1315,20 +1315,15 @@ static void engine_record_requests(struct intel_engine_cs *engine,
 	ee->num_requests = count;
 }
 
-static void error_record_engine_execlists(struct intel_engine_cs *engine,
+static void error_record_engine_execlists(const struct intel_engine_cs *engine,
 					  struct drm_i915_error_engine *ee)
 {
 	const struct intel_engine_execlists * const execlists = &engine->execlists;
-	unsigned int n;
+	struct i915_request * const *port = execlists->active;
+	unsigned int n = 0;
 
-	for (n = 0; n < execlists_num_ports(execlists); n++) {
-		struct i915_request *rq = port_request(&execlists->port[n]);
-
-		if (!rq)
-			break;
-
-		record_request(rq, &ee->execlist[n]);
-	}
+	while (*port)
+		record_request(*port++, &ee->execlist[n++]);
 
 	ee->num_ports = n;
 }
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 51b068a57193..98e4743b03be 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -276,6 +276,12 @@ static bool i915_request_retire(struct i915_request *rq)
 
 	local_irq_disable();
 
+	/*
+	 * We only loosely track inflight requests across preemption,
+	 * and so we may find ourselves attempting to retire a _completed_
+	 * request that we have removed from the HW and put back on a run
+	 * queue.
+	 */
 	spin_lock(&rq->engine->active.lock);
 	list_del(&rq->sched.link);
 	spin_unlock(&rq->engine->active.lock);
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index edbbdfec24ab..bebc1e9b4a5e 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -28,6 +28,7 @@
 #include <linux/dma-fence.h>
 #include <linux/lockdep.h>
 
+#include "gt/intel_context_types.h"
 #include "gt/intel_engine_types.h"
 
 #include "i915_gem.h"
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index 2e9b38bdc33c..b1ba3e65cd52 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -179,8 +179,7 @@ static inline int rq_prio(const struct i915_request *rq)
 
 static void kick_submission(struct intel_engine_cs *engine, int prio)
 {
-	const struct i915_request *inflight =
-		port_request(engine->execlists.port);
+	const struct i915_request *inflight = *engine->execlists.active;
 
 	/*
 	 * If we are already the currently executing context, don't
diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h
index 2987219a6300..4920ff9aba62 100644
--- a/drivers/gpu/drm/i915/i915_utils.h
+++ b/drivers/gpu/drm/i915/i915_utils.h
@@ -131,6 +131,18 @@ __check_struct_size(size_t base, size_t arr, size_t count, size_t *size)
 	((typeof(ptr))((unsigned long)(ptr) | __bits));			\
 })
 
+#define ptr_count_dec(p_ptr) do {					\
+	typeof(p_ptr) __p = (p_ptr);					\
+	unsigned long __v = (unsigned long)(*__p);			\
+	*__p = (typeof(*p_ptr))(--__v);					\
+} while (0)
+
+#define ptr_count_inc(p_ptr) do {					\
+	typeof(p_ptr) __p = (p_ptr);					\
+	unsigned long __v = (unsigned long)(*__p);			\
+	*__p = (typeof(*p_ptr))(++__v);					\
+} while (0)
+
 #define page_mask_bits(ptr) ptr_mask_bits(ptr, PAGE_SHIFT)
 #define page_unmask_bits(ptr) ptr_unmask_bits(ptr, PAGE_SHIFT)
 #define page_pack_bits(ptr, bits) ptr_pack_bits(ptr, bits, PAGE_SHIFT)
diff --git a/drivers/gpu/drm/i915/intel_guc_submission.c b/drivers/gpu/drm/i915/intel_guc_submission.c
index db531ebc7704..12c22359fdac 100644
--- a/drivers/gpu/drm/i915/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/intel_guc_submission.c
@@ -32,7 +32,11 @@
 #include "intel_guc_submission.h"
 #include "i915_drv.h"
 
-#define GUC_PREEMPT_FINISHED		0x1
+enum {
+	GUC_PREEMPT_NONE = 0,
+	GUC_PREEMPT_INPROGRESS,
+	GUC_PREEMPT_FINISHED,
+};
 #define GUC_PREEMPT_BREADCRUMB_DWORDS	0x8
 #define GUC_PREEMPT_BREADCRUMB_BYTES	\
 	(sizeof(u32) * GUC_PREEMPT_BREADCRUMB_DWORDS)
@@ -537,15 +541,11 @@ static void guc_add_request(struct intel_guc *guc, struct i915_request *rq)
 	u32 ctx_desc = lower_32_bits(rq->hw_context->lrc_desc);
 	u32 ring_tail = intel_ring_set_tail(rq->ring, rq->tail) / sizeof(u64);
 
-	spin_lock(&client->wq_lock);
-
 	guc_wq_item_append(client, engine->guc_id, ctx_desc,
 			   ring_tail, rq->fence.seqno);
 	guc_ring_doorbell(client);
 
 	client->submissions[engine->id] += 1;
-
-	spin_unlock(&client->wq_lock);
 }
 
 /*
@@ -631,8 +631,9 @@ static void inject_preempt_context(struct work_struct *work)
 	data[6] = intel_guc_ggtt_offset(guc, guc->shared_data);
 
 	if (WARN_ON(intel_guc_send(guc, data, ARRAY_SIZE(data)))) {
-		execlists_clear_active(&engine->execlists,
-				       EXECLISTS_ACTIVE_PREEMPT);
+		intel_write_status_page(engine,
+					I915_GEM_HWS_PREEMPT,
+					GUC_PREEMPT_NONE);
 		tasklet_schedule(&engine->execlists.tasklet);
 	}
 
@@ -672,8 +673,6 @@ static void complete_preempt_context(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists *execlists = &engine->execlists;
 
-	GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
-
 	if (inject_preempt_hang(execlists))
 		return;
 
@@ -681,89 +680,90 @@ static void complete_preempt_context(struct intel_engine_cs *engine)
 	execlists_unwind_incomplete_requests(execlists);
 
 	wait_for_guc_preempt_report(engine);
-	intel_write_status_page(engine, I915_GEM_HWS_PREEMPT, 0);
+	intel_write_status_page(engine, I915_GEM_HWS_PREEMPT, GUC_PREEMPT_NONE);
 }
 
-/**
- * guc_submit() - Submit commands through GuC
- * @engine: engine associated with the commands
- *
- * The only error here arises if the doorbell hardware isn't functioning
- * as expected, which really shouldn't happen.
- */
-static void guc_submit(struct intel_engine_cs *engine)
+static void guc_submit(struct intel_engine_cs *engine,
+		       struct i915_request **out,
+		       struct i915_request **end)
 {
 	struct intel_guc *guc = &engine->i915->guc;
-	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct execlist_port *port = execlists->port;
-	unsigned int n;
+	struct intel_guc_client *client = guc->execbuf_client;
 
-	for (n = 0; n < execlists_num_ports(execlists); n++) {
-		struct i915_request *rq;
-		unsigned int count;
+	spin_lock(&client->wq_lock);
 
-		rq = port_unpack(&port[n], &count);
-		if (rq && count == 0) {
-			port_set(&port[n], port_pack(rq, ++count));
+	do {
+		struct i915_request *rq = *out++;
 
-			flush_ggtt_writes(rq->ring->vma);
+		flush_ggtt_writes(rq->ring->vma);
+		guc_add_request(guc, rq);
+	} while (out != end);
 
-			guc_add_request(guc, rq);
-		}
-	}
+	spin_unlock(&client->wq_lock);
 }
 
-static void port_assign(struct execlist_port *port, struct i915_request *rq)
+static inline int rq_prio(const struct i915_request *rq)
 {
-	GEM_BUG_ON(port_isset(port));
-
-	port_set(port, i915_request_get(rq));
+	return rq->sched.attr.priority | __NO_PREEMPTION;
 }
 
-static inline int rq_prio(const struct i915_request *rq)
+static struct i915_request *schedule_in(struct i915_request *rq, int idx)
 {
-	return rq->sched.attr.priority;
+	trace_i915_request_in(rq, idx);
+
+	if (!rq->hw_context->inflight)
+		rq->hw_context->inflight = rq->engine;
+	intel_context_inflight_inc(rq->hw_context);
+
+	return i915_request_get(rq);
 }
 
-static inline int port_prio(const struct execlist_port *port)
+static void schedule_out(struct i915_request *rq)
 {
-	return rq_prio(port_request(port)) | __NO_PREEMPTION;
+	trace_i915_request_out(rq);
+
+	intel_context_inflight_dec(rq->hw_context);
+	if (!intel_context_inflight_count(rq->hw_context))
+		rq->hw_context->inflight = NULL;
+
+	i915_request_put(rq);
 }
 
-static bool __guc_dequeue(struct intel_engine_cs *engine)
+static void __guc_dequeue(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct execlist_port *port = execlists->port;
-	struct i915_request *last = NULL;
-	const struct execlist_port * const last_port =
-		&execlists->port[execlists->port_mask];
+	struct i915_request **first = execlists->inflight;
+	struct i915_request ** const last_port = first + execlists->port_mask;
+	struct i915_request *last = first[0];
+	struct i915_request **port;
 	bool submit = false;
 	struct rb_node *rb;
 
 	lockdep_assert_held(&engine->active.lock);
 
-	if (port_isset(port)) {
+	if (last) {
 		if (intel_engine_has_preemption(engine)) {
 			struct guc_preempt_work *preempt_work =
 				&engine->i915->guc.preempt_work[engine->id];
 			int prio = execlists->queue_priority_hint;
 
-			if (i915_scheduler_need_preempt(prio,
-							port_prio(port))) {
-				execlists_set_active(execlists,
-						     EXECLISTS_ACTIVE_PREEMPT);
+			if (i915_scheduler_need_preempt(prio, rq_prio(last))) {
+				intel_write_status_page(engine,
+							I915_GEM_HWS_PREEMPT,
+							GUC_PREEMPT_INPROGRESS);
 				queue_work(engine->i915->guc.preempt_wq,
 					   &preempt_work->work);
-				return false;
+				return;
 			}
 		}
 
-		port++;
-		if (port_isset(port))
-			return false;
+		if (*++first)
+			return;
+
+		last = NULL;
 	}
-	GEM_BUG_ON(port_isset(port));
 
+	port = first;
 	while ((rb = rb_first_cached(&execlists->queue))) {
 		struct i915_priolist *p = to_priolist(rb);
 		struct i915_request *rq, *rn;
@@ -774,18 +774,15 @@ static bool __guc_dequeue(struct intel_engine_cs *engine)
 				if (port == last_port)
 					goto done;
 
-				if (submit)
-					port_assign(port, last);
+				*port = schedule_in(last,
+						    port - execlists->inflight);
 				port++;
 			}
 
 			list_del_init(&rq->sched.link);
-
 			__i915_request_submit(rq);
-			trace_i915_request_in(rq, port_index(port, execlists));
-
-			last = rq;
 			submit = true;
+			last = rq;
 		}
 
 		rb_erase_cached(&p->node, &execlists->queue);
@@ -794,58 +791,41 @@ static bool __guc_dequeue(struct intel_engine_cs *engine)
 done:
 	execlists->queue_priority_hint =
 		rb ? to_priolist(rb)->priority : INT_MIN;
-	if (submit)
-		port_assign(port, last);
-	if (last)
-		execlists_user_begin(execlists, execlists->port);
-
-	/* We must always keep the beast fed if we have work piled up */
-	GEM_BUG_ON(port_isset(execlists->port) &&
-		   !execlists_is_active(execlists, EXECLISTS_ACTIVE_USER));
-	GEM_BUG_ON(rb_first_cached(&execlists->queue) &&
-		   !port_isset(execlists->port));
-
-	return submit;
-}
-
-static void guc_dequeue(struct intel_engine_cs *engine)
-{
-	if (__guc_dequeue(engine))
-		guc_submit(engine);
+	if (submit) {
+		*port = schedule_in(last, port - execlists->inflight);
+		*++port = NULL;
+		guc_submit(engine, first, port);
+	}
+	execlists->active = execlists->inflight;
 }
 
 static void guc_submission_tasklet(unsigned long data)
 {
 	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
 	struct intel_engine_execlists * const execlists = &engine->execlists;
-	struct execlist_port *port = execlists->port;
-	struct i915_request *rq;
+	struct i915_request **port, *rq;
 	unsigned long flags;
 
 	spin_lock_irqsave(&engine->active.lock, flags);
 
-	rq = port_request(port);
-	while (rq && i915_request_completed(rq)) {
-		trace_i915_request_out(rq);
-		i915_request_put(rq);
+	for (port = execlists->inflight; (rq = *port); port++) {
+		if (!i915_request_completed(rq))
+			break;
 
-		port = execlists_port_complete(execlists, port);
-		if (port_isset(port)) {
-			execlists_user_begin(execlists, port);
-			rq = port_request(port);
-		} else {
-			execlists_user_end(execlists);
-			rq = NULL;
-		}
+		schedule_out(rq);
+	}
+	if (port != execlists->inflight) {
+		int idx = port - execlists->inflight;
+		int rem = ARRAY_SIZE(execlists->inflight) - idx;
+		memmove(execlists->inflight, port, rem * sizeof(*port));
 	}
 
-	if (execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT) &&
-	    intel_read_status_page(engine, I915_GEM_HWS_PREEMPT) ==
+	if (intel_read_status_page(engine, I915_GEM_HWS_PREEMPT) ==
 	    GUC_PREEMPT_FINISHED)
 		complete_preempt_context(engine);
 
-	if (!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT))
-		guc_dequeue(engine);
+	if (!intel_read_status_page(engine, I915_GEM_HWS_PREEMPT))
+		__guc_dequeue(engine);
 
 	spin_unlock_irqrestore(&engine->active.lock, flags);
 }
@@ -959,7 +939,6 @@ static void guc_cancel_requests(struct intel_engine_cs *engine)
 
 	execlists->queue_priority_hint = INT_MIN;
 	execlists->queue = RB_ROOT_CACHED;
-	GEM_BUG_ON(port_isset(execlists->port));
 
 	spin_unlock_irqrestore(&engine->active.lock, flags);
 }
@@ -1422,7 +1401,7 @@ int intel_guc_submission_enable(struct intel_guc *guc)
 	 * and it is guaranteed that it will remove the work item from the
 	 * queue before our request is completed.
 	 */
-	BUILD_BUG_ON(ARRAY_SIZE(engine->execlists.port) *
+	BUILD_BUG_ON(ARRAY_SIZE(engine->execlists.inflight) *
 		     sizeof(struct guc_wq_item) *
 		     I915_NUM_ENGINES > GUC_WQ_SIZE);
 
diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
index 298bb7116c51..1a5b9e284ca9 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -366,13 +366,15 @@ static int __igt_breadcrumbs_smoketest(void *arg)
 
 		if (!wait_event_timeout(wait->wait,
 					i915_sw_fence_done(wait),
-					HZ / 2)) {
+					5 * HZ)) {
 			struct i915_request *rq = requests[count - 1];
 
-			pr_err("waiting for %d fences (last %llx:%lld) on %s timed out!\n",
-			       count,
+			pr_err("waiting for %d/%d fences (last %llx:%lld) on %s timed out!\n",
+			       atomic_read(&wait->pending), count,
 			       rq->fence.context, rq->fence.seqno,
 			       t->engine->name);
+			GEM_TRACE_DUMP();
+
 			i915_gem_set_wedged(t->engine->i915);
 			GEM_BUG_ON(!i915_request_completed(rq));
 			i915_sw_fence_wait(wait);
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 06/26] drm/i915/execlists: Minimalistic timeslicing
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (3 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 05/26] drm/i915/execlists: Preempt-to-busy Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 07/26] drm/i915/execlists: Force preemption Chris Wilson
                   ` (24 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

If we have multiple contexts of equal priority pending execution,
activate a timer to demote the currently executing context in favour of
the next in the queue when that timeslice expires. This enforces
fairness between contexts (so long as they allow preemption -- forced
preemption, in the future, will kick those who do not obey) and allows
us to avoid userspace blocking forward progress with e.g. unbounded
MI_SEMAPHORE_WAIT.

For the starting point here, we use the jiffie as our timeslice so that
we should be reasonably efficient wrt frequent CPU wakeups.
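
Purely as an illustration (not part of the diff below; the demo_* names are
invented for this sketch), the core decision is "does anything of equal or
higher priority sit behind the active request?". A standalone restatement of
the need_timeslice() check might look like:

#include <stdbool.h>
#include <stdio.h>

struct demo_request {
	int prio;
	struct demo_request *next;	/* next request queued on the engine */
};

/*
 * Mirror of the need_timeslice() idea: the active request only keeps the
 * engine if nothing of equal or higher priority is waiting behind it.
 */
static bool demo_need_timeslice(const struct demo_request *rq, int queue_hint)
{
	int hint;

	if (!rq->next)
		return false;

	hint = rq->next->prio > queue_hint ? rq->next->prio : queue_hint;
	return hint >= rq->prio;
}

int main(void)
{
	struct demo_request waiter = { .prio = 0, .next = NULL };
	struct demo_request active = { .prio = 0, .next = &waiter };

	/* An equal priority rival is queued: expect the slice to expire. */
	printf("expire? %d\n", demo_need_timeslice(&active, -1));
	return 0;
}

When that check fires, the patch arms a one-jiffy timer and, on expiry, kicks
the submission tasklet so the active context is unwound and requeued behind
its equal-priority rivals (see defer_active() below).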

Testcase: igt/gem_exec_scheduler/semaphore-resolve
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_engine_types.h |   6 +
 drivers/gpu/drm/i915/gt/intel_lrc.c          | 111 +++++++++
 drivers/gpu/drm/i915/gt/selftest_lrc.c       | 223 +++++++++++++++++++
 drivers/gpu/drm/i915/i915_scheduler.c        |   1 +
 drivers/gpu/drm/i915/i915_scheduler_types.h  |   1 +
 5 files changed, 342 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index dd0082df42cc..11a25f060fed 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -12,6 +12,7 @@
 #include <linux/kref.h>
 #include <linux/list.h>
 #include <linux/llist.h>
+#include <linux/timer.h>
 #include <linux/types.h>
 
 #include "i915_gem.h"
@@ -137,6 +138,11 @@ struct intel_engine_execlists {
 	 */
 	struct tasklet_struct tasklet;
 
+	/**
+	 * @timer: kick the current context if its timeslice expires
+	 */
+	struct timer_list timer;
+
 	/**
 	 * @default_priolist: priority list for I915_PRIORITY_NORMAL
 	 */
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 65c91b7db59d..cea08d665ef5 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -255,6 +255,7 @@ static int effective_prio(const struct i915_request *rq)
 		prio |= I915_PRIORITY_NOSEMAPHORE;
 
 	/* Restrict mere WAIT boosts from triggering preemption */
+	BUILD_BUG_ON(__NO_PREEMPTION & ~I915_PRIORITY_MASK); /* only internal */
 	return prio | __NO_PREEMPTION;
 }
 
@@ -811,6 +812,81 @@ last_active(const struct intel_engine_execlists *execlists)
 	return *last;
 }
 
+static void
+defer_request(struct i915_request * const rq, struct list_head * const pl)
+{
+	struct i915_dependency *p;
+
+	/*
+	 * We want to move the interrupted request to the back of
+	 * the round-robin list (i.e. its priority level), but
+	 * in doing so, we must then move all requests that were in
+	 * flight and were waiting for the interrupted request to
+	 * be run after it again.
+	 */
+	list_move_tail(&rq->sched.link, pl);
+
+	list_for_each_entry(p, &rq->sched.waiters_list, wait_link) {
+		struct i915_request *w =
+			container_of(p->waiter, typeof(*w), sched);
+
+		/* Leave semaphores spinning on the other engines */
+		if (w->engine != rq->engine)
+			continue;
+
+		/* No waiter should start before the active request completed */
+		GEM_BUG_ON(i915_request_started(w));
+
+		GEM_BUG_ON(rq_prio(w) > rq_prio(rq));
+		if (rq_prio(w) < rq_prio(rq))
+			continue;
+
+		if (list_empty(&w->sched.link))
+			continue; /* Not yet submitted; unready */
+
+		/*
+		 * This should be very shallow as it is limited by the
+		 * number of requests that can fit in a ring (<64) and
+		 * the number of contexts that can be in flight on this
+		 * engine.
+		 */
+		defer_request(w, pl);
+	}
+}
+
+static void defer_active(struct intel_engine_cs *engine)
+{
+	struct i915_request *rq;
+
+	rq = __unwind_incomplete_requests(engine);
+	if (!rq)
+		return;
+
+	defer_request(rq, i915_sched_lookup_priolist(engine, rq_prio(rq)));
+}
+
+static bool
+need_timeslice(struct intel_engine_cs *engine, const struct i915_request *rq)
+{
+	int hint;
+
+	if (list_is_last(&rq->sched.link, &engine->active.requests))
+		return false;
+
+	hint = max(rq_prio(list_next_entry(rq, sched.link)),
+		   engine->execlists.queue_priority_hint);
+
+	return hint >= rq_prio(rq);
+}
+
+static bool
+enable_timeslice(struct intel_engine_cs *engine)
+{
+	struct i915_request *last = last_active(&engine->execlists);
+
+	return last && need_timeslice(engine, last);
+}
+
 static void execlists_dequeue(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -904,6 +980,27 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			 */
 			last->hw_context->lrc_desc |= CTX_DESC_FORCE_RESTORE;
 			last = NULL;
+		} else if (need_timeslice(engine, last) &&
+			   !timer_pending(&engine->execlists.timer)) {
+			GEM_TRACE("%s: expired last=%llx:%lld, prio=%d, hint=%d\n",
+				  engine->name,
+				  last->fence.context,
+				  last->fence.seqno,
+				  last->sched.attr.priority,
+				  execlists->queue_priority_hint);
+
+			ring_pause(engine) = 1;
+			defer_active(engine);
+
+			/*
+			 * Unlike for preemption, if we rewind and continue
+			 * executing the same context as previously active,
+			 * the order of execution will remain the same and
+			 * the tail will only advance. We do not need to
+			 * force a full context restore, as a lite-restore
+			 * is sufficient to resample the monotonic TAIL.
+			 */
+			last = NULL;
 		} else {
 			/*
 			 * Otherwise if we already have a request pending
@@ -1226,6 +1323,9 @@ static void process_csb(struct intel_engine_cs *engine)
 				       sizeof(*execlists->pending));
 			execlists->pending[0] = NULL;
 
+			if (enable_timeslice(engine))
+				mod_timer(&execlists->timer, jiffies + 1);
+
 			if (!inject_preempt_hang(execlists))
 				ring_pause(engine) = 0;
 		} else if (status & GEN8_CTX_STATUS_PREEMPTED) {
@@ -1296,6 +1396,15 @@ static void execlists_submission_tasklet(unsigned long data)
 	spin_unlock_irqrestore(&engine->active.lock, flags);
 }
 
+static void execlists_submission_timer(struct timer_list *timer)
+{
+	struct intel_engine_cs *engine =
+		from_timer(engine, timer, execlists.timer);
+
+	/* Kick the tasklet for some interrupt coalescing and reset handling */
+	tasklet_hi_schedule(&engine->execlists.tasklet);
+}
+
 static void queue_request(struct intel_engine_cs *engine,
 			  struct i915_sched_node *node,
 			  int prio)
@@ -2525,6 +2634,7 @@ static int gen8_init_rcs_context(struct i915_request *rq)
 
 static void execlists_park(struct intel_engine_cs *engine)
 {
+	del_timer_sync(&engine->execlists.timer);
 	intel_engine_park(engine);
 }
 
@@ -2622,6 +2732,7 @@ int intel_execlists_submission_setup(struct intel_engine_cs *engine)
 
 	tasklet_init(&engine->execlists.tasklet,
 		     execlists_submission_tasklet, (unsigned long)engine);
+	timer_setup(&engine->execlists.timer, execlists_submission_timer, 0);
 
 	logical_ring_default_vfuncs(engine);
 	logical_ring_default_irqs(engine);
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 401e8b539297..0c97f953e908 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -79,6 +79,225 @@ static int live_sanitycheck(void *arg)
 	return err;
 }
 
+static int
+emit_semaphore_chain(struct i915_request *rq, struct i915_vma *vma, int idx)
+{
+	u32 *cs;
+
+	cs = intel_ring_begin(rq, 10);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	*cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
+
+	*cs++ = MI_SEMAPHORE_WAIT |
+		MI_SEMAPHORE_GLOBAL_GTT |
+		MI_SEMAPHORE_POLL |
+		MI_SEMAPHORE_SAD_NEQ_SDD;
+	*cs++ = 0;
+	*cs++ = i915_ggtt_offset(vma) + 4 * idx;
+	*cs++ = 0;
+
+	if (idx > 0) {
+		*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
+		*cs++ = i915_ggtt_offset(vma) + 4 * (idx - 1);
+		*cs++ = 0;
+		*cs++ = 1;
+	} else {
+		*cs++ = MI_NOOP;
+		*cs++ = MI_NOOP;
+		*cs++ = MI_NOOP;
+		*cs++ = MI_NOOP;
+	}
+
+	*cs++ = MI_ARB_ON_OFF | MI_ARB_DISABLE;
+
+	intel_ring_advance(rq, cs);
+	return 0;
+}
+
+static struct i915_request *
+semaphore_queue(struct intel_engine_cs *engine, struct i915_vma *vma, int idx)
+{
+	struct i915_gem_context *ctx;
+	struct i915_request *rq;
+	int err;
+
+	ctx = kernel_context(engine->i915);
+	if (!ctx)
+		return ERR_PTR(-ENOMEM);
+
+	rq = igt_request_alloc(ctx, engine);
+	if (IS_ERR(rq))
+		goto out_ctx;
+
+	err = emit_semaphore_chain(rq, vma, idx);
+	i915_request_add(rq);
+	if (err)
+		rq = ERR_PTR(err);
+
+out_ctx:
+	kernel_context_close(ctx);
+	return rq;
+}
+
+static int
+release_queue(struct intel_engine_cs *engine,
+	      struct i915_vma *vma,
+	      int idx)
+{
+	struct i915_sched_attr attr = {
+		.priority = I915_USER_PRIORITY(I915_PRIORITY_MAX),
+	};
+	struct i915_request *rq;
+	u32 *cs;
+
+	rq = i915_request_create(engine->kernel_context);
+	if (IS_ERR(rq))
+		return PTR_ERR(rq);
+
+	cs = intel_ring_begin(rq, 4);
+	if (IS_ERR(cs)) {
+		i915_request_add(rq);
+		return PTR_ERR(cs);
+	}
+
+	*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
+	*cs++ = i915_ggtt_offset(vma) + 4 * (idx - 1);
+	*cs++ = 0;
+	*cs++ = 1;
+
+	intel_ring_advance(rq, cs);
+	i915_request_add(rq);
+
+	engine->schedule(rq, &attr);
+
+	return 0;
+}
+
+static int
+slice_semaphore_queue(struct intel_engine_cs *outer,
+		      struct i915_vma *vma,
+		      int count)
+{
+	struct intel_engine_cs *engine;
+	struct i915_request *head;
+	enum intel_engine_id id;
+	int err, i, n = 0;
+
+	head = semaphore_queue(outer, vma, n++);
+	if (IS_ERR(head))
+		return PTR_ERR(head);
+
+	i915_request_get(head);
+	for_each_engine(engine, outer->i915, id) {
+		for (i = 0; i < count; i++) {
+			struct i915_request *rq;
+
+			rq = semaphore_queue(engine, vma, n++);
+			if (IS_ERR(rq)) {
+				err = PTR_ERR(rq);
+				goto out;
+			}
+		}
+	}
+
+	err = release_queue(outer, vma, n);
+	if (err)
+		goto out;
+
+	if (i915_request_wait(head,
+			      I915_WAIT_LOCKED,
+			      2 * RUNTIME_INFO(outer->i915)->num_engines * (count + 2) * (count + 3)) < 0) {
+		pr_err("Failed to slice along semaphore chain of length (%d, %d)!\n",
+		       count, n);
+		GEM_TRACE_DUMP();
+		i915_gem_set_wedged(outer->i915);
+		err = -EIO;
+	}
+
+out:
+	i915_request_put(head);
+	return err;
+}
+
+static int live_timeslice_preempt(void *arg)
+{
+	struct drm_i915_private *i915 = arg;
+	struct drm_i915_gem_object *obj;
+	intel_wakeref_t wakeref;
+	struct i915_vma *vma;
+	void *vaddr;
+	int err = 0;
+	int count;
+
+	/*
+	 * If a request takes too long, we would like to give other users
+	 * a fair go on the GPU. In particular, users may create batches
+	 * that wait upon external input, where that input may even be
+	 * supplied by another GPU job. To avoid blocking forever, we
+	 * need to preempt the current task and replace it with another
+	 * ready task.
+	 */
+
+	mutex_lock(&i915->drm.struct_mutex);
+	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+
+	obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
+	if (IS_ERR(obj)) {
+		err = PTR_ERR(obj);
+		goto err_unlock;
+	}
+
+	vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
+	if (IS_ERR(vma)) {
+		err = PTR_ERR(vma);
+		goto err_obj;
+	}
+
+	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+	if (IS_ERR(vaddr)) {
+		err = PTR_ERR(vaddr);
+		goto err_obj;
+	}
+
+	err = i915_vma_pin(vma, 0, 0, PIN_GLOBAL);
+	if (err)
+		goto err_map;
+
+	for_each_prime_number_from(count, 1, 16) {
+		struct intel_engine_cs *engine;
+		enum intel_engine_id id;
+
+		for_each_engine(engine, i915, id) {
+			memset(vaddr, 0, PAGE_SIZE);
+
+			err = slice_semaphore_queue(engine, vma, count);
+			if (err)
+				goto err_pin;
+
+			if (igt_flush_test(i915, I915_WAIT_LOCKED)) {
+				err = -EIO;
+				goto err_pin;
+			}
+		}
+	}
+
+err_pin:
+	i915_vma_unpin(vma);
+err_map:
+	i915_gem_object_unpin_map(obj);
+err_obj:
+	i915_gem_object_put(obj);
+err_unlock:
+	if (igt_flush_test(i915, I915_WAIT_LOCKED))
+		err = -EIO;
+	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+	mutex_unlock(&i915->drm.struct_mutex);
+
+	return err;
+}
+
 static int live_busywait_preempt(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
@@ -398,6 +617,9 @@ static int live_late_preempt(void *arg)
 	if (!ctx_lo)
 		goto err_ctx_hi;
 
+	/* Make sure ctx_lo stays before ctx_hi until we trigger preemption. */
+	ctx_lo->sched.priority = I915_USER_PRIORITY(1);
+
 	for_each_engine(engine, i915, id) {
 		struct igt_live_test t;
 		struct i915_request *rq;
@@ -1812,6 +2034,7 @@ int intel_execlists_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(live_sanitycheck),
+		SUBTEST(live_timeslice_preempt),
 		SUBTEST(live_busywait_preempt),
 		SUBTEST(live_preempt),
 		SUBTEST(live_late_preempt),
diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
index b1ba3e65cd52..0bd452e851d8 100644
--- a/drivers/gpu/drm/i915/i915_scheduler.c
+++ b/drivers/gpu/drm/i915/i915_scheduler.c
@@ -394,6 +394,7 @@ bool __i915_sched_node_add_dependency(struct i915_sched_node *node,
 		list_add(&dep->wait_link, &signal->waiters_list);
 		list_add(&dep->signal_link, &node->signalers_list);
 		dep->signaler = signal;
+		dep->waiter = node;
 		dep->flags = flags;
 
 		/* Keep track of whether anyone on this chain has a semaphore */
diff --git a/drivers/gpu/drm/i915/i915_scheduler_types.h b/drivers/gpu/drm/i915/i915_scheduler_types.h
index 3e309631bd0b..aad81acba9dc 100644
--- a/drivers/gpu/drm/i915/i915_scheduler_types.h
+++ b/drivers/gpu/drm/i915/i915_scheduler_types.h
@@ -62,6 +62,7 @@ struct i915_sched_node {
 
 struct i915_dependency {
 	struct i915_sched_node *signaler;
+	struct i915_sched_node *waiter;
 	struct list_head signal_link;
 	struct list_head wait_link;
 	struct list_head dfs_link;
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 07/26] drm/i915/execlists: Force preemption
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (4 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 06/26] drm/i915/execlists: Minimalistic timeslicing Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 08/26] drm/i915: Make the semaphore saturation mask global Chris Wilson
                   ` (23 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

If the preempted context takes too long to relinquish control, e.g. it
is stuck inside a shader with arbitration disabled, evict that context
with an engine reset. This ensures that preemptions are reasonably
responsive, providing a tighter QoS for the more important context at
the cost of flagging unresponsive contexts more frequently (i.e. instead
of using an ~10s hangcheck, we now evict at ~10ms).  The challenge
lies in picking a timeout that can be reasonably serviced by HW for
typical workloads, balancing the existing clients against the needs for
responsiveness.
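
As a rough model only (the demo_* names and the tick-based deadline are
invented for this sketch): submission arms a deadline alongside the
preemptive ELSP write, and if the hardware has not acknowledged the context
switch by the time it expires, we fall back to an engine reset rather than
waiting on hangcheck.

#include <stdbool.h>
#include <stdio.h>

struct demo_engine {
	bool preempt_acked;	/* has the HW switched away yet? */
	unsigned int deadline;	/* tick at which we give up and reset */
};

/* Returns true if the engine had to be reset. */
static bool demo_tick(struct demo_engine *e, unsigned int now)
{
	if (e->preempt_acked)
		return false;	/* preemption completed in time */

	if (now < e->deadline)
		return false;	/* still waiting on the CS ack */

	printf("engine reset: preemption time out\n");
	return true;
}

int main(void)
{
	struct demo_engine e = { .preempt_acked = false, .deadline = 10 };
	unsigned int t;

	for (t = 0; t <= 12 && !demo_tick(&e, t); t++)
		;
	return 0;
}

In the patch itself the deadline reuses the execlists timer via
preempt_expires(), and the fallback is preempt_reset(), disabled when
CONFIG_DRM_I915_PREEMPT_TIMEOUT is 0.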

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/Kconfig.profile | 12 ++++++
 drivers/gpu/drm/i915/gt/intel_lrc.c  | 56 ++++++++++++++++++++++++++--
 2 files changed, 65 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/Kconfig.profile b/drivers/gpu/drm/i915/Kconfig.profile
index 48df8889a88a..8273d3baafe4 100644
--- a/drivers/gpu/drm/i915/Kconfig.profile
+++ b/drivers/gpu/drm/i915/Kconfig.profile
@@ -25,3 +25,15 @@ config DRM_I915_SPIN_REQUEST
 	  May be 0 to disable the initial spin. In practice, we estimate
 	  the cost of enabling the interrupt (if currently disabled) to be
 	  a few microseconds.
+
+config DRM_I915_PREEMPT_TIMEOUT
+	int "Preempt timeout (ms)"
+	default 10 # milliseconds
+	help
+	  How long to wait (in milliseconds) for a preemption event to occur
+	  when submitting a new context via execlists. If the current context
+	  does not hit an arbitration point and yield to HW before the timer
+	  expires, the HW will be reset to allow the more important context
+	  to execute.
+
+	  May be 0 to disable the timeout.
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index cea08d665ef5..0563fe8398c5 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -887,6 +887,15 @@ enable_timeslice(struct intel_engine_cs *engine)
 	return last && need_timeslice(engine, last);
 }
 
+static unsigned long preempt_expires(void)
+{
+	unsigned long timeout =
+		msecs_to_jiffies_timeout(CONFIG_DRM_I915_PREEMPT_TIMEOUT);
+
+	barrier();
+	return jiffies + timeout;
+}
+
 static void execlists_dequeue(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -1218,6 +1227,9 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 		*port = execlists_schedule_in(last, port - execlists->pending);
 		memset(port + 1, 0, (last_port - port) * sizeof(*port));
 		execlists_submit_ports(engine);
+
+		if (CONFIG_DRM_I915_PREEMPT_TIMEOUT)
+			mod_timer(&execlists->timer, preempt_expires());
 	}
 }
 
@@ -1373,13 +1385,48 @@ static void process_csb(struct intel_engine_cs *engine)
 	invalidate_csb_entries(&buf[0], &buf[num_entries - 1]);
 }
 
-static void __execlists_submission_tasklet(struct intel_engine_cs *const engine)
+static bool __execlists_submission_tasklet(struct intel_engine_cs *const engine)
 {
 	lockdep_assert_held(&engine->active.lock);
 
 	process_csb(engine);
-	if (!engine->execlists.pending[0])
+	if (!engine->execlists.pending[0]) {
 		execlists_dequeue(engine);
+		return true;
+	}
+
+	return false;
+}
+
+static void preempt_reset(struct intel_engine_cs *engine)
+{
+	const unsigned int bit = I915_RESET_ENGINE + engine->id;
+	unsigned long *lock = &engine->i915->gpu_error.flags;
+
+	if (test_and_set_bit(bit, lock))
+		return;
+
+	tasklet_disable_nosync(&engine->execlists.tasklet);
+	spin_unlock(&engine->active.lock);
+
+	i915_reset_engine(engine, "preemption time out");
+
+	spin_lock(&engine->active.lock);
+	tasklet_enable(&engine->execlists.tasklet);
+
+	clear_bit(bit, lock);
+	wake_up_bit(lock, bit);
+}
+
+static bool preempt_timeout(struct intel_engine_cs *const engine)
+{
+	if (!CONFIG_DRM_I915_PREEMPT_TIMEOUT)
+		return false;
+
+	if (!intel_engine_has_preemption(engine))
+		return false;
+
+	return !timer_pending(&engine->execlists.timer);
 }
 
 /*
@@ -1392,7 +1439,10 @@ static void execlists_submission_tasklet(unsigned long data)
 	unsigned long flags;
 
 	spin_lock_irqsave(&engine->active.lock, flags);
-	__execlists_submission_tasklet(engine);
+
+	if (!__execlists_submission_tasklet(engine) && preempt_timeout(engine))
+		preempt_reset(engine);
+
 	spin_unlock_irqrestore(&engine->active.lock, flags);
 }
 
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 08/26] drm/i915: Make the semaphore saturation mask global
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (5 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 07/26] drm/i915/execlists: Force preemption Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-19 10:45   ` Tvrtko Ursulin
  2019-06-18  7:41 ` [PATCH 09/26] dma-fence: Propagate errors to dma-fence-array container Chris Wilson
                   ` (22 subsequent siblings)
  29 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Dmitry Ermilov

The idea behind keeping the saturation mask local to a context backfired
spectacularly. The premise with the local mask was that we would be more
proactive in attempting to use semaphores after each time the context
idled, and that all new contexts would attempt to use semaphores
ignoring the current state of the system. This turns out to be horribly
optimistic. If the system state is still oversaturated and the existing
workloads have all stopped using semaphores, the new workloads would
attempt to use semaphores and be deprioritised behind real work. The
new contexts would not switch off using semaphores until their initial
batch of low priority work had completed. Given a sufficient backlog of
work at equal user priority, this would completely starve the new work of any
GPU time.

To compensate, remove the local tracking in favour of keeping it as
global state on the engine -- once the system is saturated and
semaphores are disabled, everyone stops attempting to use semaphores
until the system is idle again. One of the reason for preferring local
context tracking was that it worked with virtual engines, so for
switching to global state we could either do a complete check of all the
virtual siblings or simply disable semaphores for those requests. This
takes the simpler approach of disabling semaphores on virtual engines.

The downside is that the decision that the engine is saturated is a
local measure -- we are only checking whether or not this context was
scheduled in a timely fashion; it may be legitimately delayed due to user
priorities. We still have the same dilemma though, that we do not want
to employ the semaphore poll unless it will be used.
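
For illustration, the new engine-wide gating can be sketched as two
hypothetical helpers (the names are invented here; the fields are the
ones this patch touches):

  /* Sketch: record that a semaphore on this engine fired too late. */
  static void note_semaphore_saturation(struct i915_request *rq)
  {
          if (rq->sched.semaphores && i915_sw_fence_signaled(&rq->semaphore))
                  rq->engine->saturated |= rq->sched.semaphores;
  }

  /* Sketch: any later request on the engine then skips the busywait. */
  static bool may_use_semaphore(const struct i915_request *rq)
  {
          return !(rq->sched.semaphores | rq->engine->saturated);
  }

engine->saturated is only cleared again in __engine_park(), i.e. once
the whole engine idles, instead of per-context as before.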

Fixes: ca6e56f654e7 ("drm/i915: Disable semaphore busywaits on saturated systems")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Cc: Dmitry Ermilov <dmitry.ermilov@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_context.c       | 2 --
 drivers/gpu/drm/i915/gt/intel_context_types.h | 2 --
 drivers/gpu/drm/i915/gt/intel_engine_pm.c     | 2 ++
 drivers/gpu/drm/i915/gt/intel_engine_types.h  | 2 ++
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 2 +-
 drivers/gpu/drm/i915/i915_request.c           | 4 ++--
 6 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 42f45744d859..2c454f227c2e 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -142,7 +142,6 @@ intel_context_init(struct intel_context *ce,
 	ce->engine = engine;
 	ce->ops = engine->cops;
 	ce->sseu = engine->sseu;
-	ce->saturated = 0;
 
 	INIT_LIST_HEAD(&ce->signal_link);
 	INIT_LIST_HEAD(&ce->signals);
@@ -223,7 +222,6 @@ void intel_context_enter_engine(struct intel_context *ce)
 
 void intel_context_exit_engine(struct intel_context *ce)
 {
-	ce->saturated = 0;
 	intel_engine_pm_put(ce->engine);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index b565c3ff4378..4c0e211c715d 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -58,8 +58,6 @@ struct intel_context {
 	atomic_t pin_count;
 	struct mutex pin_mutex; /* guards pinning and associated on-gpuing */
 
-	intel_engine_mask_t saturated; /* submitting semaphores too late? */
-
 	/**
 	 * active: Active tracker for the rq activity (inc. external) on this
 	 * intel_context object.
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index d14e352b0b17..2ce00d3dc42a 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -100,6 +100,8 @@ static int __engine_park(struct intel_wakeref *wf)
 	struct intel_engine_cs *engine =
 		container_of(wf, typeof(*engine), wakeref);
 
+	engine->saturated = 0;
+
 	/*
 	 * If one and only one request is completed between pm events,
 	 * we know that we are inside the kernel context and it is
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 11a25f060fed..1cbe10a0fec7 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -258,6 +258,8 @@ struct intel_engine_cs {
 	struct intel_context *kernel_context; /* pinned */
 	struct intel_context *preempt_context; /* pinned; optional */
 
+	intel_engine_mask_t saturated; /* submitting semaphores too late? */
+
 	unsigned long serial;
 
 	unsigned long wakeref_serial;
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 0563fe8398c5..bbbdc63906c6 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -3198,7 +3198,6 @@ static void virtual_context_exit(struct intel_context *ce)
 	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
 	unsigned int n;
 
-	ce->saturated = 0;
 	for (n = 0; n < ve->num_siblings; n++)
 		intel_engine_pm_put(ve->siblings[n]);
 }
@@ -3397,6 +3396,7 @@ intel_execlists_create_virtual(struct i915_gem_context *ctx,
 	ve->base.uabi_class = I915_ENGINE_CLASS_INVALID;
 	ve->base.instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
 	ve->base.flags = I915_ENGINE_IS_VIRTUAL;
+	ve->base.saturated = ALL_ENGINES;
 
 	snprintf(ve->base.name, sizeof(ve->base.name), "virtual");
 
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 98e4743b03be..27b9893fa8e3 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -418,7 +418,7 @@ void __i915_request_submit(struct i915_request *request)
 	 */
 	if (request->sched.semaphores &&
 	    i915_sw_fence_signaled(&request->semaphore))
-		request->hw_context->saturated |= request->sched.semaphores;
+		engine->saturated |= request->sched.semaphores;
 
 	/* We may be recursing from the signal callback of another i915 fence */
 	spin_lock_nested(&request->lock, SINGLE_DEPTH_NESTING);
@@ -798,7 +798,7 @@ already_busywaiting(struct i915_request *rq)
 	 *
 	 * See the are-we-too-late? check in __i915_request_submit().
 	 */
-	return rq->sched.semaphores | rq->hw_context->saturated;
+	return rq->sched.semaphores | rq->engine->saturated;
 }
 
 static int
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 09/26] dma-fence: Propagate errors to dma-fence-array container
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (6 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 08/26] drm/i915: Make the semaphore saturation mask global Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 10/26] dma-fence: Report the composite sync_file status Chris Wilson
                   ` (21 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Gustavo Padovan, Sumit Semwal

When one of the fences in the array is signaled, propagate its error to
the parent fence-array (keeping only the first error to be raised).

v2: Opencode cmpxchg_local to avoid compiler freakout.
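
As a consumer-side sketch (hypothetical helper, not part of this patch),
the propagated error becomes visible on the container once it signals:

  #include <linux/dma-fence.h>
  #include <linux/dma-fence-array.h>

  /* Wait for the whole array, then report the first error (if any) that
   * one of its fences raised: <0 error, 0 still pending, 1 signaled ok.
   */
  static int wait_array_result(struct dma_fence_array *array)
  {
          long ret = dma_fence_wait(&array->base, true);

          if (ret < 0)
                  return ret; /* interrupted before the array signaled */

          return dma_fence_get_status(&array->base);
  }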

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Gustavo Padovan <gustavo@padovan.org>
---
 drivers/dma-buf/dma-fence-array.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
index 12c6f64c0bc2..d90675bb4fcc 100644
--- a/drivers/dma-buf/dma-fence-array.c
+++ b/drivers/dma-buf/dma-fence-array.c
@@ -13,6 +13,12 @@
 #include <linux/slab.h>
 #include <linux/dma-fence-array.h>
 
+static void fence_set_error_once(struct dma_fence *fence, int error)
+{
+	if (!fence->error && error)
+		dma_fence_set_error(fence, error);
+}
+
 static const char *dma_fence_array_get_driver_name(struct dma_fence *fence)
 {
 	return "dma_fence_array";
@@ -38,6 +44,13 @@ static void dma_fence_array_cb_func(struct dma_fence *f,
 		container_of(cb, struct dma_fence_array_cb, cb);
 	struct dma_fence_array *array = array_cb->array;
 
+	/*
+	 * Propagate the first error reported by any of our fences, but only
+	 * before we ourselves are signaled.
+	 */
+	if (atomic_read(&array->num_pending) > 0)
+		fence_set_error_once(&array->base, f->error);
+
 	if (atomic_dec_and_test(&array->num_pending))
 		irq_work_queue(&array->work);
 	else
@@ -63,6 +76,8 @@ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
 		dma_fence_get(&array->base);
 		if (dma_fence_add_callback(array->fences[i], &cb[i].cb,
 					   dma_fence_array_cb_func)) {
+			fence_set_error_once(&array->base,
+					     array->fences[i]->error);
 			dma_fence_put(&array->base);
 			if (atomic_dec_and_test(&array->num_pending))
 				return false;
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 10/26] dma-fence: Report the composite sync_file status
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (7 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 09/26] dma-fence: Propagate errors to dma-fence-array container Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 11/26] dma-fence: Refactor signaling for manual invocation Chris Wilson
                   ` (20 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Gustavo Padovan, Sumit Semwal

Same as for the individual fences, we want to report the actual status
of the fence when queried.
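
From userspace the change is visible through SYNC_IOC_FILE_INFO; a rough
sketch (hypothetical helper, standard sync_file uapi):

  #include <errno.h>
  #include <sys/ioctl.h>
  #include <linux/sync_file.h>

  /* Query the composite status of a sync_file fd without pulling in the
   * per-fence info: <0 error, 0 still active, 1 signaled.
   */
  static int sync_file_status(int fd)
  {
          struct sync_file_info info = { .num_fences = 0 };

          if (ioctl(fd, SYNC_IOC_FILE_INFO, &info) < 0)
                  return -errno;

          return info.status;
  }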

Reported-by: Petri Latvala <petri.latvala@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: Petri Latvala <petri.latvala@intel.com>
---
 drivers/dma-buf/sync_file.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
index ee4d1a96d779..25c5c071645b 100644
--- a/drivers/dma-buf/sync_file.c
+++ b/drivers/dma-buf/sync_file.c
@@ -419,7 +419,7 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
 	 * info->num_fences.
 	 */
 	if (!info.num_fences) {
-		info.status = dma_fence_is_signaled(sync_file->fence);
+		info.status = dma_fence_get_status(sync_file->fence);
 		goto no_fences;
 	} else {
 		info.status = 1;
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 11/26] dma-fence: Refactor signaling for manual invocation
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (8 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 10/26] dma-fence: Report the composite sync_file status Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 12/26] dma-fence: Always execute signal callbacks Chris Wilson
                   ` (19 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Move the duplicated code within dma-fence.c into the header for wider
reuse. In the process apply a small micro-optimisation to only prune the
fence->cb_list once rather than use list_del on every entry.
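
A driver open-coding the signaling phases (as intel_breadcrumbs does)
then follows the same three steps; a minimal sketch assuming the
dma-fence-impl.h added below:

  #include <linux/dma-fence-impl.h>

  /* Mark, timestamp, then notify the callbacks under the fence lock. */
  static void signal_one_fence(struct dma_fence *fence)
  {
          unsigned long flags;

          if (!__dma_fence_signal(fence))
                  return; /* already signaled */

          __dma_fence_signal__timestamp(fence, ktime_get());

          spin_lock_irqsave(fence->lock, flags);
          __dma_fence_signal__notify(fence);
          spin_unlock_irqrestore(fence->lock, flags);
  }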

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/dma-buf/Makefile                    |  10 +-
 drivers/dma-buf/dma-fence-trace.c           |  28 +++
 drivers/dma-buf/dma-fence.c                 |  33 +--
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c |  32 +--
 include/linux/dma-fence-impl.h              |  84 +++++++
 include/linux/dma-fence-types.h             | 252 ++++++++++++++++++++
 include/linux/dma-fence.h                   | 222 +----------------
 7 files changed, 381 insertions(+), 280 deletions(-)
 create mode 100644 drivers/dma-buf/dma-fence-trace.c
 create mode 100644 include/linux/dma-fence-impl.h
 create mode 100644 include/linux/dma-fence-types.h

diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index e8c7310cb800..65c43778e571 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,12 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
-	 reservation.o seqno-fence.o
+obj-y := \
+	dma-buf.o \
+	dma-fence.o \
+	dma-fence-array.o \
+	dma-fence-chain.o \
+	dma-fence-trace.o \
+	reservation.o \
+	seqno-fence.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
 obj-$(CONFIG_UDMABUF)		+= udmabuf.o
diff --git a/drivers/dma-buf/dma-fence-trace.c b/drivers/dma-buf/dma-fence-trace.c
new file mode 100644
index 000000000000..eb6f282be4c0
--- /dev/null
+++ b/drivers/dma-buf/dma-fence-trace.c
@@ -0,0 +1,28 @@
+/*
+ * Fence mechanism for dma-buf and to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/dma-fence-types.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/dma_fence.h>
+
+EXPORT_TRACEPOINT_SYMBOL(dma_fence_emit);
+EXPORT_TRACEPOINT_SYMBOL(dma_fence_enable_signal);
+EXPORT_TRACEPOINT_SYMBOL(dma_fence_signaled);
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 59ac96ec7ba8..027a6a894abd 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -14,15 +14,9 @@
 #include <linux/export.h>
 #include <linux/atomic.h>
 #include <linux/dma-fence.h>
+#include <linux/dma-fence-impl.h>
 #include <linux/sched/signal.h>
 
-#define CREATE_TRACE_POINTS
-#include <trace/events/dma_fence.h>
-
-EXPORT_TRACEPOINT_SYMBOL(dma_fence_emit);
-EXPORT_TRACEPOINT_SYMBOL(dma_fence_enable_signal);
-EXPORT_TRACEPOINT_SYMBOL(dma_fence_signaled);
-
 static DEFINE_SPINLOCK(dma_fence_stub_lock);
 static struct dma_fence dma_fence_stub;
 
@@ -128,7 +122,6 @@ EXPORT_SYMBOL(dma_fence_context_alloc);
  */
 int dma_fence_signal_locked(struct dma_fence *fence)
 {
-	struct dma_fence_cb *cur, *tmp;
 	int ret = 0;
 
 	lockdep_assert_held(fence->lock);
@@ -136,7 +129,7 @@ int dma_fence_signal_locked(struct dma_fence *fence)
 	if (WARN_ON(!fence))
 		return -EINVAL;
 
-	if (test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+	if (!__dma_fence_signal(fence)) {
 		ret = -EINVAL;
 
 		/*
@@ -144,15 +137,10 @@ int dma_fence_signal_locked(struct dma_fence *fence)
 		 * still run through all callbacks
 		 */
 	} else {
-		fence->timestamp = ktime_get();
-		set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
-		trace_dma_fence_signaled(fence);
+		__dma_fence_signal__timestamp(fence, ktime_get());
 	}
 
-	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
-		list_del_init(&cur->node);
-		cur->func(fence, cur);
-	}
+	__dma_fence_signal__notify(fence);
 	return ret;
 }
 EXPORT_SYMBOL(dma_fence_signal_locked);
@@ -177,21 +165,14 @@ int dma_fence_signal(struct dma_fence *fence)
 	if (!fence)
 		return -EINVAL;
 
-	if (test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+	if (!__dma_fence_signal(fence))
 		return -EINVAL;
 
-	fence->timestamp = ktime_get();
-	set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
-	trace_dma_fence_signaled(fence);
+	__dma_fence_signal__timestamp(fence, ktime_get());
 
 	if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
-		struct dma_fence_cb *cur, *tmp;
-
 		spin_lock_irqsave(fence->lock, flags);
-		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
-			list_del_init(&cur->node);
-			cur->func(fence, cur);
-		}
+		__dma_fence_signal__notify(fence);
 		spin_unlock_irqrestore(fence->lock, flags);
 	}
 	return 0;
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index c092bdf5f0bf..f9ba43a0f4d8 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -22,8 +22,7 @@
  *
  */
 
-#include <linux/kthread.h>
-#include <trace/events/dma_fence.h>
+#include <linux/dma-fence-impl.h>
 #include <uapi/linux/sched/types.h>
 
 #include "i915_drv.h"
@@ -97,35 +96,6 @@ check_signal_order(struct intel_context *ce, struct i915_request *rq)
 	return true;
 }
 
-static bool
-__dma_fence_signal(struct dma_fence *fence)
-{
-	return !test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags);
-}
-
-static void
-__dma_fence_signal__timestamp(struct dma_fence *fence, ktime_t timestamp)
-{
-	fence->timestamp = timestamp;
-	set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
-	trace_dma_fence_signaled(fence);
-}
-
-static void
-__dma_fence_signal__notify(struct dma_fence *fence)
-{
-	struct dma_fence_cb *cur, *tmp;
-
-	lockdep_assert_held(fence->lock);
-	lockdep_assert_irqs_disabled();
-
-	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
-		INIT_LIST_HEAD(&cur->node);
-		cur->func(fence, cur);
-	}
-	INIT_LIST_HEAD(&fence->cb_list);
-}
-
 void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
 {
 	struct intel_breadcrumbs *b = &engine->breadcrumbs;
diff --git a/include/linux/dma-fence-impl.h b/include/linux/dma-fence-impl.h
new file mode 100644
index 000000000000..aa44eb031693
--- /dev/null
+++ b/include/linux/dma-fence-impl.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Fence mechanism for dma-buf to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ */
+
+#ifndef __LINUX_DMA_FENCE_IMPL_H
+#define __LINUX_DMA_FENCE_IMPL_H
+
+#include <linux/dma-fence.h>
+#include <linux/lockdep.h>
+#include <linux/list.h>
+#include <linux/ktime.h>
+
+#include <trace/events/dma_fence.h>
+
+/**
+ * __dma_fence_signal: Mark a fence as signaled
+ * @fence: the dma fence to mark
+ *
+ * The first step of the dma_fence_signal() implementation is to atomically
+ * mark the fence as signaled.
+ *
+ * Returns: true if the fence was not previously signaled, false if it was
+ * already signaled.
+ */
+static inline bool
+__dma_fence_signal(struct dma_fence *fence)
+{
+	return !test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags);
+}
+
+/**
+ * __dma_fence_signal__timestamp: sets the signaling timestamp
+ * @fence: the dma fence
+ * @timestamp: the monotonic timestamp (e.g. from ktime_get())
+ *
+ * The second step of the dma_fence_signal() implementation is to record
+ * the signaling timestamp.
+ *
+ * The dma-fence stores a timestamp of when it was signaled for inspection
+ * by userspace. This timestamp is typically the CPU time at which the
+ * signal was raised, but could be a HW timestamp generated by the event
+ * itself. Either way, it must be set on the signaled fence before
+ * callbacks are notified.
+ */
+static inline void
+__dma_fence_signal__timestamp(struct dma_fence *fence, ktime_t timestamp)
+{
+	fence->timestamp = timestamp;
+	set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
+	trace_dma_fence_signaled(fence);
+}
+
+/**
+ * __dma_fence_signal__notify: notify observers of the signal event
+ * @fence: the dma fence
+ *
+ * The final step of the dma_fence_signal() implementation is to notify
+ * all observers (dma_fence_add_callback()) of the signal event. This must
+ * be called with the fence->lock already held and irqsoff.
+ */
+static inline void
+__dma_fence_signal__notify(struct dma_fence *fence)
+{
+	struct dma_fence_cb *cur, *tmp;
+
+	lockdep_assert_held(fence->lock);
+	lockdep_assert_irqs_disabled();
+
+	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+		INIT_LIST_HEAD(&cur->node);
+		cur->func(fence, cur);
+	}
+	INIT_LIST_HEAD(&fence->cb_list);
+}
+
+#endif /* __LINUX_DMA_FENCE_IMPL_H */
diff --git a/include/linux/dma-fence-types.h b/include/linux/dma-fence-types.h
new file mode 100644
index 000000000000..52f48decf23d
--- /dev/null
+++ b/include/linux/dma-fence-types.h
@@ -0,0 +1,252 @@
+/*
+ * Fence mechanism for dma-buf to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#ifndef __LINUX_DMA_FENCE_TYPES_H
+#define __LINUX_DMA_FENCE_TYPES_H
+
+#include <linux/list.h>
+#include <linux/kref.h>
+#include <linux/ktime.h>
+#include <linux/rcupdate.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+struct dma_fence;
+struct dma_fence_ops;
+struct dma_fence_cb;
+
+/**
+ * struct dma_fence - software synchronization primitive
+ * @refcount: refcount for this fence
+ * @ops: dma_fence_ops associated with this fence
+ * @rcu: used for releasing fence with kfree_rcu
+ * @cb_list: list of all callbacks to call
+ * @lock: spin_lock_irqsave used for locking
+ * @context: execution context this fence belongs to, returned by
+ *           dma_fence_context_alloc()
+ * @seqno: the sequence number of this fence inside the execution context,
+ * can be compared to decide which fence would be signaled later.
+ * @flags: A mask of DMA_FENCE_FLAG_* defined below
+ * @timestamp: Timestamp when the fence was signaled.
+ * @error: Optional, only valid if < 0, must be set before calling
+ * dma_fence_signal, indicates that the fence has completed with an error.
+ *
+ * the flags member must be manipulated and read using the appropriate
+ * atomic ops (bit_*), so taking the spinlock will not be needed most
+ * of the time.
+ *
+ * DMA_FENCE_FLAG_SIGNALED_BIT - fence is already signaled
+ * DMA_FENCE_FLAG_TIMESTAMP_BIT - timestamp recorded for fence signaling
+ * DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called
+ * DMA_FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
+ * implementer of the fence for its own purposes. Can be used in different
+ * ways by different fence implementers, so do not rely on this.
+ *
+ * Since atomic bitops are used, this is not guaranteed to be the case.
+ * Particularly, if the bit was set, but dma_fence_signal was called right
+ * before this bit was set, it would have been able to set the
+ * DMA_FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
+ * Adding a check for DMA_FENCE_FLAG_SIGNALED_BIT after setting
+ * DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
+ * after dma_fence_signal was called, any enable_signaling call will have either
+ * been completed, or never called at all.
+ */
+struct dma_fence {
+	struct kref refcount;
+	const struct dma_fence_ops *ops;
+	struct rcu_head rcu;
+	struct list_head cb_list;
+	spinlock_t *lock;
+	u64 context;
+	u64 seqno;
+	unsigned long flags;
+	ktime_t timestamp;
+	int error;
+};
+
+enum dma_fence_flag_bits {
+	DMA_FENCE_FLAG_SIGNALED_BIT,
+	DMA_FENCE_FLAG_TIMESTAMP_BIT,
+	DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
+	DMA_FENCE_FLAG_USER_BITS, /* must always be last member */
+};
+
+typedef void (*dma_fence_func_t)(struct dma_fence *fence,
+				 struct dma_fence_cb *cb);
+
+/**
+ * struct dma_fence_cb - callback for dma_fence_add_callback()
+ * @node: used by dma_fence_add_callback() to append this struct to fence::cb_list
+ * @func: dma_fence_func_t to call
+ *
+ * This struct will be initialized by dma_fence_add_callback(), additional
+ * data can be passed along by embedding dma_fence_cb in another struct.
+ */
+struct dma_fence_cb {
+	struct list_head node;
+	dma_fence_func_t func;
+};
+
+/**
+ * struct dma_fence_ops - operations implemented for fence
+ *
+ */
+struct dma_fence_ops {
+	/**
+	 * @use_64bit_seqno:
+	 *
+	 * True if this dma_fence implementation uses 64bit seqno, false
+	 * otherwise.
+	 */
+	bool use_64bit_seqno;
+
+	/**
+	 * @get_driver_name:
+	 *
+	 * Returns the driver name. This is a callback to allow drivers to
+	 * compute the name at runtime, without having it to store permanently
+	 * for each fence, or build a cache of some sort.
+	 *
+	 * This callback is mandatory.
+	 */
+	const char * (*get_driver_name)(struct dma_fence *fence);
+
+	/**
+	 * @get_timeline_name:
+	 *
+	 * Return the name of the context this fence belongs to. This is a
+	 * callback to allow drivers to compute the name at runtime, without
+	 * having it to store permanently for each fence, or build a cache of
+	 * some sort.
+	 *
+	 * This callback is mandatory.
+	 */
+	const char * (*get_timeline_name)(struct dma_fence *fence);
+
+	/**
+	 * @enable_signaling:
+	 *
+	 * Enable software signaling of fence.
+	 *
+	 * For fence implementations that have the capability for hw->hw
+	 * signaling, they can implement this op to enable the necessary
+	 * interrupts, or insert commands into cmdstream, etc, to avoid these
+	 * costly operations for the common case where only hw->hw
+	 * synchronization is required.  This is called in the first
+	 * dma_fence_wait() or dma_fence_add_callback() path to let the fence
+	 * implementation know that there is another driver waiting on the
+	 * signal (ie. hw->sw case).
+	 *
+	 * This function can be called from atomic context, but not
+	 * from irq context, so normal spinlocks can be used.
+	 *
+	 * A return value of false indicates the fence already passed,
+	 * or some failure occurred that made it impossible to enable
+	 * signaling. True indicates successful enabling.
+	 *
+	 * &dma_fence.error may be set in enable_signaling, but only when false
+	 * is returned.
+	 *
+	 * Since many implementations can call dma_fence_signal() even before
+	 * @enable_signaling has been called there's a race window, where the
+	 * dma_fence_signal() might result in the final fence reference being
+	 * released and its memory freed. To avoid this, implementations of this
+	 * callback should grab their own reference using dma_fence_get(), to be
+	 * released when the fence is signalled (through e.g. the interrupt
+	 * handler).
+	 *
+	 * This callback is optional. If this callback is not present, then the
+	 * driver must always have signaling enabled.
+	 */
+	bool (*enable_signaling)(struct dma_fence *fence);
+
+	/**
+	 * @signaled:
+	 *
+	 * Peek whether the fence is signaled, as a fastpath optimization for
+	 * e.g. dma_fence_wait() or dma_fence_add_callback(). Note that this
+	 * callback does not need to make any guarantees beyond that a fence
+	 * once indicates as signalled must always return true from this
+	 * callback. This callback may return false even if the fence has
+	 * completed already, in this case information hasn't propagated through
+	 * the system yet. See also dma_fence_is_signaled().
+	 *
+	 * May set &dma_fence.error if returning true.
+	 *
+	 * This callback is optional.
+	 */
+	bool (*signaled)(struct dma_fence *fence);
+
+	/**
+	 * @wait:
+	 *
+	 * Custom wait implementation, defaults to dma_fence_default_wait() if
+	 * not set.
+	 *
+	 * The dma_fence_default_wait implementation should work for any fence,
+	 * as long as @enable_signaling works correctly. This hook allows
+	 * drivers to have an optimized version for the case where a process
+	 * context is already available, e.g. if @enable_signaling for the
+	 * general case needs to set up a worker thread.
+	 *
+	 * Must return -ERESTARTSYS if the wait is intr = true and the wait was
+	 * interrupted, and remaining jiffies if fence has signaled, or 0 if
+	 * wait timed out. Can also return other error values on custom
+	 * implementations, which should be treated as if the fence is signaled.
+	 * For example a hardware lockup could be reported like that.
+	 *
+	 * This callback is optional.
+	 */
+	signed long (*wait)(struct dma_fence *fence,
+			    bool intr, signed long timeout);
+
+	/**
+	 * @release:
+	 *
+	 * Called on destruction of fence to release additional resources.
+	 * Can be called from irq context.  This callback is optional. If it is
+	 * NULL, then dma_fence_free() is instead called as the default
+	 * implementation.
+	 */
+	void (*release)(struct dma_fence *fence);
+
+	/**
+	 * @fence_value_str:
+	 *
+	 * Callback to fill in free-form debug info specific to this fence, like
+	 * the sequence number.
+	 *
+	 * This callback is optional.
+	 */
+	void (*fence_value_str)(struct dma_fence *fence, char *str, int size);
+
+	/**
+	 * @timeline_value_str:
+	 *
+	 * Fills in the current value of the timeline as a string, like the
+	 * sequence number. Note that the specific fence passed to this function
+	 * should not matter, drivers should only use it to look up the
+	 * corresponding timeline structures.
+	 */
+	void (*timeline_value_str)(struct dma_fence *fence,
+				   char *str, int size);
+};
+
+#endif /* __LINUX_DMA_FENCE_TYPES_H */
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 05d29dbc7e62..1c8dd1fbafae 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -13,6 +13,7 @@
 #ifndef __LINUX_DMA_FENCE_H
 #define __LINUX_DMA_FENCE_H
 
+#include <linux/dma-fence-types.h>
 #include <linux/err.h>
 #include <linux/wait.h>
 #include <linux/list.h>
@@ -22,227 +23,6 @@
 #include <linux/printk.h>
 #include <linux/rcupdate.h>
 
-struct dma_fence;
-struct dma_fence_ops;
-struct dma_fence_cb;
-
-/**
- * struct dma_fence - software synchronization primitive
- * @refcount: refcount for this fence
- * @ops: dma_fence_ops associated with this fence
- * @rcu: used for releasing fence with kfree_rcu
- * @cb_list: list of all callbacks to call
- * @lock: spin_lock_irqsave used for locking
- * @context: execution context this fence belongs to, returned by
- *           dma_fence_context_alloc()
- * @seqno: the sequence number of this fence inside the execution context,
- * can be compared to decide which fence would be signaled later.
- * @flags: A mask of DMA_FENCE_FLAG_* defined below
- * @timestamp: Timestamp when the fence was signaled.
- * @error: Optional, only valid if < 0, must be set before calling
- * dma_fence_signal, indicates that the fence has completed with an error.
- *
- * the flags member must be manipulated and read using the appropriate
- * atomic ops (bit_*), so taking the spinlock will not be needed most
- * of the time.
- *
- * DMA_FENCE_FLAG_SIGNALED_BIT - fence is already signaled
- * DMA_FENCE_FLAG_TIMESTAMP_BIT - timestamp recorded for fence signaling
- * DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called
- * DMA_FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
- * implementer of the fence for its own purposes. Can be used in different
- * ways by different fence implementers, so do not rely on this.
- *
- * Since atomic bitops are used, this is not guaranteed to be the case.
- * Particularly, if the bit was set, but dma_fence_signal was called right
- * before this bit was set, it would have been able to set the
- * DMA_FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
- * Adding a check for DMA_FENCE_FLAG_SIGNALED_BIT after setting
- * DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
- * after dma_fence_signal was called, any enable_signaling call will have either
- * been completed, or never called at all.
- */
-struct dma_fence {
-	struct kref refcount;
-	const struct dma_fence_ops *ops;
-	struct rcu_head rcu;
-	struct list_head cb_list;
-	spinlock_t *lock;
-	u64 context;
-	u64 seqno;
-	unsigned long flags;
-	ktime_t timestamp;
-	int error;
-};
-
-enum dma_fence_flag_bits {
-	DMA_FENCE_FLAG_SIGNALED_BIT,
-	DMA_FENCE_FLAG_TIMESTAMP_BIT,
-	DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
-	DMA_FENCE_FLAG_USER_BITS, /* must always be last member */
-};
-
-typedef void (*dma_fence_func_t)(struct dma_fence *fence,
-				 struct dma_fence_cb *cb);
-
-/**
- * struct dma_fence_cb - callback for dma_fence_add_callback()
- * @node: used by dma_fence_add_callback() to append this struct to fence::cb_list
- * @func: dma_fence_func_t to call
- *
- * This struct will be initialized by dma_fence_add_callback(), additional
- * data can be passed along by embedding dma_fence_cb in another struct.
- */
-struct dma_fence_cb {
-	struct list_head node;
-	dma_fence_func_t func;
-};
-
-/**
- * struct dma_fence_ops - operations implemented for fence
- *
- */
-struct dma_fence_ops {
-	/**
-	 * @use_64bit_seqno:
-	 *
-	 * True if this dma_fence implementation uses 64bit seqno, false
-	 * otherwise.
-	 */
-	bool use_64bit_seqno;
-
-	/**
-	 * @get_driver_name:
-	 *
-	 * Returns the driver name. This is a callback to allow drivers to
-	 * compute the name at runtime, without having it to store permanently
-	 * for each fence, or build a cache of some sort.
-	 *
-	 * This callback is mandatory.
-	 */
-	const char * (*get_driver_name)(struct dma_fence *fence);
-
-	/**
-	 * @get_timeline_name:
-	 *
-	 * Return the name of the context this fence belongs to. This is a
-	 * callback to allow drivers to compute the name at runtime, without
-	 * having it to store permanently for each fence, or build a cache of
-	 * some sort.
-	 *
-	 * This callback is mandatory.
-	 */
-	const char * (*get_timeline_name)(struct dma_fence *fence);
-
-	/**
-	 * @enable_signaling:
-	 *
-	 * Enable software signaling of fence.
-	 *
-	 * For fence implementations that have the capability for hw->hw
-	 * signaling, they can implement this op to enable the necessary
-	 * interrupts, or insert commands into cmdstream, etc, to avoid these
-	 * costly operations for the common case where only hw->hw
-	 * synchronization is required.  This is called in the first
-	 * dma_fence_wait() or dma_fence_add_callback() path to let the fence
-	 * implementation know that there is another driver waiting on the
-	 * signal (ie. hw->sw case).
-	 *
-	 * This function can be called from atomic context, but not
-	 * from irq context, so normal spinlocks can be used.
-	 *
-	 * A return value of false indicates the fence already passed,
-	 * or some failure occurred that made it impossible to enable
-	 * signaling. True indicates successful enabling.
-	 *
-	 * &dma_fence.error may be set in enable_signaling, but only when false
-	 * is returned.
-	 *
-	 * Since many implementations can call dma_fence_signal() even when before
-	 * @enable_signaling has been called there's a race window, where the
-	 * dma_fence_signal() might result in the final fence reference being
-	 * released and its memory freed. To avoid this, implementations of this
-	 * callback should grab their own reference using dma_fence_get(), to be
-	 * released when the fence is signalled (through e.g. the interrupt
-	 * handler).
-	 *
-	 * This callback is optional. If this callback is not present, then the
-	 * driver must always have signaling enabled.
-	 */
-	bool (*enable_signaling)(struct dma_fence *fence);
-
-	/**
-	 * @signaled:
-	 *
-	 * Peek whether the fence is signaled, as a fastpath optimization for
-	 * e.g. dma_fence_wait() or dma_fence_add_callback(). Note that this
-	 * callback does not need to make any guarantees beyond that a fence
-	 * once indicates as signalled must always return true from this
-	 * callback. This callback may return false even if the fence has
-	 * completed already, in this case information hasn't propogated throug
-	 * the system yet. See also dma_fence_is_signaled().
-	 *
-	 * May set &dma_fence.error if returning true.
-	 *
-	 * This callback is optional.
-	 */
-	bool (*signaled)(struct dma_fence *fence);
-
-	/**
-	 * @wait:
-	 *
-	 * Custom wait implementation, defaults to dma_fence_default_wait() if
-	 * not set.
-	 *
-	 * The dma_fence_default_wait implementation should work for any fence, as long
-	 * as @enable_signaling works correctly. This hook allows drivers to
-	 * have an optimized version for the case where a process context is
-	 * already available, e.g. if @enable_signaling for the general case
-	 * needs to set up a worker thread.
-	 *
-	 * Must return -ERESTARTSYS if the wait is intr = true and the wait was
-	 * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
-	 * timed out. Can also return other error values on custom implementations,
-	 * which should be treated as if the fence is signaled. For example a hardware
-	 * lockup could be reported like that.
-	 *
-	 * This callback is optional.
-	 */
-	signed long (*wait)(struct dma_fence *fence,
-			    bool intr, signed long timeout);
-
-	/**
-	 * @release:
-	 *
-	 * Called on destruction of fence to release additional resources.
-	 * Can be called from irq context.  This callback is optional. If it is
-	 * NULL, then dma_fence_free() is instead called as the default
-	 * implementation.
-	 */
-	void (*release)(struct dma_fence *fence);
-
-	/**
-	 * @fence_value_str:
-	 *
-	 * Callback to fill in free-form debug info specific to this fence, like
-	 * the sequence number.
-	 *
-	 * This callback is optional.
-	 */
-	void (*fence_value_str)(struct dma_fence *fence, char *str, int size);
-
-	/**
-	 * @timeline_value_str:
-	 *
-	 * Fills in the current value of the timeline as a string, like the
-	 * sequence number. Note that the specific fence passed to this function
-	 * should not matter, drivers should only use it to look up the
-	 * corresponding timeline structures.
-	 */
-	void (*timeline_value_str)(struct dma_fence *fence,
-				   char *str, int size);
-};
-
 void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
 		    spinlock_t *lock, u64 context, u64 seqno);
 
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 12/26] dma-fence: Always execute signal callbacks
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (9 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 11/26] dma-fence: Refactor signaling for manual invocation Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 13/26] drm/i915: Track i915_active using debugobjects Chris Wilson
                   ` (18 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Allow for some users to surreptitiously insert lazy signal callbacks that
do not depend on enabling the signaling mechanism around every fence.
This means that we may have a cb_list even if the signaling bit is not
enabled, so always notify the callbacks.

The cost is that dma_fence_signal() must always acquire the spinlock to
ensure that the cb_list is flushed.
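
The kind of lazy callback this enables might look like the following
sketch (hypothetical helper; it appends to fence->cb_list under the
fence lock instead of going through dma_fence_add_callback()):

  #include <linux/dma-fence.h>
  #include <linux/list.h>

  /* Attach a callback without enabling signaling; with this patch,
   * dma_fence_signal() will still run it.
   */
  static bool add_lazy_callback(struct dma_fence *fence,
                                struct dma_fence_cb *cb,
                                dma_fence_func_t func)
  {
          unsigned long flags;
          bool added = false;

          spin_lock_irqsave(fence->lock, flags);
          if (!dma_fence_is_signaled_locked(fence)) {
                  cb->func = func;
                  list_add_tail(&cb->node, &fence->cb_list);
                  added = true;
          }
          spin_unlock_irqrestore(fence->lock, flags);

          return added;
  }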

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/dma-buf/dma-fence.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 027a6a894abd..ab4a456bba04 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -170,11 +170,9 @@ int dma_fence_signal(struct dma_fence *fence)
 
 	__dma_fence_signal__timestamp(fence, ktime_get());
 
-	if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
-		spin_lock_irqsave(fence->lock, flags);
-		__dma_fence_signal__notify(fence);
-		spin_unlock_irqrestore(fence->lock, flags);
-	}
+	spin_lock_irqsave(fence->lock, flags);
+	__dma_fence_signal__notify(fence);
+	spin_unlock_irqrestore(fence->lock, flags);
 	return 0;
 }
 EXPORT_SYMBOL(dma_fence_signal);
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 13/26] drm/i915: Track i915_active using debugobjects
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (10 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 12/26] dma-fence: Always execute signal callbacks Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 14/26] drm/i915: Signal fence completion from i915_request_wait Chris Wilson
                   ` (17 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Provide runtime asserts and tracking of i915_active via debugobjects.
For example, this should allow us to check that the i915_active is only
active when we expect it to be and is never freed too early.

One consequence is that, for simplicity, we no longer allow i915_active
to be on-stack, which only affected the selftests.
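
The hooks map the i915_active lifecycle onto the generic debugobjects
state machine; the same pattern in miniature, on a toy object rather
than the i915 code:

  #include <linux/debugobjects.h>
  #include <linux/slab.h>

  struct tracker { int users; };

  static struct debug_obj_descr tracker_debug = {
          .name = "tracker",
  };

  static void tracker_init(struct tracker *t)      /* known, inactive */
  {
          debug_object_init(t, &tracker_debug);
  }

  static void tracker_first_use(struct tracker *t) /* now in use */
  {
          debug_object_activate(t, &tracker_debug);
  }

  static void tracker_last_put(struct tracker *t)  /* idle again */
  {
          debug_object_deactivate(t, &tracker_debug);
  }

  static void tracker_free(struct tracker *t)      /* warns if still active */
  {
          debug_object_free(t, &tracker_debug);
          kfree(t);
  }

With CONFIG_DEBUG_OBJECTS enabled, freeing while active or activating an
uninitialised tracker is reported at runtime.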

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_active.c           | 80 +++++++++++++++++++-
 drivers/gpu/drm/i915/selftests/i915_active.c | 78 ++++++++++++++-----
 2 files changed, 137 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 41ed2798687d..e716baab276f 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -4,6 +4,8 @@
  * Copyright © 2019 Intel Corporation
  */
 
+#include <linux/debugobjects.h>
+
 #include "gt/intel_engine_pm.h"
 
 #include "i915_drv.h"
@@ -31,6 +33,69 @@ struct active_node {
 	u64 timeline;
 };
 
+#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM) && IS_ENABLED(CONFIG_DEBUG_OBJECTS)
+
+static void *active_debug_hint(void *addr)
+{
+	struct i915_active *ref = addr;
+
+	return (void *)ref->retire ?: (void *)ref;
+}
+
+static struct debug_obj_descr active_debug_desc = {
+	.name = "i915_active",
+	.debug_hint = active_debug_hint,
+};
+
+static void debug_active_init(struct i915_active *ref)
+{
+	debug_object_init(ref, &active_debug_desc);
+}
+
+static void debug_active_activate(struct i915_active *ref)
+{
+	debug_object_activate(ref, &active_debug_desc);
+}
+
+static void debug_active_deactivate(struct i915_active *ref)
+{
+	debug_object_deactivate(ref, &active_debug_desc);
+}
+
+static void debug_active_fini(struct i915_active *ref)
+{
+	debug_object_free(ref, &active_debug_desc);
+}
+
+static void debug_active_assert(struct i915_active *ref)
+{
+	debug_object_assert_init(ref, &active_debug_desc);
+}
+
+#else
+
+static void debug_active_init(struct i915_active *ref)
+{
+}
+
+static void debug_active_activate(struct i915_active *ref)
+{
+}
+
+static void debug_active_deactivate(struct i915_active *ref)
+{
+}
+
+static void debug_active_fini(struct i915_active *ref)
+{
+}
+
+static void debug_active_assert(struct i915_active *ref)
+{
+}
+
+#endif
+
 static void
 __active_park(struct i915_active *ref)
 {
@@ -50,6 +115,8 @@ __active_retire(struct i915_active *ref)
 	if (--ref->count)
 		return;
 
+	debug_active_deactivate(ref);
+
 	/* return the unused nodes to our slabcache */
 	__active_park(ref);
 
@@ -155,6 +222,8 @@ void i915_active_init(struct drm_i915_private *i915,
 		      struct i915_active *ref,
 		      void (*retire)(struct i915_active *ref))
 {
+	debug_active_init(ref);
+
 	ref->i915 = i915;
 	ref->retire = retire;
 	ref->tree = RB_ROOT;
@@ -191,13 +260,21 @@ int i915_active_ref(struct i915_active *ref,
 
 bool i915_active_acquire(struct i915_active *ref)
 {
+	debug_active_assert(ref);
 	lockdep_assert_held(BKL(ref));
-	return !ref->count++;
+
+	if (ref->count++)
+		return false;
+
+	debug_active_activate(ref);
+	return true;
 }
 
 void i915_active_release(struct i915_active *ref)
 {
+	debug_active_assert(ref);
 	lockdep_assert_held(BKL(ref));
+
 	__active_retire(ref);
 }
 
@@ -260,6 +337,7 @@ int i915_request_await_active(struct i915_request *rq, struct i915_active *ref)
 #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
 void i915_active_fini(struct i915_active *ref)
 {
+	debug_active_fini(ref);
 	GEM_BUG_ON(i915_active_request_isset(&ref->last));
 	GEM_BUG_ON(!RB_EMPTY_ROOT(&ref->tree));
 	GEM_BUG_ON(ref->count);
diff --git a/drivers/gpu/drm/i915/selftests/i915_active.c b/drivers/gpu/drm/i915/selftests/i915_active.c
index c0b3537a5fa6..98493bcc91f2 100644
--- a/drivers/gpu/drm/i915/selftests/i915_active.c
+++ b/drivers/gpu/drm/i915/selftests/i915_active.c
@@ -16,28 +16,51 @@ struct live_active {
 	bool retired;
 };
 
-static void __live_active_retire(struct i915_active *base)
+static void __live_free(struct live_active *active)
+{
+	i915_active_fini(&active->base);
+	kfree(active);
+}
+
+static void __live_retire(struct i915_active *base)
 {
 	struct live_active *active = container_of(base, typeof(*active), base);
 
 	active->retired = true;
 }
 
-static int __live_active_setup(struct drm_i915_private *i915,
-			       struct live_active *active)
+static struct live_active *__live_alloc(struct drm_i915_private *i915)
+{
+	struct live_active *active;
+
+	active = kzalloc(sizeof(*active), GFP_KERNEL);
+	if (!active)
+		return NULL;
+
+	i915_active_init(i915, &active->base, __live_retire);
+
+	return active;
+}
+
+static struct live_active *
+__live_active_setup(struct drm_i915_private *i915)
 {
 	struct intel_engine_cs *engine;
 	struct i915_sw_fence *submit;
+	struct live_active *active;
 	enum intel_engine_id id;
 	unsigned int count = 0;
 	int err = 0;
 
-	submit = heap_fence_create(GFP_KERNEL);
-	if (!submit)
-		return -ENOMEM;
+	active = __live_alloc(i915);
+	if (!active)
+		return ERR_PTR(-ENOMEM);
 
-	i915_active_init(i915, &active->base, __live_active_retire);
-	active->retired = false;
+	submit = heap_fence_create(GFP_KERNEL);
+	if (!submit) {
+		kfree(active);
+		return ERR_PTR(-ENOMEM);
+	}
 
 	if (!i915_active_acquire(&active->base)) {
 		pr_err("First i915_active_acquire should report being idle\n");
@@ -84,64 +107,79 @@ static int __live_active_setup(struct drm_i915_private *i915,
 	i915_sw_fence_commit(submit);
 	heap_fence_put(submit);
 
-	return err;
+	/* XXX leaks live_active on error */
+	return err ? ERR_PTR(err) : active;
 }
 
 static int live_active_wait(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
-	struct live_active active;
+	struct live_active *active;
 	intel_wakeref_t wakeref;
-	int err;
+	int err = 0;
 
 	/* Check that we get a callback when requests retire upon waiting */
 
 	mutex_lock(&i915->drm.struct_mutex);
 	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
 
-	err = __live_active_setup(i915, &active);
+	active = __live_active_setup(i915);
+	if (IS_ERR(active)) {
+		err = PTR_ERR(active);
+		goto err;
+	}
 
-	i915_active_wait(&active.base);
-	if (!active.retired) {
+	i915_active_wait(&active->base);
+	if (!active->retired) {
 		pr_err("i915_active not retired after waiting!\n");
 		err = -EINVAL;
 	}
 
-	i915_active_fini(&active.base);
+	__live_free(active);
+
 	if (igt_flush_test(i915, I915_WAIT_LOCKED))
 		err = -EIO;
 
+err:
 	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 	mutex_unlock(&i915->drm.struct_mutex);
+
 	return err;
 }
 
 static int live_active_retire(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
-	struct live_active active;
+	struct live_active *active;
 	intel_wakeref_t wakeref;
-	int err;
+	int err = 0;
 
 	/* Check that we get a callback when requests are indirectly retired */
 
 	mutex_lock(&i915->drm.struct_mutex);
 	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
 
-	err = __live_active_setup(i915, &active);
+	active = __live_active_setup(i915);
+	if (IS_ERR(active)) {
+		err = PTR_ERR(active);
+		goto err;
+	}
 
 	/* waits for & retires all requests */
 	if (igt_flush_test(i915, I915_WAIT_LOCKED))
 		err = -EIO;
 
-	if (!active.retired) {
+	if (!active->retired) {
 		pr_err("i915_active not retired after flushing!\n");
 		err = -EINVAL;
 	}
 
-	i915_active_fini(&active.base);
+	__live_free(active);
+
+err:
 	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 	mutex_unlock(&i915->drm.struct_mutex);
+
 	return err;
 }
 
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 14/26] drm/i915: Signal fence completion from i915_request_wait
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (11 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 13/26] drm/i915: Track i915_active using debugobjects Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 15/26] drm/i915: Remove waiting & retiring from shrinker paths Chris Wilson
                   ` (16 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

With the upcoming change to automanaged i915_active, the intent is that
whenever we wait on the set of active fences, they are signaled and
collected.  The requirement is that all successful returns from
i915_request_wait() signal the fence, so fix up the one remaining path
where we may return before the interrupt has been run.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_request.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 27b9893fa8e3..201bd8d1506f 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1458,8 +1458,10 @@ long i915_request_wait(struct i915_request *rq,
 	for (;;) {
 		set_current_state(state);
 
-		if (i915_request_completed(rq))
+		if (i915_request_completed(rq)) {
+			dma_fence_signal(&rq->fence);
 			break;
+		}
 
 		if (signal_pending_state(state, current)) {
 			timeout = -ERESTARTSYS;
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 15/26] drm/i915: Remove waiting & retiring from shrinker paths
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (12 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 14/26] drm/i915: Signal fence completion from i915_request_wait Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 16/26] drm/i915: Throw away the active object retirement complexity Chris Wilson
                   ` (15 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

i915_gem_wait_for_idle() and i915_retire_requests() introduce a
dependency on the timeline->mutex. This is problematic as we want to
later perform allocations underneath i915_active.mutex, forming a link
between the shrinker, the timeline and active mutexes. Nip this cycle in
the bud by removing the acquisition of the timeline mutex (i.e.
retiring) from inside the shrinker.
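
Roughly, the cycle being avoided is (one possible chain, using the names
from this series):

  fs_reclaim (shrinker)
    -> timeline->mutex        [i915_retire_requests / wait_for_idle]
      -> i915_active mutex    [planned follow-up work]
        -> GFP_KERNEL allocation
          -> fs_reclaim       [re-entering the shrinker]

Removing the retire/wait from the shrinker deletes the first edge, so a
later patch can allocate under the i915_active mutex without lockdep (or
a genuine deadlock) closing the loop.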

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c | 10 ----------
 drivers/gpu/drm/i915/i915_timeline.c         |  3 ---
 2 files changed, 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index 3a926a8755c6..1bbc690494c7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -169,7 +169,6 @@ i915_gem_shrink(struct drm_i915_private *i915,
 	 */
 
 	trace_i915_gem_shrink(i915, target, shrink);
-	i915_retire_requests(i915);
 
 	/*
 	 * Unbinding of objects will require HW access; Let us not wake the
@@ -269,8 +268,6 @@ i915_gem_shrink(struct drm_i915_private *i915,
 	if (shrink & I915_SHRINK_BOUND)
 		intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 
-	i915_retire_requests(i915);
-
 	shrinker_unlock(i915, unlock);
 
 	if (nr_scanned)
@@ -427,12 +424,6 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
 	if (!shrinker_lock(i915, 0, &unlock))
 		return NOTIFY_DONE;
 
-	/* Force everything onto the inactive lists */
-	if (i915_gem_wait_for_idle(i915,
-				   I915_WAIT_LOCKED,
-				   MAX_SCHEDULE_TIMEOUT))
-		goto out;
-
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
 		freed_pages += i915_gem_shrink(i915, -1UL, NULL,
 					       I915_SHRINK_BOUND |
@@ -455,7 +446,6 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
 	}
 	mutex_unlock(&i915->ggtt.vm.mutex);
 
-out:
 	shrinker_unlock(i915, unlock);
 
 	*(unsigned long *)ptr += freed_pages;
diff --git a/drivers/gpu/drm/i915/i915_timeline.c b/drivers/gpu/drm/i915/i915_timeline.c
index c311ce9c6f9d..f3ee264d7dd5 100644
--- a/drivers/gpu/drm/i915/i915_timeline.c
+++ b/drivers/gpu/drm/i915/i915_timeline.c
@@ -270,9 +270,6 @@ void i915_timelines_init(struct drm_i915_private *i915)
 
 	spin_lock_init(&gt->hwsp_lock);
 	INIT_LIST_HEAD(&gt->hwsp_free_list);
-
-	/* via i915_gem_wait_for_idle() */
-	i915_gem_shrinker_taints_mutex(i915, &gt->mutex);
 }
 
 static void timeline_add_to_active(struct i915_timeline *tl)
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 16/26] drm/i915: Throw away the active object retirement complexity
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (13 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 15/26] drm/i915: Remove waiting & retiring from shrinker paths Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 17/26] drm/i915: Provide an i915_active.acquire callback Chris Wilson
                   ` (14 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Matthew Auld

Remove the accumulated optimisations that we have for i915_vma_retire
and reduce it to the bare essential of tracking the active object
reference. This allows us to only use atomic operations, and so will be
able to avoid the struct_mutex requirement.

The principal loss here is the shrinker MRU bumping, so now if we have
to shrink, we will do so in a much more random order and are more likely
to try to shrink recently used objects. That is a nuisance, but shrinking
active objects is a second step we try to avoid and will always be a
system-wide performance issue.

The other loss here is in the automatic pruning of the
reservation_object when idling. This is not as large an issue as it was
when reservation_object was introduced, since adding new fences into the
object now replaces already signaled fences, keeping the array compact.
But we do
lose the auto-expiration of stale fences and unused arrays. That may be
a noticeable problem for which we need to re-implement autopruning.
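
With the per-object active_count gone, idleness for the shrinker is
instead judged from the reservation object; a minimal sketch of that
check (mirroring the i915_gem_shrink() hunk below):

  #include <linux/reservation.h>

  /* An object is idle for shrinking purposes once every fence, shared
   * and exclusive, on its reservation object has signaled.
   */
  static bool object_is_idle(struct reservation_object *resv)
  {
          return reservation_object_test_signaled_rcu(resv, true);
  }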

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  1 -
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  6 ---
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  1 -
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c  |  5 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c    |  9 ----
 drivers/gpu/drm/i915/gt/intel_lrc.c           |  4 +-
 drivers/gpu/drm/i915/gt/intel_ringbuffer.c    |  1 -
 drivers/gpu/drm/i915/gt/selftest_hangcheck.c  | 32 +++++------
 drivers/gpu/drm/i915/i915_debugfs.c           |  8 +--
 drivers/gpu/drm/i915/i915_gem_batch_pool.c    | 42 ++++++---------
 drivers/gpu/drm/i915/i915_vma.c               | 54 ++++---------------
 11 files changed, 47 insertions(+), 116 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 1b571fd26ed4..0a105bc29494 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -163,7 +163,6 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 
 		mutex_lock(&i915->drm.struct_mutex);
 
-		GEM_BUG_ON(i915_gem_object_is_active(obj));
 		list_for_each_entry_safe(vma, vn, &obj->vma.list, obj_link) {
 			GEM_BUG_ON(i915_vma_is_active(vma));
 			vma->flags &= ~I915_VMA_PIN_MASK;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 7cb1871d7128..454bfb498001 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -158,12 +158,6 @@ i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
 	return obj->ops->flags & I915_GEM_OBJECT_ASYNC_CANCEL;
 }
 
-static inline bool
-i915_gem_object_is_active(const struct drm_i915_gem_object *obj)
-{
-	return READ_ONCE(obj->active_count);
-}
-
 static inline bool
 i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 5b05698619ce..c299fed2c6b1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -156,7 +156,6 @@ struct drm_i915_gem_object {
 
 	/** Count of VMA actually bound by this object */
 	atomic_t bind_count;
-	unsigned int active_count;
 	/** Count of how many global VMA are currently pinned for use by HW */
 	unsigned int pin_global;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index 1bbc690494c7..6670fa2350c9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -229,8 +229,9 @@ i915_gem_shrink(struct drm_i915_private *i915,
 				continue;
 
 			if (!(shrink & I915_SHRINK_ACTIVE) &&
-			    (i915_gem_object_is_active(obj) ||
-			     i915_gem_object_is_framebuffer(obj)))
+			    (i915_gem_object_is_framebuffer(obj) ||
+			     !reservation_object_test_signaled_rcu(obj->resv,
+								   true)))
 				continue;
 
 			if (!(shrink & I915_SHRINK_BOUND) &&
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 5c81f4b4813a..2053194a8b70 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -474,15 +474,6 @@ static int igt_mmap_offset_exhaustion(void *arg)
 			pr_err("[loop %d] Failed to busy the object\n", loop);
 			goto err_obj;
 		}
-
-		/* NB we rely on the _active_ reference to access obj now */
-		GEM_BUG_ON(!i915_gem_object_is_active(obj));
-		err = create_mmap_offset(obj);
-		if (err) {
-			pr_err("[loop %d] create_mmap_offset failed with err=%d\n",
-			       loop, err);
-			goto out;
-		}
 	}
 
 out:
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index bbbdc63906c6..cd4cf4d0b30c 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1509,9 +1509,7 @@ static void execlists_submit_request(struct i915_request *request)
 static void __execlists_context_fini(struct intel_context *ce)
 {
 	intel_ring_put(ce->ring);
-
-	GEM_BUG_ON(i915_gem_object_is_active(ce->state->obj));
-	i915_gem_object_put(ce->state->obj);
+	i915_vma_put(ce->state);
 }
 
 static void execlists_context_destroy(struct kref *kref)
diff --git a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
index c6023bc9452d..26f33a08a32f 100644
--- a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
@@ -1309,7 +1309,6 @@ void intel_ring_free(struct kref *ref)
 
 static void __ring_context_fini(struct intel_context *ce)
 {
-	GEM_BUG_ON(i915_gem_object_is_active(ce->state->obj));
 	i915_gem_object_put(ce->state->obj);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index 1ee4c923044f..43b81084122d 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -129,33 +129,29 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
 {
 	struct drm_i915_private *i915 = h->i915;
 	struct i915_address_space *vm = h->ctx->vm ?: &i915->ggtt.vm;
+	struct drm_i915_gem_object *obj;
 	struct i915_request *rq = NULL;
 	struct i915_vma *hws, *vma;
 	unsigned int flags;
+	void *vaddr;
 	u32 *batch;
 	int err;
 
-	if (i915_gem_object_is_active(h->obj)) {
-		struct drm_i915_gem_object *obj;
-		void *vaddr;
-
-		obj = i915_gem_object_create_internal(h->i915, PAGE_SIZE);
-		if (IS_ERR(obj))
-			return ERR_CAST(obj);
+	obj = i915_gem_object_create_internal(h->i915, PAGE_SIZE);
+	if (IS_ERR(obj))
+		return ERR_CAST(obj);
 
-		vaddr = i915_gem_object_pin_map(obj,
-						i915_coherent_map_type(h->i915));
-		if (IS_ERR(vaddr)) {
-			i915_gem_object_put(obj);
-			return ERR_CAST(vaddr);
-		}
+	vaddr = i915_gem_object_pin_map(obj, i915_coherent_map_type(h->i915));
+	if (IS_ERR(vaddr)) {
+		i915_gem_object_put(obj);
+		return ERR_CAST(vaddr);
+	}
 
-		i915_gem_object_unpin_map(h->obj);
-		i915_gem_object_put(h->obj);
+	i915_gem_object_unpin_map(h->obj);
+	i915_gem_object_put(h->obj);
 
-		h->obj = obj;
-		h->batch = vaddr;
-	}
+	h->obj = obj;
+	h->batch = vaddr;
 
 	vma = i915_vma_instance(h->obj, vm, NULL);
 	if (IS_ERR(vma))
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 62cf34db9280..eeecdad0e3ca 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -75,11 +75,6 @@ static int i915_capabilities(struct seq_file *m, void *data)
 	return 0;
 }
 
-static char get_active_flag(struct drm_i915_gem_object *obj)
-{
-	return i915_gem_object_is_active(obj) ? '*' : ' ';
-}
-
 static char get_pin_flag(struct drm_i915_gem_object *obj)
 {
 	return obj->pin_global ? 'p' : ' ';
@@ -144,9 +139,8 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 	unsigned int frontbuffer_bits;
 	int pin_count = 0;
 
-	seq_printf(m, "%pK: %c%c%c%c%c %8zdKiB %02x %02x %s%s%s",
+	seq_printf(m, "%pK: %c%c%c%c %8zdKiB %02x %02x %s%s%s",
 		   &obj->base,
-		   get_active_flag(obj),
 		   get_pin_flag(obj),
 		   get_tiling_flag(obj),
 		   get_global_flag(obj),
diff --git a/drivers/gpu/drm/i915/i915_gem_batch_pool.c b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
index 56adfdcaed3e..1b7595e2ac21 100644
--- a/drivers/gpu/drm/i915/i915_gem_batch_pool.c
+++ b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
@@ -94,34 +94,26 @@ i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
 	list = &pool->cache_list[n];
 
 	list_for_each_entry(obj, list, batch_pool_link) {
+		struct reservation_object *resv = obj->resv;
+
 		/* The batches are strictly LRU ordered */
-		if (i915_gem_object_is_active(obj)) {
-			struct reservation_object *resv = obj->resv;
-
-			if (!reservation_object_test_signaled_rcu(resv, true))
-				break;
-
-			i915_retire_requests(pool->engine->i915);
-			GEM_BUG_ON(i915_gem_object_is_active(obj));
-
-			/*
-			 * The object is now idle, clear the array of shared
-			 * fences before we add a new request. Although, we
-			 * remain on the same engine, we may be on a different
-			 * timeline and so may continually grow the array,
-			 * trapping a reference to all the old fences, rather
-			 * than replace the existing fence.
-			 */
-			if (rcu_access_pointer(resv->fence)) {
-				reservation_object_lock(resv, NULL);
-				reservation_object_add_excl_fence(resv, NULL);
-				reservation_object_unlock(resv);
-			}
+		if (!reservation_object_test_signaled_rcu(resv, true))
+			break;
+
+		/*
+		 * The object is now idle, clear the array of shared
+		 * fences before we add a new request. Although, we
+		 * remain on the same engine, we may be on a different
+		 * timeline and so may continually grow the array,
+		 * trapping a reference to all the old fences, rather
+		 * than replace the existing fence.
+		 */
+		if (rcu_access_pointer(resv->fence)) {
+			reservation_object_lock(resv, NULL);
+			reservation_object_add_excl_fence(resv, NULL);
+			reservation_object_unlock(resv);
 		}
 
-		GEM_BUG_ON(!reservation_object_test_signaled_rcu(obj->resv,
-								 true));
-
 		if (obj->base.size >= size)
 			goto found;
 	}
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 5fc0fda32e2a..afa6399066f5 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -77,43 +77,11 @@ static void vma_print_allocator(struct i915_vma *vma, const char *reason)
 
 #endif
 
-static void obj_bump_mru(struct drm_i915_gem_object *obj)
-{
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	unsigned long flags;
-
-	spin_lock_irqsave(&i915->mm.obj_lock, flags);
-	list_move_tail(&obj->mm.link, &i915->mm.shrink_list);
-	spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
-
-	obj->mm.dirty = true; /* be paranoid  */
-}
-
 static void __i915_vma_retire(struct i915_active *ref)
 {
 	struct i915_vma *vma = container_of(ref, typeof(*vma), active);
-	struct drm_i915_gem_object *obj = vma->obj;
-
-	GEM_BUG_ON(!i915_gem_object_is_active(obj));
-	if (--obj->active_count)
-		return;
-
-	/* Prune the shared fence arrays iff completely idle (inc. external) */
-	if (reservation_object_trylock(obj->resv)) {
-		if (reservation_object_test_signaled_rcu(obj->resv, true))
-			reservation_object_add_excl_fence(obj->resv, NULL);
-		reservation_object_unlock(obj->resv);
-	}
 
-	/*
-	 * Bump our place on the bound list to keep it roughly in LRU order
-	 * so that we don't steal from recently used but inactive objects
-	 * (unless we are forced to ofc!)
-	 */
-	if (i915_gem_object_is_shrinkable(obj))
-		obj_bump_mru(obj);
-
-	i915_gem_object_put(obj); /* and drop the active reference */
+	i915_vma_put(vma);
 }
 
 static struct i915_vma *
@@ -921,6 +889,7 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 			    unsigned int flags)
 {
 	struct drm_i915_gem_object *obj = vma->obj;
+	int err;
 
 	assert_vma_held(vma);
 	assert_object_held(obj);
@@ -934,17 +903,13 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 	 * add the active reference first and queue for it to be dropped
 	 * *last*.
 	 */
-	if (!vma->active.count && !obj->active_count++)
-		i915_gem_object_get(obj); /* once more for the active ref */
-
-	if (unlikely(i915_active_ref(&vma->active, rq->fence.context, rq))) {
-		if (!vma->active.count && !--obj->active_count)
-			i915_gem_object_put(obj);
-		return -ENOMEM;
-	}
+	if (i915_active_acquire(&vma->active))
+		i915_vma_get(vma);
 
-	GEM_BUG_ON(!i915_vma_is_active(vma));
-	GEM_BUG_ON(!obj->active_count);
+	err = i915_active_ref(&vma->active, rq->fence.context, rq);
+	i915_active_release(&vma->active);
+	if (unlikely(err))
+		return err;
 
 	obj->write_domain = 0;
 	if (flags & EXEC_OBJECT_WRITE) {
@@ -956,11 +921,14 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 		obj->read_domains = 0;
 	}
 	obj->read_domains |= I915_GEM_GPU_DOMAINS;
+	obj->mm.dirty = true;
 
 	if (flags & EXEC_OBJECT_NEEDS_FENCE)
 		__i915_active_request_set(&vma->last_fence, rq);
 
 	export_fence(vma, rq, flags);
+
+	GEM_BUG_ON(!i915_vma_is_active(vma));
 	return 0;
 }
 
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 17/26] drm/i915: Provide an i915_active.acquire callback
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (14 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 16/26] drm/i915: Throw away the active object retirement complexity Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 18/26] drm/i915: Push the i915_active.retire into a worker Chris Wilson
                   ` (13 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Matthew Auld

If we introduce a callback for i915_active that is only called the first
time we use the i915_active and is symmetrically paired with the
i915_active.retire callback, we can replace the open-coded and
non-atomic implementations -- which will be very fragile (i.e. broken)
upon removing the struct_mutex serialisation.
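
The resulting pattern for a user is symmetric around first-acquire and
final-retire; a minimal sketch (foo is a placeholder, the real
conversions are in the hunks below):

	static int __foo_active(struct i915_active *ref)
	{
		/* runs once on the first i915_active_acquire() after idle */
		return foo_pin(container_of(ref, struct foo, active));
	}

	static void __foo_retire(struct i915_active *ref)
	{
		/* runs once when the last active reference is retired */
		foo_unpin(container_of(ref, struct foo, active));
	}

	/* at construction time */
	i915_active_init(i915, &foo->active, __foo_active, __foo_retire);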

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c  |   8 +-
 drivers/gpu/drm/i915/gt/intel_context.c      |  83 ++++---
 drivers/gpu/drm/i915/gt/intel_context.h      |  14 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c          |   6 +-
 drivers/gpu/drm/i915/gt/intel_ringbuffer.c   |   2 +-
 drivers/gpu/drm/i915/gt/mock_engine.c        |   2 +-
 drivers/gpu/drm/i915/i915_active.c           | 226 +++++++++----------
 drivers/gpu/drm/i915/i915_active.h           |  25 +-
 drivers/gpu/drm/i915/i915_active_types.h     |  10 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c          |   2 +-
 drivers/gpu/drm/i915/i915_timeline.c         |  16 +-
 drivers/gpu/drm/i915/i915_vma.c              |  22 +-
 drivers/gpu/drm/i915/selftests/i915_active.c |  53 ++++-
 13 files changed, 265 insertions(+), 204 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 35871c8a42a6..911f3564972d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -923,8 +923,12 @@ static int context_barrier_task(struct i915_gem_context *ctx,
 	if (!cb)
 		return -ENOMEM;
 
-	i915_active_init(i915, &cb->base, cb_retire);
-	i915_active_acquire(&cb->base);
+	i915_active_init(i915, &cb->base, NULL, cb_retire);
+	err = i915_active_acquire(&cb->base);
+	if (err) {
+		kfree(cb);
+		return err;
+	}
 
 	for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
 		struct i915_request *rq;
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 2c454f227c2e..20c708ae6dc0 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -95,11 +95,15 @@ void intel_context_unpin(struct intel_context *ce)
 	intel_context_put(ce);
 }
 
-static int __context_pin_state(struct i915_vma *vma, unsigned long flags)
+static int __context_pin_state(struct i915_vma *vma)
 {
+	u64 flags;
 	int err;
 
-	err = i915_vma_pin(vma, 0, 0, flags | PIN_GLOBAL);
+	flags = i915_ggtt_pin_bias(vma) | PIN_OFFSET_BIAS;
+	flags |= PIN_HIGH | PIN_GLOBAL;
+
+	err = i915_vma_pin(vma, 0, 0, flags);
 	if (err)
 		return err;
 
@@ -119,7 +123,7 @@ static void __context_unpin_state(struct i915_vma *vma)
 	__i915_vma_unpin(vma);
 }
 
-static void intel_context_retire(struct i915_active *active)
+static void __intel_context_retire(struct i915_active *active)
 {
 	struct intel_context *ce = container_of(active, typeof(*ce), active);
 
@@ -129,65 +133,58 @@ static void intel_context_retire(struct i915_active *active)
 	intel_context_put(ce);
 }
 
-void
-intel_context_init(struct intel_context *ce,
-		   struct i915_gem_context *ctx,
-		   struct intel_engine_cs *engine)
-{
-	GEM_BUG_ON(!engine->cops);
-
-	kref_init(&ce->ref);
-
-	ce->gem_context = ctx;
-	ce->engine = engine;
-	ce->ops = engine->cops;
-	ce->sseu = engine->sseu;
-
-	INIT_LIST_HEAD(&ce->signal_link);
-	INIT_LIST_HEAD(&ce->signals);
-
-	mutex_init(&ce->pin_mutex);
-
-	i915_active_init(ctx->i915, &ce->active, intel_context_retire);
-}
-
-int intel_context_active_acquire(struct intel_context *ce, unsigned long flags)
+static int __intel_context_active(struct i915_active *active)
 {
+	struct intel_context *ce = container_of(active, typeof(*ce), active);
 	int err;
 
-	if (!i915_active_acquire(&ce->active))
-		return 0;
-
 	intel_context_get(ce);
 
 	if (!ce->state)
 		return 0;
 
-	err = __context_pin_state(ce->state, flags);
-	if (err) {
-		i915_active_cancel(&ce->active);
-		intel_context_put(ce);
-		return err;
-	}
+	err = __context_pin_state(ce->state);
+	if (err)
+		goto err_put;
 
 	/* Preallocate tracking nodes */
 	if (!i915_gem_context_is_kernel(ce->gem_context)) {
 		err = i915_active_acquire_preallocate_barrier(&ce->active,
 							      ce->engine);
-		if (err) {
-			i915_active_release(&ce->active);
-			return err;
-		}
+		if (err)
+			goto err_unpin;
 	}
 
 	return 0;
+
+err_unpin:
+	__context_unpin_state(ce->state);
+err_put:
+	intel_context_put(ce);
+	return err;
 }
 
-void intel_context_active_release(struct intel_context *ce)
+void
+intel_context_init(struct intel_context *ce,
+		   struct i915_gem_context *ctx,
+		   struct intel_engine_cs *engine)
 {
-	/* Nodes preallocated in intel_context_active() */
-	i915_active_acquire_barrier(&ce->active);
-	i915_active_release(&ce->active);
+	GEM_BUG_ON(!engine->cops);
+
+	kref_init(&ce->ref);
+
+	ce->gem_context = ctx;
+	ce->engine = engine;
+	ce->ops = engine->cops;
+	ce->sseu = engine->sseu;
+
+	INIT_LIST_HEAD(&ce->signal_link);
+	INIT_LIST_HEAD(&ce->signals);
+
+	mutex_init(&ce->pin_mutex);
+
+	i915_active_init(ctx->i915, &ce->active,
+			 __intel_context_active, __intel_context_retire);
 }
 
 static void i915_global_context_shrink(void)
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index a47275bc4f01..40cd8320fcc3 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -9,6 +9,7 @@
 
 #include <linux/lockdep.h>
 
+#include "i915_active.h"
 #include "intel_context_types.h"
 #include "intel_engine_types.h"
 
@@ -102,8 +103,17 @@ static inline void intel_context_exit(struct intel_context *ce)
 		ce->ops->exit(ce);
 }
 
-int intel_context_active_acquire(struct intel_context *ce, unsigned long flags);
-void intel_context_active_release(struct intel_context *ce);
+static inline int intel_context_active_acquire(struct intel_context *ce)
+{
+	return i915_active_acquire(&ce->active);
+}
+
+static inline void intel_context_active_release(struct intel_context *ce)
+{
+	/* Nodes preallocated in intel_context_active() */
+	i915_active_acquire_barrier(&ce->active);
+	i915_active_release(&ce->active);
+}
 
 static inline struct intel_context *intel_context_get(struct intel_context *ce)
 {
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index cd4cf4d0b30c..5dbc43c70496 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1565,12 +1565,10 @@ __execlists_context_pin(struct intel_context *ce,
 		goto err;
 	GEM_BUG_ON(!ce->state);
 
-	ret = intel_context_active_acquire(ce,
-					   engine->i915->ggtt.pin_bias |
-					   PIN_OFFSET_BIAS |
-					   PIN_HIGH);
+	ret = intel_context_active_acquire(ce);
 	if (ret)
 		goto err;
+	GEM_BUG_ON(!i915_vma_is_pinned(ce->state));
 
 	vaddr = i915_gem_object_pin_map(ce->state->obj,
 					i915_coherent_map_type(engine->i915) |
diff --git a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
index 26f33a08a32f..5292ddac4d74 100644
--- a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
@@ -1437,7 +1437,7 @@ static int ring_context_pin(struct intel_context *ce)
 		ce->state = vma;
 	}
 
-	err = intel_context_active_acquire(ce, PIN_HIGH);
+	err = intel_context_active_acquire(ce);
 	if (err)
 		return err;
 
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
index 086801b51441..b9c2764beca3 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -154,7 +154,7 @@ static int mock_context_pin(struct intel_context *ce)
 			return -ENOMEM;
 	}
 
-	ret = intel_context_active_acquire(ce, PIN_HIGH);
+	ret = intel_context_active_acquire(ce);
 	if (ret)
 		return ret;
 
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index e716baab276f..3070fccbc08b 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -39,7 +39,7 @@ static void *active_debug_hint(void *addr)
 {
 	struct i915_active *ref = addr;
 
-	return (void *)ref->retire ?: (void *)ref;
+	return (void *)ref->active ?: (void *)ref->retire ?: (void *)ref;
 }
 
 static struct debug_obj_descr active_debug_desc = {
@@ -97,50 +97,51 @@ static void debug_active_assert(struct i915_active *ref)
 #endif
 
 static void
-__active_park(struct i915_active *ref)
+active_retire(struct i915_active *ref)
 {
 	struct active_node *it, *n;
+	struct rb_root root;
+	bool retire = false;
 
-	rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
-		GEM_BUG_ON(i915_active_request_isset(&it->base));
-		kmem_cache_free(global.slab_cache, it);
-	}
-	ref->tree = RB_ROOT;
-}
-
-static void
-__active_retire(struct i915_active *ref)
-{
-	GEM_BUG_ON(!ref->count);
-	if (--ref->count)
+	GEM_BUG_ON(!atomic_read(&ref->count));
+	if (atomic_add_unless(&ref->count, -1, 1))
 		return;
 
-	debug_active_deactivate(ref);
+	/* One active may be flushed from inside the acquire of another */
+	mutex_lock_nested(&ref->mutex, SINGLE_DEPTH_NESTING);
+
+	/* return the unused nodes to our slabcache -- flushing the allocator */
+	if (atomic_dec_and_test(&ref->count)) {
+		debug_active_deactivate(ref);
+		root = ref->tree;
+		ref->tree = RB_ROOT;
+		ref->cache = NULL;
+		retire = true;
+	}
 
-	/* return the unused nodes to our slabcache */
-	__active_park(ref);
+	mutex_unlock(&ref->mutex);
+	if (!retire)
+		return;
 
 	ref->retire(ref);
-}
 
-static void
-node_retire(struct i915_active_request *base, struct i915_request *rq)
-{
-	__active_retire(container_of(base, struct active_node, base)->ref);
+	rbtree_postorder_for_each_entry_safe(it, n, &root, node) {
+		GEM_BUG_ON(i915_active_request_isset(&it->base));
+		kmem_cache_free(global.slab_cache, it);
+	}
 }
 
 static void
-last_retire(struct i915_active_request *base, struct i915_request *rq)
+node_retire(struct i915_active_request *base, struct i915_request *rq)
 {
-	__active_retire(container_of(base, struct i915_active, last));
+	active_retire(container_of(base, struct active_node, base)->ref);
 }
 
 static struct i915_active_request *
 active_instance(struct i915_active *ref, u64 idx)
 {
-	struct active_node *node;
+	struct active_node *node, *prealloc;
 	struct rb_node **p, *parent;
-	struct i915_request *old;
 
 	/*
 	 * We track the most recently used timeline to skip a rbtree search
@@ -148,20 +149,18 @@ active_instance(struct i915_active *ref, u64 idx)
 	 * at all. We can reuse the last slot if it is empty, that is
 	 * after the previous activity has been retired, or if it matches the
 	 * current timeline.
-	 *
-	 * Note that we allow the timeline to be active simultaneously in
-	 * the rbtree and the last cache. We do this to avoid having
-	 * to search and replace the rbtree element for a new timeline, with
-	 * the cost being that we must be aware that the ref may be retired
-	 * twice for the same timeline (as the older rbtree element will be
-	 * retired before the new request added to last).
 	 */
-	old = i915_active_request_raw(&ref->last, BKL(ref));
-	if (!old || old->fence.context == idx)
-		goto out;
+	node = READ_ONCE(ref->cache);
+	if (node && node->timeline == idx)
+		return &node->base;
+
+	/* Preallocate a replacement, just in case */
+	prealloc = kmem_cache_alloc(global.slab_cache, GFP_KERNEL);
+	if (!prealloc)
+		return NULL;
 
-	/* Move the currently active fence into the rbtree */
-	idx = old->fence.context;
+	mutex_lock(&ref->mutex);
+	GEM_BUG_ON(i915_active_is_idle(ref));
 
 	parent = NULL;
 	p = &ref->tree.rb_node;
@@ -169,8 +168,10 @@ active_instance(struct i915_active *ref, u64 idx)
 		parent = *p;
 
 		node = rb_entry(parent, struct active_node, node);
-		if (node->timeline == idx)
-			goto replace;
+		if (node->timeline == idx) {
+			kmem_cache_free(global.slab_cache, prealloc);
+			goto out;
+		}
 
 		if (node->timeline < idx)
 			p = &parent->rb_right;
@@ -178,17 +179,7 @@ active_instance(struct i915_active *ref, u64 idx)
 			p = &parent->rb_left;
 	}
 
-	node = kmem_cache_alloc(global.slab_cache, GFP_KERNEL);
-
-	/* kmalloc may retire the ref->last (thanks shrinker)! */
-	if (unlikely(!i915_active_request_raw(&ref->last, BKL(ref)))) {
-		kmem_cache_free(global.slab_cache, node);
-		goto out;
-	}
-
-	if (unlikely(!node))
-		return ERR_PTR(-ENOMEM);
-
+	node = prealloc;
 	i915_active_request_init(&node->base, NULL, node_retire);
 	node->ref = ref;
 	node->timeline = idx;
@@ -196,40 +187,29 @@ active_instance(struct i915_active *ref, u64 idx)
 	rb_link_node(&node->node, parent, p);
 	rb_insert_color(&node->node, &ref->tree);
 
-replace:
-	/*
-	 * Overwrite the previous active slot in the rbtree with last,
-	 * leaving last zeroed. If the previous slot is still active,
-	 * we must be careful as we now only expect to receive one retire
-	 * callback not two, and so much undo the active counting for the
-	 * overwritten slot.
-	 */
-	if (i915_active_request_isset(&node->base)) {
-		/* Retire ourselves from the old rq->active_list */
-		__list_del_entry(&node->base.link);
-		ref->count--;
-		GEM_BUG_ON(!ref->count);
-	}
-	GEM_BUG_ON(list_empty(&ref->last.link));
-	list_replace_init(&ref->last.link, &node->base.link);
-	node->base.request = fetch_and_zero(&ref->last.request);
-
 out:
-	return &ref->last;
+	ref->cache = node;
+	mutex_unlock(&ref->mutex);
+
+	return &node->base;
 }
 
-void i915_active_init(struct drm_i915_private *i915,
-		      struct i915_active *ref,
-		      void (*retire)(struct i915_active *ref))
+void __i915_active_init(struct drm_i915_private *i915,
+			struct i915_active *ref,
+			int (*active)(struct i915_active *ref),
+			void (*retire)(struct i915_active *ref),
+			struct lock_class_key *key)
 {
 	debug_active_init(ref);
 
 	ref->i915 = i915;
+	ref->active = active;
 	ref->retire = retire;
 	ref->tree = RB_ROOT;
-	i915_active_request_init(&ref->last, NULL, last_retire);
+	ref->cache = NULL;
 	init_llist_head(&ref->barriers);
-	ref->count = 0;
+	atomic_set(&ref->count, 0);
+	__mutex_init(&ref->mutex, "i915_active", key);
 }
 
 int i915_active_ref(struct i915_active *ref,
@@ -237,68 +217,84 @@ int i915_active_ref(struct i915_active *ref,
 		    struct i915_request *rq)
 {
 	struct i915_active_request *active;
-	int err = 0;
+	int err;
 
 	/* Prevent reaping in case we malloc/wait while building the tree */
-	i915_active_acquire(ref);
+	err = i915_active_acquire(ref);
+	if (err)
+		return err;
 
 	active = active_instance(ref, timeline);
-	if (IS_ERR(active)) {
-		err = PTR_ERR(active);
+	if (!active) {
+		err = -ENOMEM;
 		goto out;
 	}
 
 	if (!i915_active_request_isset(active))
-		ref->count++;
+		atomic_inc(&ref->count);
 	__i915_active_request_set(active, rq);
 
-	GEM_BUG_ON(!ref->count);
 out:
 	i915_active_release(ref);
 	return err;
 }
 
-bool i915_active_acquire(struct i915_active *ref)
+int i915_active_acquire(struct i915_active *ref)
 {
+	int err;
+
 	debug_active_assert(ref);
-	lockdep_assert_held(BKL(ref));
+	if (atomic_add_unless(&ref->count, 1, 0))
+		return 0;
 
-	if (ref->count++)
-		return false;
+	err = mutex_lock_interruptible(&ref->mutex);
+	if (err)
+		return err;
+
+	if (!atomic_read(&ref->count) && ref->active)
+		err = ref->active(ref);
+	if (!err) {
+		debug_active_activate(ref);
+		atomic_inc(&ref->count);
+	}
 
-	debug_active_activate(ref);
-	return true;
+	mutex_unlock(&ref->mutex);
+
+	return err;
 }
 
 void i915_active_release(struct i915_active *ref)
 {
 	debug_active_assert(ref);
-	lockdep_assert_held(BKL(ref));
-
-	__active_retire(ref);
+	active_retire(ref);
 }
 
 int i915_active_wait(struct i915_active *ref)
 {
 	struct active_node *it, *n;
-	int ret = 0;
+	int err;
 
-	if (i915_active_acquire(ref))
-		goto out_release;
+	might_sleep();
+	if (RB_EMPTY_ROOT(&ref->tree))
+		return 0;
 
-	ret = i915_active_request_retire(&ref->last, BKL(ref));
-	if (ret)
-		goto out_release;
+	err = mutex_lock_interruptible(&ref->mutex);
+	if (err)
+		return err;
+
+	if (!atomic_add_unless(&ref->count, 1, 0))
+		goto unlock;
 
 	rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
-		ret = i915_active_request_retire(&it->base, BKL(ref));
-		if (ret)
+		err = i915_active_request_retire(&it->base, BKL(ref));
+		if (err)
 			break;
 	}
 
-out_release:
-	i915_active_release(ref);
-	return ret;
+	active_retire(ref);
+unlock:
+	mutex_unlock(&ref->mutex);
+	return err;
 }
 
 int i915_request_await_active_request(struct i915_request *rq,
@@ -313,23 +309,24 @@ int i915_request_await_active_request(struct i915_request *rq,
 int i915_request_await_active(struct i915_request *rq, struct i915_active *ref)
 {
 	struct active_node *it, *n;
-	int err = 0;
+	int err;
 
-	/* await allocates and so we need to avoid hitting the shrinker */
-	if (i915_active_acquire(ref))
-		goto out; /* was idle */
+	if (RB_EMPTY_ROOT(&ref->tree))
+		return 0;
 
-	err = i915_request_await_active_request(rq, &ref->last);
+	/* await allocates and so we need to avoid hitting the shrinker */
+	err = i915_active_acquire(ref);
 	if (err)
-		goto out;
+		return err;
 
+	mutex_lock(&ref->mutex);
 	rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
 		err = i915_request_await_active_request(rq, &it->base);
 		if (err)
-			goto out;
+			break;
 	}
+	mutex_unlock(&ref->mutex);
 
-out:
 	i915_active_release(ref);
 	return err;
 }
@@ -338,9 +335,9 @@ int i915_request_await_active(struct i915_request *rq, struct i915_active *ref)
 void i915_active_fini(struct i915_active *ref)
 {
 	debug_active_fini(ref);
-	GEM_BUG_ON(i915_active_request_isset(&ref->last));
 	GEM_BUG_ON(!RB_EMPTY_ROOT(&ref->tree));
-	GEM_BUG_ON(ref->count);
+	GEM_BUG_ON(atomic_read(&ref->count));
+	mutex_destroy(&ref->mutex);
 }
 #endif
 
@@ -367,7 +364,7 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 					 (void *)engine, node_retire);
 		node->timeline = kctx->ring->timeline->fence_context;
 		node->ref = ref;
-		ref->count++;
+		atomic_inc(&ref->count);
 
 		intel_engine_pm_get(engine);
 		llist_add((struct llist_node *)&node->base.link,
@@ -394,8 +391,9 @@ void i915_active_acquire_barrier(struct i915_active *ref)
 {
 	struct llist_node *pos, *next;
 
-	i915_active_acquire(ref);
+	GEM_BUG_ON(i915_active_is_idle(ref));
 
+	mutex_lock_nested(&ref->mutex, SINGLE_DEPTH_NESTING);
 	llist_for_each_safe(pos, next, llist_del_all(&ref->barriers)) {
 		struct intel_engine_cs *engine;
 		struct active_node *node;
@@ -425,7 +423,7 @@ void i915_active_acquire_barrier(struct i915_active *ref)
 			  &engine->barrier_tasks);
 		intel_engine_pm_put(engine);
 	}
-	i915_active_release(ref);
+	mutex_unlock(&ref->mutex);
 }
 
 void i915_request_add_barriers(struct i915_request *rq)
diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
index c14eebf6d074..134166d31251 100644
--- a/drivers/gpu/drm/i915/i915_active.h
+++ b/drivers/gpu/drm/i915/i915_active.h
@@ -369,9 +369,16 @@ i915_active_request_retire(struct i915_active_request *active,
  * synchronisation.
  */
 
-void i915_active_init(struct drm_i915_private *i915,
-		      struct i915_active *ref,
-		      void (*retire)(struct i915_active *ref));
+void __i915_active_init(struct drm_i915_private *i915,
+			struct i915_active *ref,
+			int (*active)(struct i915_active *ref),
+			void (*retire)(struct i915_active *ref),
+			struct lock_class_key *key);
+#define i915_active_init(i915, ref, active, retire) do {		\
+	static struct lock_class_key __key;				\
+									\
+	__i915_active_init(i915, ref, active, retire, &__key);		\
+} while (0)
 
 int i915_active_ref(struct i915_active *ref,
 		    u64 timeline,
@@ -384,20 +391,14 @@ int i915_request_await_active(struct i915_request *rq,
 int i915_request_await_active_request(struct i915_request *rq,
 				      struct i915_active_request *active);
 
-bool i915_active_acquire(struct i915_active *ref);
-
-static inline void i915_active_cancel(struct i915_active *ref)
-{
-	GEM_BUG_ON(ref->count != 1);
-	ref->count = 0;
-}
-
+int i915_active_acquire(struct i915_active *ref);
 void i915_active_release(struct i915_active *ref);
+void __i915_active_release_nested(struct i915_active *ref, int subclass);
 
 static inline bool
 i915_active_is_idle(const struct i915_active *ref)
 {
-	return !ref->count;
+	return !atomic_read(&ref->count);
 }
 
 #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
diff --git a/drivers/gpu/drm/i915/i915_active_types.h b/drivers/gpu/drm/i915/i915_active_types.h
index c025991b9233..5b0a3024ce24 100644
--- a/drivers/gpu/drm/i915/i915_active_types.h
+++ b/drivers/gpu/drm/i915/i915_active_types.h
@@ -7,7 +7,9 @@
 #ifndef _I915_ACTIVE_TYPES_H_
 #define _I915_ACTIVE_TYPES_H_
 
+#include <linux/atomic.h>
 #include <linux/llist.h>
+#include <linux/mutex.h>
 #include <linux/rbtree.h>
 #include <linux/rcupdate.h>
 
@@ -24,13 +26,17 @@ struct i915_active_request {
 	i915_active_retire_fn retire;
 };
 
+struct active_node;
+
 struct i915_active {
 	struct drm_i915_private *i915;
 
+	struct active_node *cache;
 	struct rb_root tree;
-	struct i915_active_request last;
-	unsigned int count;
+	struct mutex mutex;
+	atomic_t count;
 
+	int (*active)(struct i915_active *ref);
 	void (*retire)(struct i915_active *ref);
 
 	struct llist_head barriers;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 8ab820145ea6..ee22c34d071a 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2049,7 +2049,7 @@ static struct i915_vma *pd_vma_create(struct gen6_ppgtt *ppgtt, int size)
 	if (!vma)
 		return ERR_PTR(-ENOMEM);
 
-	i915_active_init(i915, &vma->active, NULL);
+	i915_active_init(i915, &vma->active, NULL, NULL);
 	INIT_ACTIVE_REQUEST(&vma->last_fence);
 
 	vma->vm = &ggtt->vm;
diff --git a/drivers/gpu/drm/i915/i915_timeline.c b/drivers/gpu/drm/i915/i915_timeline.c
index f3ee264d7dd5..3ea1b881a4ae 100644
--- a/drivers/gpu/drm/i915/i915_timeline.c
+++ b/drivers/gpu/drm/i915/i915_timeline.c
@@ -148,6 +148,15 @@ static void __cacheline_retire(struct i915_active *active)
 		__idle_cacheline_free(cl);
 }
 
+static int __cacheline_active(struct i915_active *active)
+{
+	struct i915_timeline_cacheline *cl =
+		container_of(active, typeof(*cl), active);
+
+	__i915_vma_pin(cl->hwsp->vma);
+	return 0;
+}
+
 static struct i915_timeline_cacheline *
 cacheline_alloc(struct i915_timeline_hwsp *hwsp, unsigned int cacheline)
 {
@@ -170,15 +179,16 @@ cacheline_alloc(struct i915_timeline_hwsp *hwsp, unsigned int cacheline)
 	cl->hwsp = hwsp;
 	cl->vaddr = page_pack_bits(vaddr, cacheline);
 
-	i915_active_init(hwsp_to_i915(hwsp), &cl->active, __cacheline_retire);
+	i915_active_init(hwsp_to_i915(hwsp), &cl->active,
+			 __cacheline_active, __cacheline_retire);
 
 	return cl;
 }
 
 static void cacheline_acquire(struct i915_timeline_cacheline *cl)
 {
-	if (cl && i915_active_acquire(&cl->active))
-		__i915_vma_pin(cl->hwsp->vma);
+	if (cl)
+		i915_active_acquire(&cl->active);
 }
 
 static void cacheline_release(struct i915_timeline_cacheline *cl)
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index afa6399066f5..ece5392c4f05 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -77,11 +77,20 @@ static void vma_print_allocator(struct i915_vma *vma, const char *reason)
 
 #endif
 
-static void __i915_vma_retire(struct i915_active *ref)
+static inline struct i915_vma *active_to_vma(struct i915_active *ref)
 {
-	struct i915_vma *vma = container_of(ref, typeof(*vma), active);
+	return container_of(ref, typeof(struct i915_vma), active);
+}
 
-	i915_vma_put(vma);
+static int __i915_vma_active(struct i915_active *ref)
+{
+	i915_vma_get(active_to_vma(ref));
+	return 0;
+}
+
+static void __i915_vma_retire(struct i915_active *ref)
+{
+	i915_vma_put(active_to_vma(ref));
 }
 
 static struct i915_vma *
@@ -106,7 +115,8 @@ vma_create(struct drm_i915_gem_object *obj,
 	vma->size = obj->base.size;
 	vma->display_alignment = I915_GTT_MIN_ALIGNMENT;
 
-	i915_active_init(vm->i915, &vma->active, __i915_vma_retire);
+	i915_active_init(vm->i915, &vma->active,
+			 __i915_vma_active, __i915_vma_retire);
 	INIT_ACTIVE_REQUEST(&vma->last_fence);
 
 	INIT_LIST_HEAD(&vma->closed_link);
@@ -903,11 +913,7 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 	 * add the active reference first and queue for it to be dropped
 	 * *last*.
 	 */
-	if (i915_active_acquire(&vma->active))
-		i915_vma_get(vma);
-
 	err = i915_active_ref(&vma->active, rq->fence.context, rq);
-	i915_active_release(&vma->active);
 	if (unlikely(err))
 		return err;
 
diff --git a/drivers/gpu/drm/i915/selftests/i915_active.c b/drivers/gpu/drm/i915/selftests/i915_active.c
index 98493bcc91f2..84fce379c0de 100644
--- a/drivers/gpu/drm/i915/selftests/i915_active.c
+++ b/drivers/gpu/drm/i915/selftests/i915_active.c
@@ -4,6 +4,8 @@
  * Copyright © 2018 Intel Corporation
  */
 
+#include <linux/kref.h>
+
 #include "gem/i915_gem_pm.h"
 
 #include "i915_selftest.h"
@@ -13,20 +15,47 @@
 
 struct live_active {
 	struct i915_active base;
+	struct kref ref;
 	bool retired;
 };
 
+static void __live_get(struct live_active *active)
+{
+	kref_get(&active->ref);
+}
+
 static void __live_free(struct live_active *active)
 {
 	i915_active_fini(&active->base);
 	kfree(active);
 }
 
+static void __live_release(struct kref *ref)
+{
+	struct live_active *active = container_of(ref, typeof(*active), ref);
+
+	__live_free(active);
+}
+
+static void __live_put(struct live_active *active)
+{
+	kref_put(&active->ref, __live_release);
+}
+
+static int __live_active(struct i915_active *base)
+{
+	struct live_active *active = container_of(base, typeof(*active), base);
+
+	__live_get(active);
+	return 0;
+}
+
 static void __live_retire(struct i915_active *base)
 {
 	struct live_active *active = container_of(base, typeof(*active), base);
 
 	active->retired = true;
+	__live_put(active);
 }
 
 static struct live_active *__live_alloc(struct drm_i915_private *i915)
@@ -37,7 +66,8 @@ static struct live_active *__live_alloc(struct drm_i915_private *i915)
 	if (!active)
 		return NULL;
 
-	i915_active_init(i915, &active->base, __live_retire);
+	kref_init(&active->ref);
+	i915_active_init(i915, &active->base, __live_active, __live_retire);
 
 	return active;
 }
@@ -62,11 +92,9 @@ __live_active_setup(struct drm_i915_private *i915)
 		return ERR_PTR(-ENOMEM);
 	}
 
-	if (!i915_active_acquire(&active->base)) {
-		pr_err("First i915_active_acquire should report being idle\n");
-		err = -EINVAL;
+	err = i915_active_acquire(&active->base);
+	if (err)
 		goto out;
-	}
 
 	for_each_engine(engine, i915, id) {
 		struct i915_request *rq;
@@ -97,18 +125,21 @@ __live_active_setup(struct drm_i915_private *i915)
 		pr_err("i915_active retired before submission!\n");
 		err = -EINVAL;
 	}
-	if (active->base.count != count) {
+	if (atomic_read(&active->base.count) != count) {
 		pr_err("i915_active not tracking all requests, found %d, expected %d\n",
-		       active->base.count, count);
+		       atomic_read(&active->base.count), count);
 		err = -EINVAL;
 	}
 
 out:
 	i915_sw_fence_commit(submit);
 	heap_fence_put(submit);
+	if (err) {
+		__live_put(active);
+		active = ERR_PTR(err);
+	}
 
-	/* XXX leaks live_active on error */
-	return err ? ERR_PTR(err) : active;
+	return active;
 }
 
 static int live_active_wait(void *arg)
@@ -135,7 +166,7 @@ static int live_active_wait(void *arg)
 		err = -EINVAL;
 	}
 
-	__live_free(active);
+	__live_put(active);
 
 	if (igt_flush_test(i915, I915_WAIT_LOCKED))
 		err = -EIO;
@@ -174,7 +205,7 @@ static int live_active_retire(void *arg)
 		err = -EINVAL;
 	}
 
-	__live_free(active);
+	__live_put(active);
 
 err:
 	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 18/26] drm/i915: Push the i915_active.retire into a worker
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (15 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 17/26] drm/i915: Provide an i915_active.acquire callback Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 19/26] drm/i915/overlay: Switch to using i915_active tracking Chris Wilson
                   ` (12 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Matthew Auld

As we need to use a mutex to serialise i915_active activation
(because we want to allow the activation callback to sleep), we need to
push the i915_active.retire into a worker callback in case we need to
retire from an atomic context.
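
The deferral itself is just a trylock-or-queue decision on the release
path; restating the core of the new active_retire() below for clarity:

	if (ref->flags & I915_ACTIVE_RETIRE_SLEEPS ||
	    !mutex_trylock(&ref->mutex)) {
		/* e.g. called from fence signaling in irq context */
		queue_work(system_unbound_wq, &ref->work);
		return;
	}

	__active_retire(ref); /* retires under, then releases, ref->mutex */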

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c |  1 +
 drivers/gpu/drm/i915/gt/intel_context.c     |  1 +
 drivers/gpu/drm/i915/i915_active.c          | 72 ++++++++++++++++-----
 drivers/gpu/drm/i915/i915_active_types.h    | 12 ++++
 drivers/gpu/drm/i915/i915_timeline.c        |  1 +
 drivers/gpu/drm/i915/i915_vma.c             |  3 +-
 6 files changed, 74 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 911f3564972d..55b8f0e04846 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -891,6 +891,7 @@ struct context_barrier_task {
 	void *data;
 };
 
+__i915_active_call
 static void cb_retire(struct i915_active *base)
 {
 	struct context_barrier_task *cb = container_of(base, typeof(*cb), base);
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 20c708ae6dc0..70d3b2568227 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -123,6 +123,7 @@ static void __context_unpin_state(struct i915_vma *vma)
 	__i915_vma_unpin(vma);
 }
 
+__i915_active_call
 static void __intel_context_retire(struct i915_active *active)
 {
 	struct intel_context *ce = container_of(active, typeof(*ce), active);
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 3070fccbc08b..710805605bb3 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -97,18 +97,14 @@ static void debug_active_assert(struct i915_active *ref)
 #endif
 
 static void
-active_retire(struct i915_active *ref)
+__active_retire(struct i915_active *ref)
 {
 	struct active_node *it, *n;
 	struct rb_root root;
 	bool retire = false;
 
-	GEM_BUG_ON(!atomic_read(&ref->count));
-	if (atomic_add_unless(&ref->count, -1, 1))
-		return;
-
-	/* One active may be flushed from inside the acquire of another */
-	mutex_lock_nested(&ref->mutex, SINGLE_DEPTH_NESTING);
+	lockdep_assert_held(&ref->mutex);
+	GEM_BUG_ON(i915_active_is_idle(ref));
 
 	/* return the unused nodes to our slabcache -- flushing the allocator */
 	if (atomic_dec_and_test(&ref->count)) {
@@ -131,6 +127,36 @@ active_retire(struct i915_active *ref)
 	}
 }
 
+static void
+active_work(struct work_struct *wrk)
+{
+	struct i915_active *ref = container_of(wrk, typeof(*ref), work);
+
+	GEM_BUG_ON(!atomic_read(&ref->count));
+	if (atomic_add_unless(&ref->count, -1, 1))
+		return;
+
+	mutex_lock(&ref->mutex);
+	__active_retire(ref);
+}
+
+static void
+active_retire(struct i915_active *ref)
+{
+	GEM_BUG_ON(!atomic_read(&ref->count));
+	if (atomic_add_unless(&ref->count, -1, 1))
+		return;
+
+	/* If we are inside interrupt context (fence signaling), defer */
+	if (ref->flags & I915_ACTIVE_RETIRE_SLEEPS ||
+	    !mutex_trylock(&ref->mutex)) {
+		queue_work(system_unbound_wq, &ref->work);
+		return;
+	}
+
+	__active_retire(ref);
+}
+
 static void
 node_retire(struct i915_active_request *base, struct i915_request *rq)
 {
@@ -200,16 +226,24 @@ void __i915_active_init(struct drm_i915_private *i915,
 			void (*retire)(struct i915_active *ref),
 			struct lock_class_key *key)
 {
+	unsigned long bits;
+
 	debug_active_init(ref);
 
 	ref->i915 = i915;
+
+	ref->flags = 0;
 	ref->active = active;
-	ref->retire = retire;
+	ref->retire = ptr_unpack_bits(retire, &bits, 2);
+	if (bits & I915_ACTIVE_MAY_SLEEP)
+		ref->flags |= I915_ACTIVE_RETIRE_SLEEPS;
+
 	ref->tree = RB_ROOT;
 	ref->cache = NULL;
 	init_llist_head(&ref->barriers);
 	atomic_set(&ref->count, 0);
 	__mutex_init(&ref->mutex, "i915_active", key);
+	INIT_WORK(&ref->work, active_work);
 }
 
 int i915_active_ref(struct i915_active *ref,
@@ -282,8 +316,10 @@ int i915_active_wait(struct i915_active *ref)
 	if (err)
 		return err;
 
-	if (!atomic_add_unless(&ref->count, 1, 0))
-		goto unlock;
+	if (!atomic_add_unless(&ref->count, 1, 0)) {
+		mutex_unlock(&ref->mutex);
+		return 0;
+	}
 
 	rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
 		err = i915_active_request_retire(&it->base, BKL(ref));
@@ -291,10 +327,15 @@ int i915_active_wait(struct i915_active *ref)
 			break;
 	}
 
-	active_retire(ref);
-unlock:
-	mutex_unlock(&ref->mutex);
-	return err;
+	__active_retire(ref);
+	if (err)
+		return err;
+
+	flush_work(&ref->work);
+	if (!i915_active_is_idle(ref))
+		return -EBUSY;
+
+	return 0;
 }
 
 int i915_request_await_active_request(struct i915_request *rq,
@@ -335,8 +376,9 @@ int i915_request_await_active(struct i915_request *rq, struct i915_active *ref)
 void i915_active_fini(struct i915_active *ref)
 {
 	debug_active_fini(ref);
-	GEM_BUG_ON(!RB_EMPTY_ROOT(&ref->tree));
 	GEM_BUG_ON(atomic_read(&ref->count));
+	GEM_BUG_ON(work_pending(&ref->work));
+	GEM_BUG_ON(!RB_EMPTY_ROOT(&ref->tree));
 	mutex_destroy(&ref->mutex);
 }
 #endif
diff --git a/drivers/gpu/drm/i915/i915_active_types.h b/drivers/gpu/drm/i915/i915_active_types.h
index 5b0a3024ce24..a3a5ec3e4163 100644
--- a/drivers/gpu/drm/i915/i915_active_types.h
+++ b/drivers/gpu/drm/i915/i915_active_types.h
@@ -12,6 +12,9 @@
 #include <linux/mutex.h>
 #include <linux/rbtree.h>
 #include <linux/rcupdate.h>
+#include <linux/workqueue.h>
+
+#include "i915_utils.h"
 
 struct drm_i915_private;
 struct i915_active_request;
@@ -28,6 +31,11 @@ struct i915_active_request {
 
 struct active_node;
 
+#define I915_ACTIVE_MAY_SLEEP BIT(0)
+
+#define __i915_active_call __aligned(4)
+#define i915_active_may_sleep(fn) ptr_pack_bits(&(fn), I915_ACTIVE_MAY_SLEEP, 2)
+
 struct i915_active {
 	struct drm_i915_private *i915;
 
@@ -38,6 +46,10 @@ struct i915_active {
 
 	int (*active)(struct i915_active *ref);
 	void (*retire)(struct i915_active *ref);
+	unsigned long flags;
+#define I915_ACTIVE_RETIRE_SLEEPS BIT(0)
+
+	struct work_struct work;
 
 	struct llist_head barriers;
 };
diff --git a/drivers/gpu/drm/i915/i915_timeline.c b/drivers/gpu/drm/i915/i915_timeline.c
index 3ea1b881a4ae..0f4ca9478ae5 100644
--- a/drivers/gpu/drm/i915/i915_timeline.c
+++ b/drivers/gpu/drm/i915/i915_timeline.c
@@ -138,6 +138,7 @@ static void __idle_cacheline_free(struct i915_timeline_cacheline *cl)
 	kfree(cl);
 }
 
+__i915_active_call
 static void __cacheline_retire(struct i915_active *active)
 {
 	struct i915_timeline_cacheline *cl =
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index ece5392c4f05..f694bfa54e0d 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -88,6 +88,7 @@ static int __i915_vma_active(struct i915_active *ref)
 	return 0;
 }
 
+__i915_active_call
 static void __i915_vma_retire(struct i915_active *ref)
 {
 	i915_vma_put(active_to_vma(ref));
@@ -976,13 +977,13 @@ int i915_vma_unbind(struct i915_vma *vma)
 		if (ret)
 			return ret;
 	}
-	GEM_BUG_ON(i915_vma_is_active(vma));
 
 	if (i915_vma_is_pinned(vma)) {
 		vma_print_allocator(vma, "is pinned");
 		return -EBUSY;
 	}
 
+	GEM_BUG_ON(i915_vma_is_active(vma));
 	if (!drm_mm_node_allocated(&vma->node))
 		return 0;
 
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 19/26] drm/i915/overlay: Switch to using i915_active tracking
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (16 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 18/26] drm/i915: Push the i915_active.retire into a worker Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 20/26] drm/i915: Forgo last_fence active request tracking Chris Wilson
                   ` (11 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Remove the raw i915_active_request tracking in favour of the higher
level i915_active tracking for the sole purpose of making the lockless
transition easier in later patches.
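
After the conversion every flip path follows the same shape: record the
request on overlay->last_flip via i915_active_ref() (done inside
alloc_request()), then wait on the tracker. A condensed sketch of the
hunks below:

	rq = alloc_request(overlay, intel_overlay_off_tail);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	/* emit the overlay commands into rq as before */

	i915_request_add(rq);
	return i915_active_wait(&overlay->last_flip);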

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/display/intel_overlay.c | 130 +++++++++----------
 drivers/gpu/drm/i915/i915_active.h           |  19 ---
 2 files changed, 64 insertions(+), 85 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
index 21339b7f6a3e..c7d2d980df8c 100644
--- a/drivers/gpu/drm/i915/display/intel_overlay.c
+++ b/drivers/gpu/drm/i915/display/intel_overlay.c
@@ -190,7 +190,8 @@ struct intel_overlay {
 	struct overlay_registers __iomem *regs;
 	u32 flip_addr;
 	/* flip handling */
-	struct i915_active_request last_flip;
+	struct i915_active last_flip;
+	void (*flip_complete)(struct intel_overlay *ovl);
 };
 
 static void i830_overlay_clock_gating(struct drm_i915_private *dev_priv,
@@ -216,32 +217,26 @@ static void i830_overlay_clock_gating(struct drm_i915_private *dev_priv,
 				  PCI_DEVFN(0, 0), I830_CLOCK_GATE, val);
 }
 
-static void intel_overlay_submit_request(struct intel_overlay *overlay,
-					 struct i915_request *rq,
-					 i915_active_retire_fn retire)
+static struct i915_request *
+alloc_request(struct intel_overlay *overlay, void (*fn)(struct intel_overlay *))
 {
-	GEM_BUG_ON(i915_active_request_peek(&overlay->last_flip,
-					    &overlay->i915->drm.struct_mutex));
-	i915_active_request_set_retire_fn(&overlay->last_flip, retire,
-					  &overlay->i915->drm.struct_mutex);
-	__i915_active_request_set(&overlay->last_flip, rq);
-	i915_request_add(rq);
-}
+	struct intel_engine_cs *engine = overlay->i915->engine[RCS0];
+	struct i915_request *rq;
+	int err;
 
-static int intel_overlay_do_wait_request(struct intel_overlay *overlay,
-					 struct i915_request *rq,
-					 i915_active_retire_fn retire)
-{
-	intel_overlay_submit_request(overlay, rq, retire);
-	return i915_active_request_retire(&overlay->last_flip,
-					  &overlay->i915->drm.struct_mutex);
-}
+	overlay->flip_complete = fn;
 
-static struct i915_request *alloc_request(struct intel_overlay *overlay)
-{
-	struct intel_engine_cs *engine = overlay->i915->engine[RCS0];
+	rq = i915_request_create(engine->kernel_context);
+	if (IS_ERR(rq))
+		return rq;
+
+	err = i915_active_ref(&overlay->last_flip, rq->fence.context, rq);
+	if (err) {
+		i915_request_add(rq);
+		return ERR_PTR(err);
+	}
 
-	return i915_request_create(engine->kernel_context);
+	return rq;
 }
 
 /* overlay needs to be disable in OCMD reg */
@@ -253,7 +248,7 @@ static int intel_overlay_on(struct intel_overlay *overlay)
 
 	WARN_ON(overlay->active);
 
-	rq = alloc_request(overlay);
+	rq = alloc_request(overlay, NULL);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
 
@@ -274,7 +269,9 @@ static int intel_overlay_on(struct intel_overlay *overlay)
 	*cs++ = MI_NOOP;
 	intel_ring_advance(rq, cs);
 
-	return intel_overlay_do_wait_request(overlay, rq, NULL);
+	i915_request_add(rq);
+
+	return i915_active_wait(&overlay->last_flip);
 }
 
 static void intel_overlay_flip_prepare(struct intel_overlay *overlay,
@@ -318,7 +315,7 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
 	if (tmp & (1 << 17))
 		DRM_DEBUG("overlay underrun, DOVSTA: %x\n", tmp);
 
-	rq = alloc_request(overlay);
+	rq = alloc_request(overlay, NULL);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
 
@@ -333,8 +330,7 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
 	intel_ring_advance(rq, cs);
 
 	intel_overlay_flip_prepare(overlay, vma);
-
-	intel_overlay_submit_request(overlay, rq, NULL);
+	i915_request_add(rq);
 
 	return 0;
 }
@@ -355,20 +351,13 @@ static void intel_overlay_release_old_vma(struct intel_overlay *overlay)
 }
 
 static void
-intel_overlay_release_old_vid_tail(struct i915_active_request *active,
-				   struct i915_request *rq)
+intel_overlay_release_old_vid_tail(struct intel_overlay *overlay)
 {
-	struct intel_overlay *overlay =
-		container_of(active, typeof(*overlay), last_flip);
-
 	intel_overlay_release_old_vma(overlay);
 }
 
-static void intel_overlay_off_tail(struct i915_active_request *active,
-				   struct i915_request *rq)
+static void intel_overlay_off_tail(struct intel_overlay *overlay)
 {
-	struct intel_overlay *overlay =
-		container_of(active, typeof(*overlay), last_flip);
 	struct drm_i915_private *dev_priv = overlay->i915;
 
 	intel_overlay_release_old_vma(overlay);
@@ -381,6 +370,16 @@ static void intel_overlay_off_tail(struct i915_active_request *active,
 		i830_overlay_clock_gating(dev_priv, true);
 }
 
+static void
+intel_overlay_last_flip_retire(struct i915_active *active)
+{
+	struct intel_overlay *overlay =
+		container_of(active, typeof(*overlay), last_flip);
+
+	if (overlay->flip_complete)
+		overlay->flip_complete(overlay);
+}
+
 /* overlay needs to be disabled in OCMD reg */
 static int intel_overlay_off(struct intel_overlay *overlay)
 {
@@ -395,7 +394,7 @@ static int intel_overlay_off(struct intel_overlay *overlay)
 	 * of the hw. Do it in both cases */
 	flip_addr |= OFC_UPDATE;
 
-	rq = alloc_request(overlay);
+	rq = alloc_request(overlay, intel_overlay_off_tail);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
 
@@ -418,17 +417,16 @@ static int intel_overlay_off(struct intel_overlay *overlay)
 	intel_ring_advance(rq, cs);
 
 	intel_overlay_flip_prepare(overlay, NULL);
+	i915_request_add(rq);
 
-	return intel_overlay_do_wait_request(overlay, rq,
-					     intel_overlay_off_tail);
+	return i915_active_wait(&overlay->last_flip);
 }
 
 /* recover from an interruption due to a signal
  * We have to be careful not to repeat work forever an make forward progess. */
 static int intel_overlay_recover_from_interrupt(struct intel_overlay *overlay)
 {
-	return i915_active_request_retire(&overlay->last_flip,
-					  &overlay->i915->drm.struct_mutex);
+	return i915_active_wait(&overlay->last_flip);
 }
 
 /* Wait for pending overlay flip and release old frame.
@@ -438,43 +436,40 @@ static int intel_overlay_recover_from_interrupt(struct intel_overlay *overlay)
 static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 {
 	struct drm_i915_private *dev_priv = overlay->i915;
+	struct i915_request *rq;
 	u32 *cs;
-	int ret;
 
 	lockdep_assert_held(&dev_priv->drm.struct_mutex);
 
-	/* Only wait if there is actually an old frame to release to
+	/*
+	 * Only wait if there is actually an old frame to release to
 	 * guarantee forward progress.
 	 */
 	if (!overlay->old_vma)
 		return 0;
 
-	if (I915_READ(GEN2_ISR) & I915_OVERLAY_PLANE_FLIP_PENDING_INTERRUPT) {
-		/* synchronous slowpath */
-		struct i915_request *rq;
+	if (!(I915_READ(GEN2_ISR) & I915_OVERLAY_PLANE_FLIP_PENDING_INTERRUPT)) {
+		intel_overlay_release_old_vid_tail(overlay);
+		return 0;
+	}
 
-		rq = alloc_request(overlay);
-		if (IS_ERR(rq))
-			return PTR_ERR(rq);
+	rq = alloc_request(overlay, intel_overlay_release_old_vid_tail);
+	if (IS_ERR(rq))
+		return PTR_ERR(rq);
 
-		cs = intel_ring_begin(rq, 2);
-		if (IS_ERR(cs)) {
-			i915_request_add(rq);
-			return PTR_ERR(cs);
-		}
+	cs = intel_ring_begin(rq, 2);
+	if (IS_ERR(cs)) {
+		i915_request_add(rq);
+		return PTR_ERR(cs);
+	}
 
-		*cs++ = MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP;
-		*cs++ = MI_NOOP;
-		intel_ring_advance(rq, cs);
+	*cs++ = MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP;
+	*cs++ = MI_NOOP;
+	intel_ring_advance(rq, cs);
 
-		ret = intel_overlay_do_wait_request(overlay, rq,
-						    intel_overlay_release_old_vid_tail);
-		if (ret)
-			return ret;
-	} else
-		intel_overlay_release_old_vid_tail(&overlay->last_flip, NULL);
+	i915_request_add(rq);
 
-	return 0;
+	return i915_active_wait(&overlay->last_flip);
 }
 
 void intel_overlay_reset(struct drm_i915_private *dev_priv)
@@ -1371,7 +1366,9 @@ void intel_overlay_setup(struct drm_i915_private *dev_priv)
 	overlay->contrast = 75;
 	overlay->saturation = 146;
 
-	INIT_ACTIVE_REQUEST(&overlay->last_flip);
+	i915_active_init(dev_priv,
+			 &overlay->last_flip,
+			 NULL, intel_overlay_last_flip_retire);
 
 	ret = get_registers(overlay, OVERLAY_NEEDS_PHYSICAL(dev_priv));
 	if (ret)
@@ -1405,6 +1402,7 @@ void intel_overlay_cleanup(struct drm_i915_private *dev_priv)
 	WARN_ON(overlay->active);
 
 	i915_gem_object_put(overlay->reg_bo);
+	i915_active_fini(&overlay->last_flip);
 
 	kfree(overlay);
 }
diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
index 134166d31251..911a8338007a 100644
--- a/drivers/gpu/drm/i915/i915_active.h
+++ b/drivers/gpu/drm/i915/i915_active.h
@@ -89,25 +89,6 @@ int __must_check
 i915_active_request_set(struct i915_active_request *active,
 			struct i915_request *rq);
 
-/**
- * i915_active_request_set_retire_fn - updates the retirement callback
- * @active - the active tracker
- * @fn - the routine called when the request is retired
- * @mutex - struct_mutex used to guard retirements
- *
- * i915_active_request_set_retire_fn() updates the function pointer that
- * is called when the final request associated with the @active tracker
- * is retired.
- */
-static inline void
-i915_active_request_set_retire_fn(struct i915_active_request *active,
-				  i915_active_retire_fn fn,
-				  struct mutex *mutex)
-{
-	lockdep_assert_held(mutex);
-	active->retire = fn ?: i915_active_retire_noop;
-}
-
 /**
  * i915_active_request_raw - return the active request
  * @active - the active tracker
-- 
2.20.1


* [PATCH 20/26] drm/i915: Forgo last_fence active request tracking
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (17 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 19/26] drm/i915/overlay: Switch to using i915_active tracking Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 21/26] drm/i915: Extract intel_frontbuffer active tracking Chris Wilson
                   ` (10 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx; +Cc: Matthew Auld

We were using last_fence to track the last request that used this vma in
a way that might be interpreted by a fence register, and forced ourselves
to wait for that request before modifying any fence register that
overlapped our vma. Due to the requirement that we track any XY_BLT
command, linear or tiled, this in effect means that we have to track the
vma for its whole active lifespan anyway, so we can forgo the explicit
last_fence tracking and just use vma->active.

Another solution would be to pipeline the register updates, which would
also help resolve some long-running stalls for gen3 (but only gen2 and
gen3!)
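
As a rough sketch of the resulting fence_update() flow (illustrative
only, condensed from the hunks below):

	/* Wait on the vma's whole activity tracker instead of the
	 * dedicated last_fence slot that this patch removes. */
	ret = i915_active_wait(&vma->active);
	if (ret)
		return ret;

	old = xchg(&fence->vma, NULL);
	if (old) {
		ret = i915_active_wait(&old->active);
		if (ret) {
			fence->vma = old;
			return ret;
		}
	}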

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c       |  4 +---
 drivers/gpu/drm/i915/i915_gem_fence_reg.c |  6 ++----
 drivers/gpu/drm/i915/i915_gem_gtt.c       |  1 -
 drivers/gpu/drm/i915/i915_vma.c           | 13 -------------
 drivers/gpu/drm/i915/i915_vma.h           |  1 -
 5 files changed, 3 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index eeecdad0e3ca..2c8580f375fb 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -210,9 +210,7 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 			}
 		}
 		if (vma->fence)
-			seq_printf(m, " , fence: %d%s",
-				   vma->fence->id,
-				   i915_active_request_isset(&vma->last_fence) ? "*" : "");
+			seq_printf(m, " , fence: %d", vma->fence->id);
 		seq_puts(m, ")");
 
 		spin_lock(&obj->vma.lock);
diff --git a/drivers/gpu/drm/i915/i915_gem_fence_reg.c b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
index 0bf53ac1c835..754eceaf0b23 100644
--- a/drivers/gpu/drm/i915/i915_gem_fence_reg.c
+++ b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
@@ -230,16 +230,14 @@ static int fence_update(struct i915_fence_reg *fence,
 			 i915_gem_object_get_tiling(vma->obj)))
 			return -EINVAL;
 
-		ret = i915_active_request_retire(&vma->last_fence,
-					     &vma->obj->base.dev->struct_mutex);
+		ret = i915_active_wait(&vma->active);
 		if (ret)
 			return ret;
 	}
 
 	old = xchg(&fence->vma, NULL);
 	if (old) {
-		ret = i915_active_request_retire(&old->last_fence,
-					     &old->obj->base.dev->struct_mutex);
+		ret = i915_active_wait(&old->active);
 		if (ret) {
 			fence->vma = old;
 			return ret;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index ee22c34d071a..8c58e6f0a2b4 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2050,7 +2050,6 @@ static struct i915_vma *pd_vma_create(struct gen6_ppgtt *ppgtt, int size)
 		return ERR_PTR(-ENOMEM);
 
 	i915_active_init(i915, &vma->active, NULL, NULL);
-	INIT_ACTIVE_REQUEST(&vma->last_fence);
 
 	vma->vm = &ggtt->vm;
 	vma->ops = &pd_vma_ops;
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index f694bfa54e0d..b888ddab3c8b 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -118,7 +118,6 @@ vma_create(struct drm_i915_gem_object *obj,
 
 	i915_active_init(vm->i915, &vma->active,
 			 __i915_vma_active, __i915_vma_retire);
-	INIT_ACTIVE_REQUEST(&vma->last_fence);
 
 	INIT_LIST_HEAD(&vma->closed_link);
 
@@ -793,8 +792,6 @@ static void __i915_vma_destroy(struct i915_vma *vma)
 	GEM_BUG_ON(vma->node.allocated);
 	GEM_BUG_ON(vma->fence);
 
-	GEM_BUG_ON(i915_active_request_isset(&vma->last_fence));
-
 	mutex_lock(&vma->vm->mutex);
 	list_del(&vma->vm_link);
 	mutex_unlock(&vma->vm->mutex);
@@ -930,9 +927,6 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 	obj->read_domains |= I915_GEM_GPU_DOMAINS;
 	obj->mm.dirty = true;
 
-	if (flags & EXEC_OBJECT_NEEDS_FENCE)
-		__i915_active_request_set(&vma->last_fence, rq);
-
 	export_fence(vma, rq, flags);
 
 	GEM_BUG_ON(!i915_vma_is_active(vma));
@@ -965,14 +959,7 @@ int i915_vma_unbind(struct i915_vma *vma)
 		 * before we are finished).
 		 */
 		__i915_vma_pin(vma);
-
 		ret = i915_active_wait(&vma->active);
-		if (ret)
-			goto unpin;
-
-		ret = i915_active_request_retire(&vma->last_fence,
-					      &vma->vm->i915->drm.struct_mutex);
-unpin:
 		__i915_vma_unpin(vma);
 		if (ret)
 			return ret;
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index 4b769db649bf..17e6a038bd36 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -111,7 +111,6 @@ struct i915_vma {
 #define I915_VMA_GGTT_WRITE	BIT(14)
 
 	struct i915_active active;
-	struct i915_active_request last_fence;
 
 	/**
 	 * Support different GGTT views into the same object.
-- 
2.20.1


* [PATCH 21/26] drm/i915: Extract intel_frontbuffer active tracking
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (18 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 20/26] drm/i915: Forgo last_fence active request tracking Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 22/26] drm/i915: Coordinate i915_active with its own mutex Chris Wilson
                   ` (9 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Move the active tracking for the frontbuffer operations out of the
i915_gem_object and into its own first-class (refcounted) object. In the
process of detangling the two, we switch from low-level request tracking
to the easier i915_active -- with the plan that this avoids any potential
atomic callbacks, since the frontbuffer tracking wishes to sleep as it
flushes.
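
For illustration, a user of the new tracking object looks roughly like
this (a sketch of the pattern introduced below, not a verbatim path from
the patch):

	struct intel_frontbuffer *front;

	front = intel_frontbuffer_get(obj);	/* refcounted, shared per object */
	if (!front)
		return -ENOMEM;

	intel_frontbuffer_invalidate(front, ORIGIN_CPU);	/* rendering starts */
	/* ... CPU/GPU writes to the frontbuffer ... */
	intel_frontbuffer_flush(front, ORIGIN_CPU);		/* rendering done */

	intel_frontbuffer_put(front);		/* drop the reference */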

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/display/intel_display.c  |  70 +++--
 drivers/gpu/drm/i915/display/intel_fbdev.c    |  40 ++-
 .../gpu/drm/i915/display/intel_frontbuffer.c  | 255 +++++++++++++-----
 .../gpu/drm/i915/display/intel_frontbuffer.h  |  70 +++--
 drivers/gpu/drm/i915/display/intel_overlay.c  |   8 +-
 drivers/gpu/drm/i915/gem/i915_gem_clflush.c   |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_domain.c    |  14 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |   4 -
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  28 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |   2 +-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   8 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |   5 -
 drivers/gpu/drm/i915/i915_drv.h               |   4 -
 drivers/gpu/drm/i915/i915_gem.c               |  47 +---
 drivers/gpu/drm/i915/i915_vma.c               |   6 +-
 drivers/gpu/drm/i915/intel_drv.h              |   1 +
 16 files changed, 306 insertions(+), 258 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 8d7e4c8b60bc..fe0918157621 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -3048,12 +3048,13 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
 {
 	struct drm_device *dev = crtc->base.dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct drm_i915_gem_object *obj = NULL;
 	struct drm_mode_fb_cmd2 mode_cmd = { 0 };
 	struct drm_framebuffer *fb = &plane_config->fb->base;
 	u32 base_aligned = round_down(plane_config->base, PAGE_SIZE);
 	u32 size_aligned = round_up(plane_config->base + plane_config->size,
 				    PAGE_SIZE);
+	struct drm_i915_gem_object *obj;
+	bool ret = false;
 
 	size_aligned -= base_aligned;
 
@@ -3095,7 +3096,7 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
 		break;
 	default:
 		MISSING_CASE(plane_config->tiling);
-		return false;
+		goto out;
 	}
 
 	mode_cmd.pixel_format = fb->format->format;
@@ -3107,16 +3108,15 @@ intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
 
 	if (intel_framebuffer_init(to_intel_framebuffer(fb), obj, &mode_cmd)) {
 		DRM_DEBUG_KMS("intel fb init failed\n");
-		goto out_unref_obj;
+		goto out;
 	}
 
 
 	DRM_DEBUG_KMS("initial plane fb obj %p\n", obj);
-	return true;
-
-out_unref_obj:
+	ret = true;
+out:
 	i915_gem_object_put(obj);
-	return false;
+	return ret;
 }
 
 static void
@@ -3173,6 +3173,12 @@ static void intel_plane_disable_noatomic(struct intel_crtc *crtc,
 	intel_disable_plane(plane, crtc_state);
 }
 
+static struct intel_frontbuffer *
+to_intel_frontbuffer(struct drm_framebuffer *fb)
+{
+	return fb ? to_intel_framebuffer(fb)->frontbuffer : NULL;
+}
+
 static void
 intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
 			     struct intel_initial_plane_config *plane_config)
@@ -3180,7 +3186,6 @@ intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
 	struct drm_device *dev = intel_crtc->base.dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_crtc *c;
-	struct drm_i915_gem_object *obj;
 	struct drm_plane *primary = intel_crtc->base.primary;
 	struct drm_plane_state *plane_state = primary->state;
 	struct intel_plane *intel_plane = to_intel_plane(primary);
@@ -3256,8 +3261,7 @@ intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
 		return;
 	}
 
-	obj = intel_fb_obj(fb);
-	intel_fb_obj_flush(obj, ORIGIN_DIRTYFB);
+	intel_frontbuffer_flush(to_intel_frontbuffer(fb), ORIGIN_DIRTYFB);
 
 	plane_state->src_x = 0;
 	plane_state->src_y = 0;
@@ -3272,14 +3276,14 @@ intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
 	intel_state->base.src = drm_plane_state_src(plane_state);
 	intel_state->base.dst = drm_plane_state_dest(plane_state);
 
-	if (i915_gem_object_is_tiled(obj))
+	if (plane_config->tiling)
 		dev_priv->preserve_bios_swizzle = true;
 
 	plane_state->fb = fb;
 	plane_state->crtc = &intel_crtc->base;
 
 	atomic_or(to_intel_plane(primary)->frontbuffer_bit,
-		  &obj->frontbuffer_bits);
+		  &to_intel_frontbuffer(fb)->bits);
 }
 
 static int skl_max_plane_width(const struct drm_framebuffer *fb,
@@ -13982,9 +13986,9 @@ static void intel_atomic_track_fbs(struct drm_atomic_state *state)
 	int i;
 
 	for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i)
-		i915_gem_track_fb(intel_fb_obj(old_plane_state->fb),
-				  intel_fb_obj(new_plane_state->fb),
-				  to_intel_plane(plane)->frontbuffer_bit);
+		intel_frontbuffer_track(to_intel_frontbuffer(old_plane_state->fb),
+					to_intel_frontbuffer(new_plane_state->fb),
+					to_intel_plane(plane)->frontbuffer_bit);
 }
 
 /**
@@ -14294,7 +14298,7 @@ intel_prepare_plane_fb(struct drm_plane *plane,
 		return ret;
 
 	fb_obj_bump_render_priority(obj);
-	intel_fb_obj_flush(obj, ORIGIN_DIRTYFB);
+	intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_DIRTYFB);
 
 	if (!new_state->fence) { /* implicit fencing */
 		struct dma_fence *fence;
@@ -14557,13 +14561,12 @@ intel_legacy_cursor_update(struct drm_plane *plane,
 			   struct drm_modeset_acquire_ctx *ctx)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc->dev);
-	int ret;
 	struct drm_plane_state *old_plane_state, *new_plane_state;
 	struct intel_plane *intel_plane = to_intel_plane(plane);
-	struct drm_framebuffer *old_fb;
 	struct intel_crtc_state *crtc_state =
 		to_intel_crtc_state(crtc->state);
 	struct intel_crtc_state *new_crtc_state;
+	int ret;
 
 	/*
 	 * When crtc is inactive or there is a modeset pending,
@@ -14631,11 +14634,10 @@ intel_legacy_cursor_update(struct drm_plane *plane,
 	if (ret)
 		goto out_unlock;
 
-	intel_fb_obj_flush(intel_fb_obj(fb), ORIGIN_FLIP);
-
-	old_fb = old_plane_state->fb;
-	i915_gem_track_fb(intel_fb_obj(old_fb), intel_fb_obj(fb),
-			  intel_plane->frontbuffer_bit);
+	intel_frontbuffer_flush(to_intel_frontbuffer(fb), ORIGIN_FLIP);
+	intel_frontbuffer_track(to_intel_frontbuffer(old_plane_state->fb),
+				to_intel_frontbuffer(fb),
+				intel_plane->frontbuffer_bit);
 
 	/* Swap plane state */
 	plane->state = new_plane_state;
@@ -15323,15 +15325,9 @@ static void intel_setup_outputs(struct drm_i915_private *dev_priv)
 static void intel_user_framebuffer_destroy(struct drm_framebuffer *fb)
 {
 	struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
-	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
 
 	drm_framebuffer_cleanup(fb);
-
-	i915_gem_object_lock(obj);
-	WARN_ON(!obj->framebuffer_references--);
-	i915_gem_object_unlock(obj);
-
-	i915_gem_object_put(obj);
+	intel_frontbuffer_put(intel_fb->frontbuffer);
 
 	kfree(intel_fb);
 }
@@ -15359,7 +15355,7 @@ static int intel_user_framebuffer_dirty(struct drm_framebuffer *fb,
 	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
 
 	i915_gem_object_flush_if_display(obj);
-	intel_fb_obj_flush(obj, ORIGIN_DIRTYFB);
+	intel_frontbuffer_flush(to_intel_frontbuffer(fb), ORIGIN_DIRTYFB);
 
 	return 0;
 }
@@ -15381,8 +15377,11 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
 	int ret = -EINVAL;
 	int i;
 
+	intel_fb->frontbuffer = intel_frontbuffer_get(obj);
+	if (!intel_fb->frontbuffer)
+		return -ENOMEM;
+
 	i915_gem_object_lock(obj);
-	obj->framebuffer_references++;
 	tiling = i915_gem_object_get_tiling(obj);
 	stride = i915_gem_object_get_stride(obj);
 	i915_gem_object_unlock(obj);
@@ -15499,9 +15498,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
 	return 0;
 
 err:
-	i915_gem_object_lock(obj);
-	obj->framebuffer_references--;
-	i915_gem_object_unlock(obj);
+	intel_frontbuffer_put(intel_fb->frontbuffer);
 	return ret;
 }
 
@@ -15519,8 +15516,7 @@ intel_user_framebuffer_create(struct drm_device *dev,
 		return ERR_PTR(-ENOENT);
 
 	fb = intel_framebuffer_create(obj, &mode_cmd);
-	if (IS_ERR(fb))
-		i915_gem_object_put(obj);
+	i915_gem_object_put(obj);
 
 	return fb;
 }
diff --git a/drivers/gpu/drm/i915/display/intel_fbdev.c b/drivers/gpu/drm/i915/display/intel_fbdev.c
index 1edd44ee32b2..4b57cdd76699 100644
--- a/drivers/gpu/drm/i915/display/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/display/intel_fbdev.c
@@ -47,13 +47,14 @@
 #include "intel_fbdev.h"
 #include "intel_frontbuffer.h"
 
-static void intel_fbdev_invalidate(struct intel_fbdev *ifbdev)
+static struct intel_frontbuffer *to_frontbuffer(struct intel_fbdev *ifbdev)
 {
-	struct drm_i915_gem_object *obj = intel_fb_obj(&ifbdev->fb->base);
-	unsigned int origin =
-		ifbdev->vma_flags & PLANE_HAS_FENCE ? ORIGIN_GTT : ORIGIN_CPU;
+	return ifbdev->fb->frontbuffer;
+}
 
-	intel_fb_obj_invalidate(obj, origin);
+static void intel_fbdev_invalidate(struct intel_fbdev *ifbdev)
+{
+	intel_frontbuffer_invalidate(to_frontbuffer(ifbdev), ORIGIN_CPU);
 }
 
 static int intel_fbdev_set_par(struct fb_info *info)
@@ -120,7 +121,7 @@ static int intelfb_alloc(struct drm_fb_helper *helper,
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_mode_fb_cmd2 mode_cmd = {};
 	struct drm_i915_gem_object *obj;
-	int size, ret;
+	int size;
 
 	/* we don't do packed 24bpp */
 	if (sizes->surface_bpp == 24)
@@ -147,24 +148,16 @@ static int intelfb_alloc(struct drm_fb_helper *helper,
 		obj = i915_gem_object_create_shmem(dev_priv, size);
 	if (IS_ERR(obj)) {
 		DRM_ERROR("failed to allocate framebuffer\n");
-		ret = PTR_ERR(obj);
-		goto err;
+		return PTR_ERR(obj);
 	}
 
 	fb = intel_framebuffer_create(obj, &mode_cmd);
-	if (IS_ERR(fb)) {
-		ret = PTR_ERR(fb);
-		goto err_obj;
-	}
+	i915_gem_object_put(obj);
+	if (IS_ERR(fb))
+		return PTR_ERR(fb);
 
 	ifbdev->fb = to_intel_framebuffer(fb);
-
 	return 0;
-
-err_obj:
-	i915_gem_object_put(obj);
-err:
-	return ret;
 }
 
 static int intelfb_create(struct drm_fb_helper *helper,
@@ -180,7 +173,6 @@ static int intelfb_create(struct drm_fb_helper *helper,
 	const struct i915_ggtt_view view = {
 		.type = I915_GGTT_VIEW_NORMAL,
 	};
-	struct drm_framebuffer *fb;
 	intel_wakeref_t wakeref;
 	struct fb_info *info;
 	struct i915_vma *vma;
@@ -226,8 +218,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
 		goto out_unlock;
 	}
 
-	fb = &ifbdev->fb->base;
-	intel_fb_obj_flush(intel_fb_obj(fb), ORIGIN_DIRTYFB);
+	intel_frontbuffer_flush(to_frontbuffer(ifbdev), ORIGIN_DIRTYFB);
 
 	info = drm_fb_helper_alloc_fbi(helper);
 	if (IS_ERR(info)) {
@@ -236,7 +227,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
 		goto out_unpin;
 	}
 
-	ifbdev->helper.fb = fb;
+	ifbdev->helper.fb = &ifbdev->fb->base;
 
 	info->fbops = &intelfb_ops;
 
@@ -262,13 +253,14 @@ static int intelfb_create(struct drm_fb_helper *helper,
 	 * If the object is stolen however, it will be full of whatever
 	 * garbage was left in there.
 	 */
-	if (intel_fb_obj(fb)->stolen && !prealloc)
+	if (vma->obj->stolen && !prealloc)
 		memset_io(info->screen_base, 0, info->screen_size);
 
 	/* Use default scratch pixmap (info->pixmap.flags = FB_PIXMAP_SYSTEM) */
 
 	DRM_DEBUG_KMS("allocated %dx%d fb: 0x%08x\n",
-		      fb->width, fb->height, i915_ggtt_offset(vma));
+		      ifbdev->fb->base.width, ifbdev->fb->base.height,
+		      i915_ggtt_offset(vma));
 	ifbdev->vma = vma;
 	ifbdev->vma_flags = flags;
 
diff --git a/drivers/gpu/drm/i915/display/intel_frontbuffer.c b/drivers/gpu/drm/i915/display/intel_frontbuffer.c
index 44273c10cea5..7128148155dc 100644
--- a/drivers/gpu/drm/i915/display/intel_frontbuffer.c
+++ b/drivers/gpu/drm/i915/display/intel_frontbuffer.c
@@ -30,11 +30,11 @@
  * Many features require us to track changes to the currently active
  * frontbuffer, especially rendering targeted at the frontbuffer.
  *
- * To be able to do so GEM tracks frontbuffers using a bitmask for all possible
- * frontbuffer slots through i915_gem_track_fb(). The function in this file are
- * then called when the contents of the frontbuffer are invalidated, when
- * frontbuffer rendering has stopped again to flush out all the changes and when
- * the frontbuffer is exchanged with a flip. Subsystems interested in
+ * To be able to do so we track frontbuffers using a bitmask for all possible
+ * frontbuffer slots through intel_frontbuffer_track(). The functions in this
+ * file are then called when the contents of the frontbuffer are invalidated,
+ * when frontbuffer rendering has stopped again to flush out all the changes
+ * and when the frontbuffer is exchanged with a flip. Subsystems interested in
  * frontbuffer changes (e.g. PSR, FBC, DRRS) should directly put their callbacks
  * into the relevant places and filter for the frontbuffer slots that they are
  * interested in.
@@ -63,28 +63,9 @@
 #include "intel_frontbuffer.h"
 #include "intel_psr.h"
 
-void __intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
-			       enum fb_op_origin origin,
-			       unsigned int frontbuffer_bits)
-{
-	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
-
-	if (origin == ORIGIN_CS) {
-		spin_lock(&dev_priv->fb_tracking.lock);
-		dev_priv->fb_tracking.busy_bits |= frontbuffer_bits;
-		dev_priv->fb_tracking.flip_bits &= ~frontbuffer_bits;
-		spin_unlock(&dev_priv->fb_tracking.lock);
-	}
-
-	might_sleep();
-	intel_psr_invalidate(dev_priv, frontbuffer_bits, origin);
-	intel_edp_drrs_invalidate(dev_priv, frontbuffer_bits);
-	intel_fbc_invalidate(dev_priv, frontbuffer_bits, origin);
-}
-
 /**
- * intel_frontbuffer_flush - flush frontbuffer
- * @dev_priv: i915 device
+ * frontbuffer_flush - flush frontbuffer
+ * @i915: i915 device
  * @frontbuffer_bits: frontbuffer plane tracking bits
  * @origin: which operation caused the flush
  *
@@ -94,45 +75,27 @@ void __intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
  *
  * Can be called without any locks held.
  */
-static void intel_frontbuffer_flush(struct drm_i915_private *dev_priv,
-				    unsigned frontbuffer_bits,
-				    enum fb_op_origin origin)
+static void frontbuffer_flush(struct drm_i915_private *i915,
+			      unsigned int frontbuffer_bits,
+			      enum fb_op_origin origin)
 {
 	/* Delay flushing when rings are still busy.*/
-	spin_lock(&dev_priv->fb_tracking.lock);
-	frontbuffer_bits &= ~dev_priv->fb_tracking.busy_bits;
-	spin_unlock(&dev_priv->fb_tracking.lock);
+	spin_lock(&i915->fb_tracking.lock);
+	frontbuffer_bits &= ~i915->fb_tracking.busy_bits;
+	spin_unlock(&i915->fb_tracking.lock);
 
 	if (!frontbuffer_bits)
 		return;
 
 	might_sleep();
-	intel_edp_drrs_flush(dev_priv, frontbuffer_bits);
-	intel_psr_flush(dev_priv, frontbuffer_bits, origin);
-	intel_fbc_flush(dev_priv, frontbuffer_bits, origin);
-}
-
-void __intel_fb_obj_flush(struct drm_i915_gem_object *obj,
-			  enum fb_op_origin origin,
-			  unsigned int frontbuffer_bits)
-{
-	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
-
-	if (origin == ORIGIN_CS) {
-		spin_lock(&dev_priv->fb_tracking.lock);
-		/* Filter out new bits since rendering started. */
-		frontbuffer_bits &= dev_priv->fb_tracking.busy_bits;
-		dev_priv->fb_tracking.busy_bits &= ~frontbuffer_bits;
-		spin_unlock(&dev_priv->fb_tracking.lock);
-	}
-
-	if (frontbuffer_bits)
-		intel_frontbuffer_flush(dev_priv, frontbuffer_bits, origin);
+	intel_edp_drrs_flush(i915, frontbuffer_bits);
+	intel_psr_flush(i915, frontbuffer_bits, origin);
+	intel_fbc_flush(i915, frontbuffer_bits, origin);
 }
 
 /**
  * intel_frontbuffer_flip_prepare - prepare asynchronous frontbuffer flip
- * @dev_priv: i915 device
+ * @i915: i915 device
  * @frontbuffer_bits: frontbuffer plane tracking bits
  *
  * This function gets called after scheduling a flip on @obj. The actual
@@ -142,19 +105,19 @@ void __intel_fb_obj_flush(struct drm_i915_gem_object *obj,
  *
  * Can be called without any locks held.
  */
-void intel_frontbuffer_flip_prepare(struct drm_i915_private *dev_priv,
+void intel_frontbuffer_flip_prepare(struct drm_i915_private *i915,
 				    unsigned frontbuffer_bits)
 {
-	spin_lock(&dev_priv->fb_tracking.lock);
-	dev_priv->fb_tracking.flip_bits |= frontbuffer_bits;
+	spin_lock(&i915->fb_tracking.lock);
+	i915->fb_tracking.flip_bits |= frontbuffer_bits;
 	/* Remove stale busy bits due to the old buffer. */
-	dev_priv->fb_tracking.busy_bits &= ~frontbuffer_bits;
-	spin_unlock(&dev_priv->fb_tracking.lock);
+	i915->fb_tracking.busy_bits &= ~frontbuffer_bits;
+	spin_unlock(&i915->fb_tracking.lock);
 }
 
 /**
  * intel_frontbuffer_flip_complete - complete asynchronous frontbuffer flip
- * @dev_priv: i915 device
+ * @i915: i915 device
  * @frontbuffer_bits: frontbuffer plane tracking bits
  *
  * This function gets called after the flip has been latched and will complete
@@ -162,23 +125,22 @@ void intel_frontbuffer_flip_prepare(struct drm_i915_private *dev_priv,
  *
  * Can be called without any locks held.
  */
-void intel_frontbuffer_flip_complete(struct drm_i915_private *dev_priv,
+void intel_frontbuffer_flip_complete(struct drm_i915_private *i915,
 				     unsigned frontbuffer_bits)
 {
-	spin_lock(&dev_priv->fb_tracking.lock);
+	spin_lock(&i915->fb_tracking.lock);
 	/* Mask any cancelled flips. */
-	frontbuffer_bits &= dev_priv->fb_tracking.flip_bits;
-	dev_priv->fb_tracking.flip_bits &= ~frontbuffer_bits;
-	spin_unlock(&dev_priv->fb_tracking.lock);
+	frontbuffer_bits &= i915->fb_tracking.flip_bits;
+	i915->fb_tracking.flip_bits &= ~frontbuffer_bits;
+	spin_unlock(&i915->fb_tracking.lock);
 
 	if (frontbuffer_bits)
-		intel_frontbuffer_flush(dev_priv,
-					frontbuffer_bits, ORIGIN_FLIP);
+		frontbuffer_flush(i915, frontbuffer_bits, ORIGIN_FLIP);
 }
 
 /**
  * intel_frontbuffer_flip - synchronous frontbuffer flip
- * @dev_priv: i915 device
+ * @i915: i915 device
  * @frontbuffer_bits: frontbuffer plane tracking bits
  *
  * This function gets called after scheduling a flip on @obj. This is for
@@ -187,13 +149,160 @@ void intel_frontbuffer_flip_complete(struct drm_i915_private *dev_priv,
  *
  * Can be called without any locks held.
  */
-void intel_frontbuffer_flip(struct drm_i915_private *dev_priv,
+void intel_frontbuffer_flip(struct drm_i915_private *i915,
 			    unsigned frontbuffer_bits)
 {
-	spin_lock(&dev_priv->fb_tracking.lock);
+	spin_lock(&i915->fb_tracking.lock);
 	/* Remove stale busy bits due to the old buffer. */
-	dev_priv->fb_tracking.busy_bits &= ~frontbuffer_bits;
-	spin_unlock(&dev_priv->fb_tracking.lock);
+	i915->fb_tracking.busy_bits &= ~frontbuffer_bits;
+	spin_unlock(&i915->fb_tracking.lock);
 
-	intel_frontbuffer_flush(dev_priv, frontbuffer_bits, ORIGIN_FLIP);
+	frontbuffer_flush(i915, frontbuffer_bits, ORIGIN_FLIP);
+}
+
+void __intel_fb_invalidate(struct intel_frontbuffer *front,
+			   enum fb_op_origin origin,
+			   unsigned int frontbuffer_bits)
+{
+	struct drm_i915_private *i915 = to_i915(front->obj->base.dev);
+
+	if (origin == ORIGIN_CS) {
+		spin_lock(&i915->fb_tracking.lock);
+		i915->fb_tracking.busy_bits |= frontbuffer_bits;
+		i915->fb_tracking.flip_bits &= ~frontbuffer_bits;
+		spin_unlock(&i915->fb_tracking.lock);
+	}
+
+	might_sleep();
+	intel_psr_invalidate(i915, frontbuffer_bits, origin);
+	intel_edp_drrs_invalidate(i915, frontbuffer_bits);
+	intel_fbc_invalidate(i915, frontbuffer_bits, origin);
+}
+
+void __intel_fb_flush(struct intel_frontbuffer *front,
+		      enum fb_op_origin origin,
+		      unsigned int frontbuffer_bits)
+{
+	struct drm_i915_private *i915 = to_i915(front->obj->base.dev);
+
+	if (origin == ORIGIN_CS) {
+		spin_lock(&i915->fb_tracking.lock);
+		/* Filter out new bits since rendering started. */
+		frontbuffer_bits &= i915->fb_tracking.busy_bits;
+		i915->fb_tracking.busy_bits &= ~frontbuffer_bits;
+		spin_unlock(&i915->fb_tracking.lock);
+	}
+
+	if (frontbuffer_bits)
+		frontbuffer_flush(i915, frontbuffer_bits, origin);
+}
+
+static int frontbuffer_active(struct i915_active *ref)
+{
+	struct intel_frontbuffer *front =
+		container_of(ref, typeof(*front), write);
+
+	kref_get(&front->ref);
+	return 0;
+}
+
+static void frontbuffer_retire(struct i915_active *ref)
+{
+	struct intel_frontbuffer *front =
+		container_of(ref, typeof(*front), write);
+
+	intel_frontbuffer_flush(front, ORIGIN_CS);
+	intel_frontbuffer_put(front);
+}
+
+static void frontbuffer_release(struct kref *ref)
+{
+	struct intel_frontbuffer *front =
+		container_of(ref, typeof(*front), ref);
+
+	front->obj->frontbuffer = NULL;
+	spin_unlock(&to_i915(front->obj->base.dev)->fb_tracking.lock);
+
+	i915_gem_object_put(front->obj);
+	kfree(front);
+}
+
+struct intel_frontbuffer *
+intel_frontbuffer_get(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct intel_frontbuffer *front;
+
+	spin_lock(&i915->fb_tracking.lock);
+	front = obj->frontbuffer;
+	if (front)
+		kref_get(&front->ref);
+	spin_unlock(&i915->fb_tracking.lock);
+	if (front)
+		return front;
+
+	front = kmalloc(sizeof(*front), GFP_KERNEL);
+	if (!front)
+		return NULL;
+
+	front->obj = obj;
+	kref_init(&front->ref);
+	atomic_set(&front->bits, 0);
+	i915_active_init(i915, &front->write,
+			 frontbuffer_active,
+			 i915_active_may_sleep(frontbuffer_retire));
+
+	spin_lock(&i915->fb_tracking.lock);
+	if (obj->frontbuffer) {
+		kfree(front);
+		front = obj->frontbuffer;
+		kref_get(&front->ref);
+	} else {
+		i915_gem_object_get(obj);
+		obj->frontbuffer = front;
+	}
+	spin_unlock(&i915->fb_tracking.lock);
+
+	return front;
+}
+
+void intel_frontbuffer_put(struct intel_frontbuffer *front)
+{
+	kref_put_lock(&front->ref,
+		      frontbuffer_release,
+		      &to_i915(front->obj->base.dev)->fb_tracking.lock);
+}
+
+/**
+ * intel_frontbuffer_track - update frontbuffer tracking
+ * @old: current buffer for the frontbuffer slots
+ * @new: new buffer for the frontbuffer slots
+ * @frontbuffer_bits: bitmask of frontbuffer slots
+ *
+ * This updates the frontbuffer tracking bits @frontbuffer_bits by clearing them
+ * from @old and setting them in @new. Both @old and @new can be NULL.
+ */
+void intel_frontbuffer_track(struct intel_frontbuffer *old,
+			     struct intel_frontbuffer *new,
+			     unsigned int frontbuffer_bits)
+{
+	/*
+	 * Control of individual bits within the mask are guarded by
+	 * the owning plane->mutex, i.e. we can never see concurrent
+	 * manipulation of individual bits. But since the bitfield as a whole
+	 * is updated using RMW, we need to use atomics in order to update
+	 * the bits.
+	 */
+	BUILD_BUG_ON(INTEL_FRONTBUFFER_BITS_PER_PIPE * I915_MAX_PIPES >
+		     BITS_PER_TYPE(atomic_t));
+
+	if (old) {
+		WARN_ON(!(atomic_read(&old->bits) & frontbuffer_bits));
+		atomic_andnot(frontbuffer_bits, &old->bits);
+	}
+
+	if (new) {
+		WARN_ON(atomic_read(&new->bits) & frontbuffer_bits);
+		atomic_or(frontbuffer_bits, &new->bits);
+	}
 }
diff --git a/drivers/gpu/drm/i915/display/intel_frontbuffer.h b/drivers/gpu/drm/i915/display/intel_frontbuffer.h
index 5727320c8084..adc64d61a4a5 100644
--- a/drivers/gpu/drm/i915/display/intel_frontbuffer.h
+++ b/drivers/gpu/drm/i915/display/intel_frontbuffer.h
@@ -24,7 +24,10 @@
 #ifndef __INTEL_FRONTBUFFER_H__
 #define __INTEL_FRONTBUFFER_H__
 
-#include "gem/i915_gem_object.h"
+#include <linux/atomic.h>
+#include <linux/kref.h>
+
+#include "i915_active.h"
 
 struct drm_i915_private;
 struct drm_i915_gem_object;
@@ -37,23 +40,30 @@ enum fb_op_origin {
 	ORIGIN_DIRTYFB,
 };
 
-void intel_frontbuffer_flip_prepare(struct drm_i915_private *dev_priv,
+struct intel_frontbuffer {
+	struct kref ref;
+	atomic_t bits;
+	struct i915_active write;
+	struct drm_i915_gem_object *obj;
+};
+
+void intel_frontbuffer_flip_prepare(struct drm_i915_private *i915,
 				    unsigned frontbuffer_bits);
-void intel_frontbuffer_flip_complete(struct drm_i915_private *dev_priv,
+void intel_frontbuffer_flip_complete(struct drm_i915_private *i915,
 				     unsigned frontbuffer_bits);
-void intel_frontbuffer_flip(struct drm_i915_private *dev_priv,
+void intel_frontbuffer_flip(struct drm_i915_private *i915,
 			    unsigned frontbuffer_bits);
 
-void __intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
-			       enum fb_op_origin origin,
-			       unsigned int frontbuffer_bits);
-void __intel_fb_obj_flush(struct drm_i915_gem_object *obj,
-			  enum fb_op_origin origin,
-			  unsigned int frontbuffer_bits);
+struct intel_frontbuffer *
+intel_frontbuffer_get(struct drm_i915_gem_object *obj);
+
+void __intel_fb_invalidate(struct intel_frontbuffer *front,
+			   enum fb_op_origin origin,
+			   unsigned int frontbuffer_bits);
 
 /**
- * intel_fb_obj_invalidate - invalidate frontbuffer object
- * @obj: GEM object to invalidate
+ * intel_frontbuffer_invalidate - invalidate frontbuffer object
+ * @front: frontbuffer to invalidate
  * @origin: which operation caused the invalidation
  *
  * This function gets called every time rendering on the given object starts and
@@ -62,37 +72,53 @@ void __intel_fb_obj_flush(struct drm_i915_gem_object *obj,
  * until the rendering completes or a flip on this frontbuffer plane is
  * scheduled.
  */
-static inline bool intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
-					   enum fb_op_origin origin)
+static inline bool intel_frontbuffer_invalidate(struct intel_frontbuffer *front,
+						enum fb_op_origin origin)
 {
 	unsigned int frontbuffer_bits;
 
-	frontbuffer_bits = atomic_read(&obj->frontbuffer_bits);
+	if (!front)
+		return false;
+
+	frontbuffer_bits = atomic_read(&front->bits);
 	if (!frontbuffer_bits)
 		return false;
 
-	__intel_fb_obj_invalidate(obj, origin, frontbuffer_bits);
+	__intel_fb_invalidate(front, origin, frontbuffer_bits);
 	return true;
 }
 
+void __intel_fb_flush(struct intel_frontbuffer *front,
+		      enum fb_op_origin origin,
+		      unsigned int frontbuffer_bits);
+
 /**
- * intel_fb_obj_flush - flush frontbuffer object
- * @obj: GEM object to flush
+ * intel_frontbuffer_flush - flush frontbuffer object
+ * @front: frontbuffer to flush
  * @origin: which operation caused the flush
  *
  * This function gets called every time rendering on the given object has
  * completed and frontbuffer caching can be started again.
  */
-static inline void intel_fb_obj_flush(struct drm_i915_gem_object *obj,
-				      enum fb_op_origin origin)
+static inline void intel_frontbuffer_flush(struct intel_frontbuffer *front,
+					   enum fb_op_origin origin)
 {
 	unsigned int frontbuffer_bits;
 
-	frontbuffer_bits = atomic_read(&obj->frontbuffer_bits);
+	if (!front)
+		return;
+
+	frontbuffer_bits = atomic_read(&front->bits);
 	if (!frontbuffer_bits)
 		return;
 
-	__intel_fb_obj_flush(obj, origin, frontbuffer_bits);
+	__intel_fb_flush(front, origin, frontbuffer_bits);
 }
 
+void intel_frontbuffer_track(struct intel_frontbuffer *old,
+			     struct intel_frontbuffer *new,
+			     unsigned int frontbuffer_bits);
+
+void intel_frontbuffer_put(struct intel_frontbuffer *front);
+
 #endif /* __INTEL_FRONTBUFFER_H__ */
diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
index c7d2d980df8c..f0743af79fcb 100644
--- a/drivers/gpu/drm/i915/display/intel_overlay.c
+++ b/drivers/gpu/drm/i915/display/intel_overlay.c
@@ -281,9 +281,9 @@ static void intel_overlay_flip_prepare(struct intel_overlay *overlay,
 
 	WARN_ON(overlay->old_vma);
 
-	i915_gem_track_fb(overlay->vma ? overlay->vma->obj : NULL,
-			  vma ? vma->obj : NULL,
-			  INTEL_FRONTBUFFER_OVERLAY(pipe));
+	intel_frontbuffer_track(overlay->vma ? overlay->vma->obj->frontbuffer : NULL,
+				vma ? vma->obj->frontbuffer : NULL,
+				INTEL_FRONTBUFFER_OVERLAY(pipe));
 
 	intel_frontbuffer_flip_prepare(overlay->i915,
 				       INTEL_FRONTBUFFER_OVERLAY(pipe));
@@ -768,7 +768,7 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
 		ret = PTR_ERR(vma);
 		goto out_pin_section;
 	}
-	intel_fb_obj_flush(new_bo, ORIGIN_DIRTYFB);
+	intel_frontbuffer_flush(new_bo->frontbuffer, ORIGIN_DIRTYFB);
 
 	ret = i915_vma_put_fence(vma);
 	if (ret)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
index 9018e12b536b..3334b3c6c6ed 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_clflush.c
@@ -48,7 +48,7 @@ static void __i915_do_clflush(struct drm_i915_gem_object *obj)
 {
 	GEM_BUG_ON(!i915_gem_object_has_pages(obj));
 	drm_clflush_sg(obj->mm.pages);
-	intel_fb_obj_flush(obj, ORIGIN_CPU);
+	intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
 }
 
 static void i915_clflush_work(struct work_struct *work)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 2e3ce2a69653..a1afc2690e9e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -551,13 +551,6 @@ i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write)
 	return 0;
 }
 
-static inline enum fb_op_origin
-fb_write_origin(struct drm_i915_gem_object *obj, unsigned int domain)
-{
-	return (domain == I915_GEM_DOMAIN_GTT ?
-		obj->frontbuffer_ggtt_origin : ORIGIN_CPU);
-}
-
 /**
  * Called when user space prepares to use an object with the CPU, either
  * through the mmap ioctl's mapping or a GTT mapping.
@@ -661,9 +654,8 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 
 	i915_gem_object_unlock(obj);
 
-	if (write_domain != 0)
-		intel_fb_obj_invalidate(obj,
-					fb_write_origin(obj, write_domain));
+	if (write_domain)
+		intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU);
 
 out_unpin:
 	i915_gem_object_unpin_pages(obj);
@@ -783,7 +775,7 @@ int i915_gem_object_prepare_write(struct drm_i915_gem_object *obj,
 	}
 
 out:
-	intel_fb_obj_invalidate(obj, ORIGIN_CPU);
+	intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU);
 	obj->mm.dirty = true;
 	/* return with the pages pinned */
 	return 0;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index 391621ee3cbb..bcc53c2b4f69 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -99,9 +99,6 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 		up_write(&mm->mmap_sem);
 		if (IS_ERR_VALUE(addr))
 			goto err;
-
-		/* This may race, but that's ok, it only gets set */
-		WRITE_ONCE(obj->frontbuffer_ggtt_origin, ORIGIN_CPU);
 	}
 	i915_gem_object_put(obj);
 
@@ -281,7 +278,6 @@ vm_fault_t i915_gem_fault(struct vm_fault *vmf)
 		 * Userspace is now writing through an untracked VMA, abandon
 		 * all hope that the hardware is able to track future writes.
 		 */
-		obj->frontbuffer_ggtt_origin = ORIGIN_CPU;
 
 		vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, flags);
 		if (IS_ERR(vma) && !view.type) {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 0a105bc29494..8b27f193d5f9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -45,16 +45,6 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj)
 	return kmem_cache_free(global.slab_objects, obj);
 }
 
-static void
-frontbuffer_retire(struct i915_active_request *active,
-		   struct i915_request *request)
-{
-	struct drm_i915_gem_object *obj =
-		container_of(active, typeof(*obj), frontbuffer_write);
-
-	intel_fb_obj_flush(obj, ORIGIN_CS);
-}
-
 void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops)
 {
@@ -73,10 +63,6 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 	reservation_object_init(&obj->__builtin_resv);
 	obj->resv = &obj->__builtin_resv;
 
-	obj->frontbuffer_ggtt_origin = ORIGIN_GTT;
-	i915_active_request_init(&obj->frontbuffer_write,
-				 NULL, frontbuffer_retire);
-
 	obj->mm.madv = I915_MADV_WILLNEED;
 	INIT_RADIX_TREE(&obj->mm.get_page.radix, GFP_KERNEL | __GFP_NOWARN);
 	mutex_init(&obj->mm.get_page.lock);
@@ -191,7 +177,6 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 
 		GEM_BUG_ON(atomic_read(&obj->bind_count));
 		GEM_BUG_ON(obj->userfault_count);
-		GEM_BUG_ON(atomic_read(&obj->frontbuffer_bits));
 		GEM_BUG_ON(!list_empty(&obj->lut_list));
 
 		if (obj->ops->release)
@@ -295,6 +280,8 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
 {
 	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
 
+	GEM_BUG_ON(i915_gem_object_is_framebuffer(obj));
+
 	/*
 	 * Before we free the object, make sure any pure RCU-only
 	 * read-side critical sections are complete, e.g.
@@ -305,13 +292,6 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
 	call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
 }
 
-static inline enum fb_op_origin
-fb_write_origin(struct drm_i915_gem_object *obj, unsigned int domain)
-{
-	return (domain == I915_GEM_DOMAIN_GTT ?
-		obj->frontbuffer_ggtt_origin : ORIGIN_CPU);
-}
-
 static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj)
 {
 	return !(obj->cache_level == I915_CACHE_NONE ||
@@ -333,9 +313,7 @@ i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj,
 	switch (obj->write_domain) {
 	case I915_GEM_DOMAIN_GTT:
 		i915_gem_flush_ggtt_writes(dev_priv);
-
-		intel_fb_obj_flush(obj,
-				   fb_write_origin(obj, I915_GEM_DOMAIN_GTT));
+		intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
 
 		for_each_ggtt_vma(vma, obj) {
 			if (vma->iomap)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 454bfb498001..67d70d144bd9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -161,7 +161,7 @@ i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
 static inline bool
 i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
 {
-	return READ_ONCE(obj->framebuffer_references);
+	return READ_ONCE(obj->frontbuffer);
 }
 
 static inline unsigned int
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index c299fed2c6b1..21bfb7bd0f57 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -15,6 +15,7 @@
 #include "i915_selftest.h"
 
 struct drm_i915_gem_object;
+struct intel_frontbuffer;
 
 /*
  * struct i915_lut_handle tracks the fast lookups from handle to vma used
@@ -144,9 +145,7 @@ struct drm_i915_gem_object {
 	 */
 	u16 write_domain;
 
-	atomic_t frontbuffer_bits;
-	unsigned int frontbuffer_ggtt_origin; /* write once */
-	struct i915_active_request frontbuffer_write;
+	struct intel_frontbuffer *frontbuffer;
 
 	/** Current tiling stride for the object, if it's tiled. */
 	unsigned int tiling_and_stride;
@@ -239,9 +238,6 @@ struct drm_i915_gem_object {
 	 */
 	struct reservation_object *resv;
 
-	/** References from framebuffers, locks out tiling changes. */
-	unsigned int framebuffer_references;
-
 	/** Record of address bit 17 of each page at last unbind. */
 	unsigned long *bit_17;
 
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 2c8580f375fb..06b3147d81f7 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -136,7 +136,6 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
 	struct intel_engine_cs *engine;
 	struct i915_vma *vma;
-	unsigned int frontbuffer_bits;
 	int pin_count = 0;
 
 	seq_printf(m, "%pK: %c%c%c%c %8zdKiB %02x %02x %s%s%s",
@@ -226,10 +225,6 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 	engine = i915_gem_object_last_write_engine(obj);
 	if (engine)
 		seq_printf(m, " (%s)", engine->name);
-
-	frontbuffer_bits = atomic_read(&obj->frontbuffer_bits);
-	if (frontbuffer_bits)
-		seq_printf(m, " (frontbuffer: 0x%03x)", frontbuffer_bits);
 }
 
 struct file_stats {
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 5969c83d3ba8..3af540e466d2 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2539,10 +2539,6 @@ int i915_gem_mmap_gtt(struct drm_file *file_priv, struct drm_device *dev,
 		      u32 handle, u64 *offset);
 int i915_gem_mmap_gtt_version(void);
 
-void i915_gem_track_fb(struct drm_i915_gem_object *old,
-		       struct drm_i915_gem_object *new,
-		       unsigned frontbuffer_bits);
-
 int __must_check i915_gem_set_global_seqno(struct drm_device *dev, u32 seqno);
 
 static inline bool __i915_wedged(struct i915_gpu_error *error)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 190ad54fb072..b1e4db25f94e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -134,17 +134,19 @@ i915_gem_phys_pwrite(struct drm_i915_gem_object *obj,
 	void *vaddr = obj->phys_handle->vaddr + args->offset;
 	char __user *user_data = u64_to_user_ptr(args->data_ptr);
 
-	/* We manually control the domain here and pretend that it
+	/*
+	 * We manually control the domain here and pretend that it
 	 * remains coherent i.e. in the GTT domain, like shmem_pwrite.
 	 */
-	intel_fb_obj_invalidate(obj, ORIGIN_CPU);
+	intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU);
+
 	if (copy_from_user(vaddr, user_data, args->size))
 		return -EFAULT;
 
 	drm_clflush_virt_range(vaddr, args->size);
 	i915_gem_chipset_flush(to_i915(obj->base.dev));
 
-	intel_fb_obj_flush(obj, ORIGIN_CPU);
+	intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
 	return 0;
 }
 
@@ -632,7 +634,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
 		goto out_unpin;
 	}
 
-	intel_fb_obj_invalidate(obj, ORIGIN_CPU);
+	intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU);
 
 	user_data = u64_to_user_ptr(args->data_ptr);
 	offset = args->offset;
@@ -673,7 +675,7 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
 		user_data += page_length;
 		offset += page_length;
 	}
-	intel_fb_obj_flush(obj, ORIGIN_CPU);
+	intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
 
 	i915_gem_object_unlock_fence(obj, fence);
 out_unpin:
@@ -766,7 +768,7 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 		offset = 0;
 	}
 
-	intel_fb_obj_flush(obj, ORIGIN_CPU);
+	intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
 	i915_gem_object_unlock_fence(obj, fence);
 
 	return ret;
@@ -1872,39 +1874,6 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 	return ret;
 }
 
-/**
- * i915_gem_track_fb - update frontbuffer tracking
- * @old: current GEM buffer for the frontbuffer slots
- * @new: new GEM buffer for the frontbuffer slots
- * @frontbuffer_bits: bitmask of frontbuffer slots
- *
- * This updates the frontbuffer tracking bits @frontbuffer_bits by clearing them
- * from @old and setting them in @new. Both @old and @new can be NULL.
- */
-void i915_gem_track_fb(struct drm_i915_gem_object *old,
-		       struct drm_i915_gem_object *new,
-		       unsigned frontbuffer_bits)
-{
-	/* Control of individual bits within the mask are guarded by
-	 * the owning plane->mutex, i.e. we can never see concurrent
-	 * manipulation of individual bits. But since the bitfield as a whole
-	 * is updated using RMW, we need to use atomics in order to update
-	 * the bits.
-	 */
-	BUILD_BUG_ON(INTEL_FRONTBUFFER_BITS_PER_PIPE * I915_MAX_PIPES >
-		     BITS_PER_TYPE(atomic_t));
-
-	if (old) {
-		WARN_ON(!(atomic_read(&old->frontbuffer_bits) & frontbuffer_bits));
-		atomic_andnot(frontbuffer_bits, &old->frontbuffer_bits);
-	}
-
-	if (new) {
-		WARN_ON(atomic_read(&new->frontbuffer_bits) & frontbuffer_bits);
-		atomic_or(frontbuffer_bits, &new->frontbuffer_bits);
-	}
-}
-
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_gem_device.c"
 #include "selftests/i915_gem.c"
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index b888ddab3c8b..9722269410b7 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -919,8 +919,10 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 	if (flags & EXEC_OBJECT_WRITE) {
 		obj->write_domain = I915_GEM_DOMAIN_RENDER;
 
-		if (intel_fb_obj_invalidate(obj, ORIGIN_CS))
-			__i915_active_request_set(&obj->frontbuffer_write, rq);
+		if (intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CS))
+			i915_active_ref(&obj->frontbuffer->write,
+					rq->fence.context,
+					rq);
 
 		obj->read_domains = 0;
 	}
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 1d58f7ec5d84..31344905642d 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -69,6 +69,7 @@ enum intel_output_type {
 
 struct intel_framebuffer {
 	struct drm_framebuffer base;
+	struct intel_frontbuffer *frontbuffer;
 	struct intel_rotation_info rot_info;
 
 	/* for each plane in the normal GTT view */
-- 
2.20.1


* [PATCH 22/26] drm/i915: Coordinate i915_active with its own mutex
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (19 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 21/26] drm/i915: Extract intel_frontbuffer active tracking Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 23/26] drm/i915: Rename intel_wakeref_[is]_active Chris Wilson
                   ` (8 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Forgo the struct_mutex serialisation for i915_active, and interpose its
own mutex handling for active/retire.

This is a multi-layered sleight-of-hand. First, we had to ensure that no
active/retire callbacks accidentally inverted the mutex ordering rules,
nor assumed that they were themselves serialised by struct_mutex. More
challenging, though, are the rules for updating elements of the active
rbtree. Instead of the whole i915_active being serialised by
struct_mutex, allocations/rotations of the tree are serialised by the
i915_active.mutex and individual nodes are serialised by the caller
using the i915_timeline.mutex (we need to use nested spinlocks to
interact with the dma_fence callback lists).

The pain point here is that instead of a single mutex around execbuf, we
now have to take a mutex for each active tracker (one per vma, context,
etc.) and a couple of spinlocks for each fence update. The improvement in
fine-grained locking, allowing for multiple concurrent clients
(eventually!), should be worth it in typical loads.
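
The intended layering, roughly (a sketch of the rules above; the field
names are illustrative rather than the literal implementation):

	/* per-tracker mutex: guards allocation/rotation of the rbtree */
	mutex_lock(&ref->mutex);

	/* per-timeline mutex: the caller serialises updates to each node */
	lockdep_assert_held(&tl->mutex);

	/* nested spinlocks while hooking into the dma_fence cb_list */
	spin_lock_irqsave(fence->lock, flags);
	spin_lock_nested(prev->lock, SINGLE_DEPTH_NESTING);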

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 .../gpu/drm/i915/display/intel_frontbuffer.c  |   2 +-
 drivers/gpu/drm/i915/display/intel_overlay.c  |   6 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  12 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |   2 +-
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   2 +
 drivers/gpu/drm/i915/gem/i915_gem_pm.c        |   9 +-
 drivers/gpu/drm/i915/gt/intel_context.c       |   2 +-
 drivers/gpu/drm/i915/gt/intel_reset.c         |  10 +-
 drivers/gpu/drm/i915/gt/selftest_lrc.c        |  10 +-
 drivers/gpu/drm/i915/gvt/scheduler.c          |   3 -
 drivers/gpu/drm/i915/i915_active.c            | 153 +++++----
 drivers/gpu/drm/i915/i915_active.h            | 305 ++++--------------
 drivers/gpu/drm/i915/i915_active_types.h      |  17 +-
 drivers/gpu/drm/i915/i915_gem.c               |  42 ++-
 drivers/gpu/drm/i915/i915_gem_gtt.c           |   2 +-
 drivers/gpu/drm/i915/i915_request.c           |  58 +---
 drivers/gpu/drm/i915/i915_request.h           |   1 -
 drivers/gpu/drm/i915/i915_timeline.c          |   9 +-
 drivers/gpu/drm/i915/i915_timeline_types.h    |   2 +-
 drivers/gpu/drm/i915/i915_vma.c               |  36 +--
 drivers/gpu/drm/i915/selftests/i915_active.c  |  35 +-
 .../gpu/drm/i915/selftests/mock_timeline.c    |   2 +-
 22 files changed, 234 insertions(+), 486 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_frontbuffer.c b/drivers/gpu/drm/i915/display/intel_frontbuffer.c
index 7128148155dc..e01509c6877b 100644
--- a/drivers/gpu/drm/i915/display/intel_frontbuffer.c
+++ b/drivers/gpu/drm/i915/display/intel_frontbuffer.c
@@ -248,7 +248,7 @@ intel_frontbuffer_get(struct drm_i915_gem_object *obj)
 	front->obj = obj;
 	kref_init(&front->ref);
 	atomic_set(&front->bits, 0);
-	i915_active_init(i915, &front->write,
+	i915_active_init(&front->write,
 			 frontbuffer_active,
 			 i915_active_may_sleep(frontbuffer_retire));
 
diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
index f0743af79fcb..d50a0196b166 100644
--- a/drivers/gpu/drm/i915/display/intel_overlay.c
+++ b/drivers/gpu/drm/i915/display/intel_overlay.c
@@ -230,7 +230,8 @@ alloc_request(struct intel_overlay *overlay, void (*fn)(struct intel_overlay *))
 	if (IS_ERR(rq))
 		return rq;
 
-	err = i915_active_ref(&overlay->last_flip, rq->fence.context, rq);
+	err = i915_active_ref(&overlay->last_flip,
+			      rq->fence.context, &rq->fence);
 	if (err) {
 		i915_request_add(rq);
 		return ERR_PTR(err);
@@ -1366,8 +1367,7 @@ void intel_overlay_setup(struct drm_i915_private *dev_priv)
 	overlay->contrast = 75;
 	overlay->saturation = 146;
 
-	i915_active_init(dev_priv,
-			 &overlay->last_flip,
+	i915_active_init(&overlay->last_flip,
 			 NULL, intel_overlay_last_flip_retire);
 
 	ret = get_registers(overlay, OVERLAY_NEEDS_PHYSICAL(dev_priv));
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 55b8f0e04846..1c8869b8ec20 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -911,20 +911,18 @@ static int context_barrier_task(struct i915_gem_context *ctx,
 				void (*task)(void *data),
 				void *data)
 {
-	struct drm_i915_private *i915 = ctx->i915;
 	struct context_barrier_task *cb;
 	struct i915_gem_engines_iter it;
 	struct intel_context *ce;
 	int err = 0;
 
-	lockdep_assert_held(&i915->drm.struct_mutex);
 	GEM_BUG_ON(!task);
 
 	cb = kmalloc(sizeof(*cb), GFP_KERNEL);
 	if (!cb)
 		return -ENOMEM;
 
-	i915_active_init(i915, &cb->base, NULL, cb_retire);
+	i915_active_init(&cb->base, NULL, cb_retire);
 	err = i915_active_acquire(&cb->base);
 	if (err) {
 		kfree(cb);
@@ -956,7 +954,9 @@ static int context_barrier_task(struct i915_gem_context *ctx,
 		if (emit)
 			err = emit(rq, data);
 		if (err == 0)
-			err = i915_active_ref(&cb->base, rq->fence.context, rq);
+			err = i915_active_ref(&cb->base,
+					      rq->fence.context,
+					      &rq->fence);
 
 		i915_request_add(rq);
 		if (err)
@@ -1193,7 +1193,7 @@ gen8_modify_rpcs(struct intel_context *ce, struct intel_sseu sseu)
 		return PTR_ERR(rq);
 
 	/* Queue this switch after all other activity by this context. */
-	ret = i915_active_request_set(&ce->ring->timeline->last_request, rq);
+	ret = i915_active_fence_set(&ce->ring->timeline->last_request, rq);
 	if (ret)
 		goto out_add;
 
@@ -1205,7 +1205,7 @@ gen8_modify_rpcs(struct intel_context *ce, struct intel_sseu sseu)
 	 * words transfer the pinned ce object to tracked active request.
 	 */
 	GEM_BUG_ON(i915_active_is_idle(&ce->active));
-	ret = i915_active_ref(&ce->active, rq->fence.context, rq);
+	ret = i915_active_ref(&ce->active, rq->fence.context, &rq->fence);
 	if (ret)
 		goto out_add;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 5fae0e50aad0..e847a8079705 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1317,7 +1317,7 @@ relocate_entry(struct i915_vma *vma,
 
 	if (!eb->reloc_cache.vaddr &&
 	    (DBG_FORCE_RELOC == FORCE_GPU_RELOC ||
-	     !reservation_object_test_signaled_rcu(vma->resv, true))) {
+	     i915_vma_is_active(vma))) {
 		const unsigned int gen = eb->reloc_cache.gen;
 		unsigned int len;
 		u32 *batch;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 21bfb7bd0f57..e87fca4d8194 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -11,6 +11,8 @@
 
 #include <drm/drm_gem.h>
 
+#include <uapi/drm/i915_drm.h>
+
 #include "i915_active.h"
 #include "i915_selftest.h"
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 05011d4a3b88..35b73c347887 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -15,14 +15,11 @@ static void call_idle_barriers(struct intel_engine_cs *engine)
 	struct llist_node *node, *next;
 
 	llist_for_each_safe(node, next, llist_del_all(&engine->barrier_tasks)) {
-		struct i915_active_request *active =
+		struct dma_fence_cb *cb =
 			container_of((struct list_head *)node,
-				     typeof(*active), link);
+				     typeof(*cb), node);
 
-		INIT_LIST_HEAD(&active->link);
-		RCU_INIT_POINTER(active->request, NULL);
-
-		active->retire(active, NULL);
+		cb->func(NULL, cb);
 	}
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 70d3b2568227..d108ca8f2832 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -184,7 +184,7 @@ intel_context_init(struct intel_context *ce,
 
 	mutex_init(&ce->pin_mutex);
 
-	i915_active_init(ctx->i915, &ce->active,
+	i915_active_init(&ce->active,
 			 __intel_context_active, __intel_context_retire);
 }
 
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index 4c478b38e420..d204aadf5d6f 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -882,10 +882,10 @@ static bool __i915_gem_unset_wedged(struct drm_i915_private *i915)
 	 */
 	mutex_lock(&i915->gt.timelines.mutex);
 	list_for_each_entry(tl, &i915->gt.timelines.active_list, link) {
-		struct i915_request *rq;
+		struct dma_fence *fence;
 
-		rq = i915_active_request_get_unlocked(&tl->last_request);
-		if (!rq)
+		fence = i915_active_fence_get(&tl->last_request);
+		if (!fence)
 			continue;
 
 		/*
@@ -895,8 +895,8 @@ static bool __i915_gem_unset_wedged(struct drm_i915_private *i915)
 		 * (I915_FENCE_TIMEOUT) so this wait should not be unbounded
 		 * in the worst case.
 		 */
-		dma_fence_default_wait(&rq->fence, false, MAX_SCHEDULE_TIMEOUT);
-		i915_request_put(rq);
+		dma_fence_default_wait(fence, false, MAX_SCHEDULE_TIMEOUT);
+		dma_fence_put(fence);
 	}
 	mutex_unlock(&i915->gt.timelines.mutex);
 
diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 0c97f953e908..a391fd52adbc 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -854,9 +854,13 @@ static struct i915_request *dummy_request(struct intel_engine_cs *engine)
 	if (!rq)
 		return NULL;
 
-	INIT_LIST_HEAD(&rq->active_list);
 	rq->engine = engine;
 
+	spin_lock_init(&rq->lock);
+	INIT_LIST_HEAD(&rq->fence.cb_list);
+	rq->fence.lock = &rq->lock;
+	rq->fence.ops = &i915_fence_ops;
+
 	i915_sched_node_init(&rq->sched);
 
 	/* mark this request as permanently incomplete */
@@ -945,8 +949,8 @@ static int live_suppress_wait_preempt(void *arg)
 				}
 
 				/* Disable NEWCLIENT promotion */
-				__i915_active_request_set(&rq[i]->timeline->last_request,
-							  dummy);
+				__i915_active_fence_set(&rq[i]->timeline->last_request,
+							&dummy->fence);
 				i915_request_add(rq[i]);
 			}
 
diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index 2144fb46d0e1..f3cec07ab5ab 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -393,11 +393,8 @@ intel_gvt_workload_req_alloc(struct intel_vgpu_workload *workload)
 {
 	struct intel_vgpu *vgpu = workload->vgpu;
 	struct intel_vgpu_submission *s = &vgpu->submission;
-	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
 	struct i915_request *rq;
 
-	lockdep_assert_held(&dev_priv->drm.struct_mutex);
-
 	if (workload->req)
 		return 0;
 
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 710805605bb3..cf31ef63713d 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -12,8 +12,6 @@
 #include "i915_active.h"
 #include "i915_globals.h"
 
-#define BKL(ref) (&(ref)->i915->drm.struct_mutex)
-
 /*
  * Active refs memory management
  *
@@ -27,7 +25,7 @@ static struct i915_global_active {
 } global;
 
 struct active_node {
-	struct i915_active_request base;
+	struct i915_active_fence base;
 	struct i915_active *ref;
 	struct rb_node node;
 	u64 timeline;
@@ -122,7 +120,7 @@ __active_retire(struct i915_active *ref)
 	ref->retire(ref);
 
 	rbtree_postorder_for_each_entry_safe(it, n, &root, node) {
-		GEM_BUG_ON(i915_active_request_isset(&it->base));
+		GEM_BUG_ON(i915_active_fence_isset(&it->base));
 		kmem_cache_free(global.slab_cache, it);
 	}
 }
@@ -158,12 +156,13 @@ active_retire(struct i915_active *ref)
 }
 
 static void
-node_retire(struct i915_active_request *base, struct i915_request *rq)
+node_retire(struct dma_fence *fence, struct dma_fence_cb *cb)
 {
-	active_retire(container_of(base, struct active_node, base)->ref);
+	i915_active_fence_cb(fence, cb);
+	active_retire(container_of(cb, struct active_node, base.cb)->ref);
 }
 
-static struct i915_active_request *
+static struct i915_active_fence *
 active_instance(struct i915_active *ref, u64 idx)
 {
 	struct active_node *node, *prealloc;
@@ -206,7 +205,7 @@ active_instance(struct i915_active *ref, u64 idx)
 	}
 
 	node = prealloc;
-	i915_active_request_init(&node->base, NULL, node_retire);
+	__i915_active_fence_init(&node->base, NULL, node_retire);
 	node->ref = ref;
 	node->timeline = idx;
 
@@ -220,8 +219,7 @@ active_instance(struct i915_active *ref, u64 idx)
 	return &node->base;
 }
 
-void __i915_active_init(struct drm_i915_private *i915,
-			struct i915_active *ref,
+void __i915_active_init(struct i915_active *ref,
 			int (*active)(struct i915_active *ref),
 			void (*retire)(struct i915_active *ref),
 			struct lock_class_key *key)
@@ -230,8 +228,6 @@ void __i915_active_init(struct drm_i915_private *i915,
 
 	debug_active_init(ref);
 
-	ref->i915 = i915;
-
 	ref->flags = 0;
 	ref->active = active;
 	ref->retire = ptr_unpack_bits(retire, &bits, 2);
@@ -248,9 +244,9 @@ void __i915_active_init(struct drm_i915_private *i915,
 
 int i915_active_ref(struct i915_active *ref,
 		    u64 timeline,
-		    struct i915_request *rq)
+		    struct dma_fence *fence)
 {
-	struct i915_active_request *active;
+	struct i915_active_fence *active;
 	int err;
 
 	/* Prevent reaping in case we malloc/wait while building the tree */
@@ -264,9 +260,9 @@ int i915_active_ref(struct i915_active *ref,
 		goto out;
 	}
 
-	if (!i915_active_request_isset(active))
+	GEM_BUG_ON(!atomic_read(&ref->count));
+	if (!__i915_active_fence_set(active, fence))
 		atomic_inc(&ref->count);
-	__i915_active_request_set(active, rq);
 
 out:
 	i915_active_release(ref);
@@ -322,7 +318,14 @@ int i915_active_wait(struct i915_active *ref)
 	}
 
 	rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
-		err = i915_active_request_retire(&it->base, BKL(ref));
+		struct dma_fence *fence;
+
+		fence = i915_active_fence_get(&it->base);
+		if (!fence)
+			continue;
+
+		err = dma_fence_wait(fence, true);
+		dma_fence_put(fence);
 		if (err)
 			break;
 	}
@@ -338,40 +341,6 @@ int i915_active_wait(struct i915_active *ref)
 	return 0;
 }
 
-int i915_request_await_active_request(struct i915_request *rq,
-				      struct i915_active_request *active)
-{
-	struct i915_request *barrier =
-		i915_active_request_raw(active, &rq->i915->drm.struct_mutex);
-
-	return barrier ? i915_request_await_dma_fence(rq, &barrier->fence) : 0;
-}
-
-int i915_request_await_active(struct i915_request *rq, struct i915_active *ref)
-{
-	struct active_node *it, *n;
-	int err;
-
-	if (RB_EMPTY_ROOT(&ref->tree))
-		return 0;
-
-	/* await allocates and so we need to avoid hitting the shrinker */
-	err = i915_active_acquire(ref);
-	if (err)
-		return err;
-
-	mutex_lock(&ref->mutex);
-	rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
-		err = i915_request_await_active_request(rq, &it->base);
-		if (err)
-			break;
-	}
-	mutex_unlock(&ref->mutex);
-
-	i915_active_release(ref);
-	return err;
-}
-
 #if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
 void i915_active_fini(struct i915_active *ref)
 {
@@ -402,14 +371,13 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 			goto unwind;
 		}
 
-		i915_active_request_init(&node->base,
-					 (void *)engine, node_retire);
+		__i915_active_fence_init(&node->base, engine, node_retire);
 		node->timeline = kctx->ring->timeline->fence_context;
 		node->ref = ref;
 		atomic_inc(&ref->count);
 
 		intel_engine_pm_get(engine);
-		llist_add((struct llist_node *)&node->base.link,
+		llist_add((struct llist_node *)&node->base.cb.node,
 			  &ref->barriers);
 	}
 
@@ -420,8 +388,8 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
 		struct active_node *node;
 
 		node = container_of((struct list_head *)pos,
-				    typeof(*node), base.link);
-		engine = (void *)rcu_access_pointer(node->base.request);
+				    typeof(*node), base.cb.node);
+		engine = (void *)rcu_access_pointer(node->base.fence);
 
 		intel_engine_pm_put(engine);
 		kfree(node);
@@ -442,10 +410,10 @@ void i915_active_acquire_barrier(struct i915_active *ref)
 		struct rb_node **p, *parent;
 
 		node = container_of((struct list_head *)pos,
-				    typeof(*node), base.link);
+				    typeof(*node), base.cb.node);
 
-		engine = (void *)rcu_access_pointer(node->base.request);
-		RCU_INIT_POINTER(node->base.request, ERR_PTR(-EAGAIN));
+		engine = (void *)rcu_access_pointer(node->base.fence);
+		RCU_INIT_POINTER(node->base.fence, ERR_PTR(-EAGAIN));
 
 		parent = NULL;
 		p = &ref->tree.rb_node;
@@ -461,7 +429,7 @@ void i915_active_acquire_barrier(struct i915_active *ref)
 		rb_link_node(&node->node, parent, p);
 		rb_insert_color(&node->node, &ref->tree);
 
-		llist_add((struct llist_node *)&node->base.link,
+		llist_add((struct llist_node *)&node->base.cb.node,
 			  &engine->barrier_tasks);
 		intel_engine_pm_put(engine);
 	}
@@ -472,29 +440,72 @@ void i915_request_add_barriers(struct i915_request *rq)
 {
 	struct intel_engine_cs *engine = rq->engine;
 	struct llist_node *node, *next;
+	unsigned long flags;
+
+	GEM_BUG_ON(intel_engine_is_virtual(engine));
+	node = llist_del_all(&engine->barrier_tasks);
+	if (!node)
+		return;
 
-	llist_for_each_safe(node, next, llist_del_all(&engine->barrier_tasks))
-		list_add_tail((struct list_head *)node, &rq->active_list);
+	spin_lock_irqsave(&rq->lock, flags);
+	llist_for_each_safe(node, next, node)
+		list_add_tail((struct list_head *)node, &rq->fence.cb_list);
+	spin_unlock_irqrestore(&rq->lock, flags);
 }
 
-int i915_active_request_set(struct i915_active_request *active,
-			    struct i915_request *rq)
+struct dma_fence *
+__i915_active_fence_set(struct i915_active_fence *active,
+			struct dma_fence *fence)
 {
+	struct dma_fence *prev;
+	unsigned long flags;
+
+	/* NB: updates must be serialised by an outer timeline mutex */
+	spin_lock_irqsave(fence->lock, flags);
+	GEM_BUG_ON(test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags));
+
+	prev = rcu_dereference_protected(active->fence, 1 /* timeline mutex */);
+	if (prev) {
+		spin_lock_nested(prev->lock, SINGLE_DEPTH_NESTING);
+		__list_del_entry(&active->cb.node);
+		spin_unlock(prev->lock);
+		prev = rcu_access_pointer(active->fence);
+	}
+
+	rcu_assign_pointer(active->fence, fence);
+	list_add_tail(&active->cb.node, &fence->cb_list);
+
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return prev;
+}
+
+int i915_active_fence_set(struct i915_active_fence *active,
+			  struct i915_request *rq)
+{
+	struct dma_fence *fence;
 	int err;
 
 	/* Must maintain ordering wrt previous active requests */
-	err = i915_request_await_active_request(rq, active);
-	if (err)
-		return err;
+	rcu_read_lock();
+	fence = __i915_active_fence_set(active, &rq->fence);
+	if (fence)
+		fence = dma_fence_get_rcu(fence);
+	rcu_read_unlock();
+
+	if (fence) {
+		err = i915_request_await_dma_fence(rq, fence);
+		dma_fence_put(fence);
+		if (err)
+			return err;
+	}
 
-	__i915_active_request_set(active, rq);
 	return 0;
 }
 
-void i915_active_retire_noop(struct i915_active_request *active,
-			     struct i915_request *request)
+void i915_active_noop(struct dma_fence *fence, struct dma_fence_cb *cb)
 {
-	/* Space left intentionally blank */
+	i915_active_fence_cb(fence, cb);
 }
 
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
index 911a8338007a..fdf87bad76d5 100644
--- a/drivers/gpu/drm/i915/i915_active.h
+++ b/drivers/gpu/drm/i915/i915_active.h
@@ -10,7 +10,9 @@
 #include <linux/lockdep.h>
 
 #include "i915_active_types.h"
-#include "i915_request.h"
+
+struct i915_request;
+struct intel_engine_cs;
 
 /*
  * We treat requests as fences. This is not be to confused with our
@@ -28,300 +30,103 @@
  * write access so that we can perform concurrent read operations between
  * the CPU and GPU engines, as well as waiting for all rendering to
  * complete, or waiting for the last GPU user of a "fence register". The
- * object then embeds a #i915_active_request to track the most recent (in
+ * object then embeds a #i915_active_fence to track the most recent (in
  * retirement order) request relevant for the desired mode of access.
- * The #i915_active_request is updated with i915_active_request_set() to
+ * The #i915_active_fence is updated with i915_active_fence_set() to
  * track the most recent fence request, typically this is done as part of
  * i915_vma_move_to_active().
  *
- * When the #i915_active_request completes (is retired), it will
+ * When the #i915_active_fence completes (is retired), it will
  * signal its completion to the owner through a callback as well as mark
- * itself as idle (i915_active_request.request == NULL). The owner
+ * itself as idle (i915_active_fence.request == NULL). The owner
  * can then perform any action, such as delayed freeing of an active
  * resource including itself.
  */
 
-void i915_active_retire_noop(struct i915_active_request *active,
-			     struct i915_request *request);
+void i915_active_noop(struct dma_fence *fence, struct dma_fence_cb *cb);
 
 /**
- * i915_active_request_init - prepares the activity tracker for use
+ * __i915_active_fence_init - prepares the activity tracker for use
  * @active - the active tracker
- * @rq - initial request to track, can be NULL
+ * @fence - initial fence to track, can be NULL
  * @func - a callback when then the tracker is retired (becomes idle),
  *         can be NULL
  *
- * i915_active_request_init() prepares the embedded @active struct for use as
- * an activity tracker, that is for tracking the last known active request
- * associated with it. When the last request becomes idle, when it is retired
+ * i915_active_fence_init() prepares the embedded @active struct for use as
+ * an activity tracker, that is for tracking the last known active fence
+ * associated with it. When the last fence becomes idle, when it is retired
  * after completion, the optional callback @func is invoked.
  */
 static inline void
-i915_active_request_init(struct i915_active_request *active,
-			 struct i915_request *rq,
-			 i915_active_retire_fn retire)
+__i915_active_fence_init(struct i915_active_fence *active,
+			 void *fence,
+			 dma_fence_func_t fn)
 {
-	RCU_INIT_POINTER(active->request, rq);
-	INIT_LIST_HEAD(&active->link);
-	active->retire = retire ?: i915_active_retire_noop;
+	RCU_INIT_POINTER(active->fence, fence);
+	active->cb.func = fn ?: i915_active_noop;
 }
 
-#define INIT_ACTIVE_REQUEST(name) i915_active_request_init((name), NULL, NULL)
-
-/**
- * i915_active_request_set - updates the tracker to watch the current request
- * @active - the active tracker
- * @request - the request to watch
- *
- * __i915_active_request_set() watches the given @request for completion. Whilst
- * that @request is busy, the @active reports busy. When that @request is
- * retired, the @active tracker is updated to report idle.
- */
-static inline void
-__i915_active_request_set(struct i915_active_request *active,
-			  struct i915_request *request)
-{
-	list_move(&active->link, &request->active_list);
-	rcu_assign_pointer(active->request, request);
-}
-
-int __must_check
-i915_active_request_set(struct i915_active_request *active,
-			struct i915_request *rq);
-
-/**
- * i915_active_request_raw - return the active request
- * @active - the active tracker
- *
- * i915_active_request_raw() returns the current request being tracked, or NULL.
- * It does not obtain a reference on the request for the caller, so the caller
- * must hold struct_mutex.
- */
-static inline struct i915_request *
-i915_active_request_raw(const struct i915_active_request *active,
-			struct mutex *mutex)
-{
-	return rcu_dereference_protected(active->request,
-					 lockdep_is_held(mutex));
-}
-
-/**
- * i915_active_request_peek - report the active request being monitored
- * @active - the active tracker
- *
- * i915_active_request_peek() returns the current request being tracked if
- * still active, or NULL. It does not obtain a reference on the request
- * for the caller, so the caller must hold struct_mutex.
- */
-static inline struct i915_request *
-i915_active_request_peek(const struct i915_active_request *active,
-			 struct mutex *mutex)
-{
-	struct i915_request *request;
-
-	request = i915_active_request_raw(active, mutex);
-	if (!request || i915_request_completed(request))
-		return NULL;
+#define INIT_ACTIVE_FENCE(A) __i915_active_fence_init(A, NULL, NULL)
 
-	return request;
-}
+struct dma_fence *
+__i915_active_fence_set(struct i915_active_fence *active,
+			struct dma_fence *fence);
 
 /**
- * i915_active_request_get - return a reference to the active request
+ * i915_active_fence_set - updates the tracker to watch the current fence
  * @active - the active tracker
+ * @rq - the request to watch
  *
- * i915_active_request_get() returns a reference to the active request, or NULL
- * if the active tracker is idle. The caller must hold struct_mutex.
+ * i915_active_fence_set() watches the given @rq for completion. While
+ * that @rq is busy, the @active reports busy. When that @rq is signaled
+ * (or else retired) the @active tracker is updated to report idle.
  */
-static inline struct i915_request *
-i915_active_request_get(const struct i915_active_request *active,
-			struct mutex *mutex)
-{
-	return i915_request_get(i915_active_request_peek(active, mutex));
-}
-
-/**
- * __i915_active_request_get_rcu - return a reference to the active request
- * @active - the active tracker
- *
- * __i915_active_request_get() returns a reference to the active request,
- * or NULL if the active tracker is idle. The caller must hold the RCU read
- * lock, but the returned pointer is safe to use outside of RCU.
- */
-static inline struct i915_request *
-__i915_active_request_get_rcu(const struct i915_active_request *active)
-{
-	/*
-	 * Performing a lockless retrieval of the active request is super
-	 * tricky. SLAB_TYPESAFE_BY_RCU merely guarantees that the backing
-	 * slab of request objects will not be freed whilst we hold the
-	 * RCU read lock. It does not guarantee that the request itself
-	 * will not be freed and then *reused*. Viz,
-	 *
-	 * Thread A			Thread B
-	 *
-	 * rq = active.request
-	 *				retire(rq) -> free(rq);
-	 *				(rq is now first on the slab freelist)
-	 *				active.request = NULL
-	 *
-	 *				rq = new submission on a new object
-	 * ref(rq)
-	 *
-	 * To prevent the request from being reused whilst the caller
-	 * uses it, we take a reference like normal. Whilst acquiring
-	 * the reference we check that it is not in a destroyed state
-	 * (refcnt == 0). That prevents the request being reallocated
-	 * whilst the caller holds on to it. To check that the request
-	 * was not reallocated as we acquired the reference we have to
-	 * check that our request remains the active request across
-	 * the lookup, in the same manner as a seqlock. The visibility
-	 * of the pointer versus the reference counting is controlled
-	 * by using RCU barriers (rcu_dereference and rcu_assign_pointer).
-	 *
-	 * In the middle of all that, we inspect whether the request is
-	 * complete. Retiring is lazy so the request may be completed long
-	 * before the active tracker is updated. Querying whether the
-	 * request is complete is far cheaper (as it involves no locked
-	 * instructions setting cachelines to exclusive) than acquiring
-	 * the reference, so we do it first. The RCU read lock ensures the
-	 * pointer dereference is valid, but does not ensure that the
-	 * seqno nor HWS is the right one! However, if the request was
-	 * reallocated, that means the active tracker's request was complete.
-	 * If the new request is also complete, then both are and we can
-	 * just report the active tracker is idle. If the new request is
-	 * incomplete, then we acquire a reference on it and check that
-	 * it remained the active request.
-	 *
-	 * It is then imperative that we do not zero the request on
-	 * reallocation, so that we can chase the dangling pointers!
-	 * See i915_request_alloc().
-	 */
-	do {
-		struct i915_request *request;
-
-		request = rcu_dereference(active->request);
-		if (!request || i915_request_completed(request))
-			return NULL;
-
-		/*
-		 * An especially silly compiler could decide to recompute the
-		 * result of i915_request_completed, more specifically
-		 * re-emit the load for request->fence.seqno. A race would catch
-		 * a later seqno value, which could flip the result from true to
-		 * false. Which means part of the instructions below might not
-		 * be executed, while later on instructions are executed. Due to
-		 * barriers within the refcounting the inconsistency can't reach
-		 * past the call to i915_request_get_rcu, but not executing
-		 * that while still executing i915_request_put() creates
-		 * havoc enough.  Prevent this with a compiler barrier.
-		 */
-		barrier();
-
-		request = i915_request_get_rcu(request);
-
-		/*
-		 * What stops the following rcu_access_pointer() from occurring
-		 * before the above i915_request_get_rcu()? If we were
-		 * to read the value before pausing to get the reference to
-		 * the request, we may not notice a change in the active
-		 * tracker.
-		 *
-		 * The rcu_access_pointer() is a mere compiler barrier, which
-		 * means both the CPU and compiler are free to perform the
-		 * memory read without constraint. The compiler only has to
-		 * ensure that any operations after the rcu_access_pointer()
-		 * occur afterwards in program order. This means the read may
-		 * be performed earlier by an out-of-order CPU, or adventurous
-		 * compiler.
-		 *
-		 * The atomic operation at the heart of
-		 * i915_request_get_rcu(), see dma_fence_get_rcu(), is
-		 * atomic_inc_not_zero() which is only a full memory barrier
-		 * when successful. That is, if i915_request_get_rcu()
-		 * returns the request (and so with the reference counted
-		 * incremented) then the following read for rcu_access_pointer()
-		 * must occur after the atomic operation and so confirm
-		 * that this request is the one currently being tracked.
-		 *
-		 * The corresponding write barrier is part of
-		 * rcu_assign_pointer().
-		 */
-		if (!request || request == rcu_access_pointer(active->request))
-			return rcu_pointer_handoff(request);
-
-		i915_request_put(request);
-	} while (1);
-}
-
+int __must_check
+i915_active_fence_set(struct i915_active_fence *active,
+		      struct i915_request *rq);
 /**
- * i915_active_request_get_unlocked - return a reference to the active request
+ * i915_active_fence_get - return a reference to the active fence
  * @active - the active tracker
  *
- * i915_active_request_get_unlocked() returns a reference to the active request,
+ * i915_active_fence_get() returns a reference to the active fence,
  * or NULL if the active tracker is idle. The reference is obtained under RCU,
  * so no locking is required by the caller.
  *
- * The reference should be freed with i915_request_put().
+ * The reference should be freed with dma_fence_put().
  */
-static inline struct i915_request *
-i915_active_request_get_unlocked(const struct i915_active_request *active)
+static inline struct dma_fence *
+i915_active_fence_get(struct i915_active_fence *active)
 {
-	struct i915_request *request;
+	struct dma_fence *fence;
 
 	rcu_read_lock();
-	request = __i915_active_request_get_rcu(active);
+	fence = dma_fence_get_rcu_safe(&active->fence);
 	rcu_read_unlock();
 
-	return request;
+	return fence;
 }
 
 /**
- * i915_active_request_isset - report whether the active tracker is assigned
+ * i915_active_fence_isset - report whether the active tracker is assigned
  * @active - the active tracker
  *
- * i915_active_request_isset() returns true if the active tracker is currently
- * assigned to a request. Due to the lazy retiring, that request may be idle
+ * i915_active_fence_isset() returns true if the active tracker is currently
+ * assigned to a fence. Due to the lazy retiring, that fence may be idle
  * and this may report stale information.
  */
 static inline bool
-i915_active_request_isset(const struct i915_active_request *active)
+i915_active_fence_isset(const struct i915_active_fence *active)
 {
-	return rcu_access_pointer(active->request);
+	return rcu_access_pointer(active->fence);
 }
 
-/**
- * i915_active_request_retire - waits until the request is retired
- * @active - the active request on which to wait
- *
- * i915_active_request_retire() waits until the request is completed,
- * and then ensures that at least the retirement handler for this
- * @active tracker is called before returning. If the @active
- * tracker is idle, the function returns immediately.
- */
-static inline int __must_check
-i915_active_request_retire(struct i915_active_request *active,
-			   struct mutex *mutex)
+static inline void
+i915_active_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
 {
-	struct i915_request *request;
-	long ret;
-
-	request = i915_active_request_raw(active, mutex);
-	if (!request)
-		return 0;
-
-	ret = i915_request_wait(request,
-				I915_WAIT_INTERRUPTIBLE,
-				MAX_SCHEDULE_TIMEOUT);
-	if (ret < 0)
-		return ret;
-
-	list_del_init(&active->link);
-	RCU_INIT_POINTER(active->request, NULL);
-
-	active->retire(active, request);
+	struct i915_active_fence *active =
+		container_of(cb, typeof(*active), cb);
 
-	return 0;
+	RCU_INIT_POINTER(active->fence, NULL);
 }
 
 /*
@@ -350,31 +155,29 @@ i915_active_request_retire(struct i915_active_request *active,
  * synchronisation.
  */
 
-void __i915_active_init(struct drm_i915_private *i915,
-			struct i915_active *ref,
+void __i915_active_init(struct i915_active *ref,
 			int (*active)(struct i915_active *ref),
 			void (*retire)(struct i915_active *ref),
 			struct lock_class_key *key);
-#define i915_active_init(i915, ref, active, retire) do {		\
+#define i915_active_init(ref, active, retire) do {		\
 	static struct lock_class_key __key;				\
 									\
-	__i915_active_init(i915, ref, active, retire, &__key);		\
+	__i915_active_init(ref, active, retire, &__key);		\
 } while (0)
 
 int i915_active_ref(struct i915_active *ref,
 		    u64 timeline,
-		    struct i915_request *rq);
+		    struct dma_fence *fence);
 
 int i915_active_wait(struct i915_active *ref);
 
 int i915_request_await_active(struct i915_request *rq,
 			      struct i915_active *ref);
-int i915_request_await_active_request(struct i915_request *rq,
-				      struct i915_active_request *active);
+int i915_request_await_active_fence(struct i915_request *rq,
+				    struct i915_active_fence *active);
 
 int i915_active_acquire(struct i915_active *ref);
 void i915_active_release(struct i915_active *ref);
-void __i915_active_release_nested(struct i915_active *ref, int subclass);
 
 static inline bool
 i915_active_is_idle(const struct i915_active *ref)
diff --git a/drivers/gpu/drm/i915/i915_active_types.h b/drivers/gpu/drm/i915/i915_active_types.h
index a3a5ec3e4163..42aaa29210a8 100644
--- a/drivers/gpu/drm/i915/i915_active_types.h
+++ b/drivers/gpu/drm/i915/i915_active_types.h
@@ -8,6 +8,7 @@
 #define _I915_ACTIVE_TYPES_H_
 
 #include <linux/atomic.h>
+#include <linux/dma-fence.h>
 #include <linux/llist.h>
 #include <linux/mutex.h>
 #include <linux/rbtree.h>
@@ -16,17 +17,9 @@
 
 #include "i915_utils.h"
 
-struct drm_i915_private;
-struct i915_active_request;
-struct i915_request;
-
-typedef void (*i915_active_retire_fn)(struct i915_active_request *,
-				      struct i915_request *);
-
-struct i915_active_request {
-	struct i915_request __rcu *request;
-	struct list_head link;
-	i915_active_retire_fn retire;
+struct i915_active_fence {
+	struct dma_fence __rcu *fence;
+	struct dma_fence_cb cb;
 };
 
 struct active_node;
@@ -37,8 +30,6 @@ struct active_node;
 #define i915_active_may_sleep(fn) ptr_pack_bits(&(fn), I915_ACTIVE_MAY_SLEEP, 2)
 
 struct i915_active {
-	struct drm_i915_private *i915;
-
 	struct active_node *cache;
 	struct rb_root tree;
 	struct mutex mutex;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b1e4db25f94e..bac7df724f07 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -954,28 +954,38 @@ wait_for_timelines(struct drm_i915_private *i915,
 
 	mutex_lock(&gt->mutex);
 	list_for_each_entry(tl, &gt->active_list, link) {
-		struct i915_request *rq;
+		struct dma_fence *fence;
 
-		rq = i915_active_request_get_unlocked(&tl->last_request);
-		if (!rq)
+		fence = i915_active_fence_get(&tl->last_request);
+		if (!fence)
 			continue;
 
 		mutex_unlock(&gt->mutex);
 
-		/*
-		 * "Race-to-idle".
-		 *
-		 * Switching to the kernel context is often used a synchronous
-		 * step prior to idling, e.g. in suspend for flushing all
-		 * current operations to memory before sleeping. These we
-		 * want to complete as quickly as possible to avoid prolonged
-		 * stalls, so allow the gpu to boost to maximum clocks.
-		 */
-		if (flags & I915_WAIT_FOR_IDLE_BOOST)
-			gen6_rps_boost(rq);
+		if (!dma_fence_is_i915(fence)) {
+			timeout = dma_fence_wait_timeout(fence,
+							 flags & I915_WAIT_INTERRUPTIBLE,
+							 timeout);
+		} else {
+			struct i915_request *rq = to_request(fence);
+
+			/*
+			 * "Race-to-idle".
+			 *
+			 * Switching to the kernel context is often used as
+			 * a synchronous step prior to idling, e.g. in suspend
+			 * for flushing all current operations to memory before
+			 * sleeping. These we want to complete as quickly as
+			 * possible to avoid prolonged stalls, so allow the gpu
+			 * to boost to maximum clocks.
+			 */
+			if (flags & I915_WAIT_FOR_IDLE_BOOST)
+				gen6_rps_boost(rq);
+
+			timeout = i915_request_wait(rq, flags, timeout);
+		}
 
-		timeout = i915_request_wait(rq, flags, timeout);
-		i915_request_put(rq);
+		dma_fence_put(fence);
 		if (timeout < 0)
 			return timeout;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 8c58e6f0a2b4..eaeff5f8cd16 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2049,7 +2049,7 @@ static struct i915_vma *pd_vma_create(struct gen6_ppgtt *ppgtt, int size)
 	if (!vma)
 		return ERR_PTR(-ENOMEM);
 
-	i915_active_init(i915, &vma->active, NULL, NULL);
+	i915_active_init(&vma->active, NULL, NULL);
 
 	vma->vm = &ggtt->vm;
 	vma->ops = &pd_vma_ops;
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 201bd8d1506f..ef67c58698d8 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -229,9 +229,8 @@ static void free_capture_list(struct i915_request *request)
 
 static bool i915_request_retire(struct i915_request *rq)
 {
-	struct i915_active_request *active, *next;
+	lockdep_assert_held(&rq->timeline->mutex);
 
-	lockdep_assert_held(&rq->i915->drm.struct_mutex);
 	if (!i915_request_completed(rq))
 		return false;
 
@@ -245,35 +244,6 @@ static bool i915_request_retire(struct i915_request *rq)
 
 	advance_ring(rq);
 
-	/*
-	 * Walk through the active list, calling retire on each. This allows
-	 * objects to track their GPU activity and mark themselves as idle
-	 * when their *last* active request is completed (updating state
-	 * tracking lists for eviction, active references for GEM, etc).
-	 *
-	 * As the ->retire() may free the node, we decouple it first and
-	 * pass along the auxiliary information (to avoid dereferencing
-	 * the node after the callback).
-	 */
-	list_for_each_entry_safe(active, next, &rq->active_list, link) {
-		/*
-		 * In microbenchmarks or focusing upon time inside the kernel,
-		 * we may spend an inordinate amount of time simply handling
-		 * the retirement of requests and processing their callbacks.
-		 * Of which, this loop itself is particularly hot due to the
-		 * cache misses when jumping around the list of
-		 * i915_active_request.  So we try to keep this loop as
-		 * streamlined as possible and also prefetch the next
-		 * i915_active_request to try and hide the likely cache miss.
-		 */
-		prefetchw(next);
-
-		INIT_LIST_HEAD(&active->link);
-		RCU_INIT_POINTER(active->request, NULL);
-
-		active->retire(active, rq);
-	}
-
 	local_irq_disable();
 
 	/*
@@ -328,11 +298,9 @@ void i915_request_retire_upto(struct i915_request *rq)
 		  rq->fence.context, rq->fence.seqno,
 		  hwsp_seqno(rq));
 
-	lockdep_assert_held(&rq->i915->drm.struct_mutex);
+	lockdep_assert_held(&rq->timeline->mutex);
 	GEM_BUG_ON(!i915_request_completed(rq));
-
-	if (list_empty(&rq->ring_link))
-		return;
+	GEM_BUG_ON(list_empty(&rq->ring_link));
 
 	do {
 		tmp = list_first_entry(&ring->request_list,
@@ -567,6 +535,7 @@ static void ring_retire_requests(struct intel_ring *ring)
 {
 	struct i915_request *rq, *rn;
 
+	lockdep_assert_held(&ring->timeline->mutex);
 	list_for_each_entry_safe(rq, rn, &ring->request_list, ring_link)
 		if (!i915_request_retire(rq))
 			break;
@@ -687,7 +656,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->waitboost = false;
 	rq->execution_mask = ALL_ENGINES;
 
-	INIT_LIST_HEAD(&rq->active_list);
 	INIT_LIST_HEAD(&rq->execute_cb);
 
 	/*
@@ -726,7 +694,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	ce->ring->emit = rq->head;
 
 	/* Make sure we didn't add ourselves to external state before freeing */
-	GEM_BUG_ON(!list_empty(&rq->active_list));
 	GEM_BUG_ON(!list_empty(&rq->sched.signalers_list));
 	GEM_BUG_ON(!list_empty(&rq->sched.waiters_list));
 
@@ -1115,7 +1082,8 @@ __i915_request_add_to_timeline(struct i915_request *rq)
 	 * precludes optimising to use semaphores serialisation of a single
 	 * timeline across engines.
 	 */
-	prev = rcu_dereference_protected(timeline->last_request.request, 1);
+	prev = to_request(__i915_active_fence_set(&timeline->last_request,
+						  &rq->fence));
 	if (prev && !i915_request_completed(prev)) {
 		if (is_power_of_2(prev->engine->mask | rq->engine->mask))
 			i915_sw_fence_await_sw_fence(&rq->submit,
@@ -1140,7 +1108,6 @@ __i915_request_add_to_timeline(struct i915_request *rq)
 	 * us, the timeline will hold its seqno which is later than ours.
 	 */
 	GEM_BUG_ON(timeline->seqno != rq->fence.seqno);
-	__i915_active_request_set(&timeline->last_request, rq);
 
 	return prev;
 }
@@ -1237,10 +1204,11 @@ struct i915_request *__i915_request_commit(struct i915_request *rq)
 
 void i915_request_add(struct i915_request *rq)
 {
+	struct i915_timeline * const tl = rq->timeline;
 	struct i915_request *prev;
 
-	lockdep_assert_held(&rq->timeline->mutex);
-	lockdep_unpin_lock(&rq->timeline->mutex, rq->cookie);
+	lockdep_assert_held(&tl->mutex);
+	lockdep_unpin_lock(&tl->mutex, rq->cookie);
 
 	trace_i915_request_add(rq);
 
@@ -1263,10 +1231,10 @@ void i915_request_add(struct i915_request *rq)
 	 * work on behalf of others -- but instead we should benefit from
 	 * improved resource management. (Well, that's the theory at least.)
 	 */
-	if (prev && i915_request_completed(prev))
+	if (prev && i915_request_completed(prev) && prev->timeline == tl)
 		i915_request_retire_upto(prev);
 
-	mutex_unlock(&rq->timeline->mutex);
+	mutex_unlock(&tl->mutex);
 }
 
 static unsigned long local_clock_us(unsigned int *cpu)
@@ -1494,7 +1462,11 @@ bool i915_retire_requests(struct drm_i915_private *i915)
 	list_for_each_entry_safe(ring, tmp,
 				 &i915->gt.active_rings, active_link) {
 		intel_ring_get(ring); /* last rq holds reference! */
+		mutex_lock(&ring->timeline->mutex);
+
 		ring_retire_requests(ring);
+
+		mutex_unlock(&ring->timeline->mutex);
 		intel_ring_put(ring);
 	}
 
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index bebc1e9b4a5e..8277cff0df70 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -211,7 +211,6 @@ struct i915_request {
 	 * on the active_list (of their final request).
 	 */
 	struct i915_capture_list *capture_list;
-	struct list_head active_list;
 
 	/** Time at which this request was emitted, in jiffies. */
 	unsigned long emitted_jiffies;
diff --git a/drivers/gpu/drm/i915/i915_timeline.c b/drivers/gpu/drm/i915/i915_timeline.c
index 0f4ca9478ae5..fcab785c9a29 100644
--- a/drivers/gpu/drm/i915/i915_timeline.c
+++ b/drivers/gpu/drm/i915/i915_timeline.c
@@ -180,8 +180,7 @@ cacheline_alloc(struct i915_timeline_hwsp *hwsp, unsigned int cacheline)
 	cl->hwsp = hwsp;
 	cl->vaddr = page_pack_bits(vaddr, cacheline);
 
-	i915_active_init(hwsp_to_i915(hwsp), &cl->active,
-			 __cacheline_active, __cacheline_retire);
+	i915_active_init(&cl->active, __cacheline_active, __cacheline_retire);
 
 	return cl;
 }
@@ -264,7 +263,7 @@ int i915_timeline_init(struct drm_i915_private *i915,
 
 	mutex_init(&timeline->mutex);
 
-	INIT_ACTIVE_REQUEST(&timeline->last_request);
+	INIT_ACTIVE_FENCE(&timeline->last_request);
 	INIT_LIST_HEAD(&timeline->requests);
 
 	i915_syncmap_init(&timeline->sync);
@@ -461,7 +460,7 @@ __i915_timeline_get_seqno(struct i915_timeline *tl,
 	 * all writes into the cacheline from previous requests are complete.
 	 */
 	err = i915_active_ref(&tl->hwsp_cacheline->active,
-			      tl->fence_context, rq);
+			      tl->fence_context, &rq->fence);
 	if (err)
 		goto err_cacheline;
 
@@ -512,7 +511,7 @@ int i915_timeline_get_seqno(struct i915_timeline *tl,
 static int cacheline_ref(struct i915_timeline_cacheline *cl,
 			 struct i915_request *rq)
 {
-	return i915_active_ref(&cl->active, rq->fence.context, rq);
+	return i915_active_ref(&cl->active, rq->fence.context, &rq->fence);
 }
 
 int i915_timeline_read_hwsp(struct i915_request *from,
diff --git a/drivers/gpu/drm/i915/i915_timeline_types.h b/drivers/gpu/drm/i915/i915_timeline_types.h
index fce5cb4f1090..5af6d185d70c 100644
--- a/drivers/gpu/drm/i915/i915_timeline_types.h
+++ b/drivers/gpu/drm/i915/i915_timeline_types.h
@@ -45,7 +45,7 @@ struct i915_timeline {
 	 * the request using i915_active_request_get_request_rcu(), or hold the
 	 * struct_mutex.
 	 */
-	struct i915_active_request last_request;
+	struct i915_active_fence last_request;
 
 	/**
 	 * We track the most recent seqno that we wait on in every context so
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 9722269410b7..594cab86e50c 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -116,8 +116,7 @@ vma_create(struct drm_i915_gem_object *obj,
 	vma->size = obj->base.size;
 	vma->display_alignment = I915_GTT_MIN_ALIGNMENT;
 
-	i915_active_init(vm->i915, &vma->active,
-			 __i915_vma_active, __i915_vma_retire);
+	i915_active_init(&vma->active, __i915_vma_active, __i915_vma_retire);
 
 	INIT_LIST_HEAD(&vma->closed_link);
 
@@ -911,7 +910,7 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 	 * add the active reference first and queue for it to be dropped
 	 * *last*.
 	 */
-	err = i915_active_ref(&vma->active, rq->fence.context, rq);
+	err = i915_active_ref(&vma->active, rq->fence.context, &rq->fence);
 	if (unlikely(err))
 		return err;
 
@@ -922,7 +921,7 @@ int i915_vma_move_to_active(struct i915_vma *vma,
 		if (intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CS))
 			i915_active_ref(&obj->frontbuffer->write,
 					rq->fence.context,
-					rq);
+					&rq->fence);
 
 		obj->read_domains = 0;
 	}
@@ -941,32 +940,11 @@ int i915_vma_unbind(struct i915_vma *vma)
 
 	lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
 
-	/*
-	 * First wait upon any activity as retiring the request may
-	 * have side-effects such as unpinning or even unbinding this vma.
-	 */
-	might_sleep();
-	if (i915_vma_is_active(vma)) {
-		/*
-		 * When a closed VMA is retired, it is unbound - eek.
-		 * In order to prevent it from being recursively closed,
-		 * take a pin on the vma so that the second unbind is
-		 * aborted.
-		 *
-		 * Even more scary is that the retire callback may free
-		 * the object (last active vma). To prevent the explosion
-		 * we defer the actual object free to a worker that can
-		 * only proceed once it acquires the struct_mutex (which
-		 * we currently hold, therefore it cannot free this object
-		 * before we are finished).
-		 */
-		__i915_vma_pin(vma);
-		ret = i915_active_wait(&vma->active);
-		__i915_vma_unpin(vma);
-		if (ret)
-			return ret;
-	}
+	ret = i915_active_wait(&vma->active);
+	if (ret)
+		return ret;
 
+	GEM_BUG_ON(i915_vma_is_active(vma));
 	if (i915_vma_is_pinned(vma)) {
 		vma_print_allocator(vma, "is pinned");
 		return -EBUSY;
diff --git a/drivers/gpu/drm/i915/selftests/i915_active.c b/drivers/gpu/drm/i915/selftests/i915_active.c
index 84fce379c0de..4384ad51d6e4 100644
--- a/drivers/gpu/drm/i915/selftests/i915_active.c
+++ b/drivers/gpu/drm/i915/selftests/i915_active.c
@@ -67,7 +67,7 @@ static struct live_active *__live_alloc(struct drm_i915_private *i915)
 		return NULL;
 
 	kref_init(&active->ref);
-	i915_active_init(i915, &active->base, __live_active, __live_retire);
+	i915_active_init(&active->base, __live_active, __live_retire);
 
 	return active;
 }
@@ -110,7 +110,8 @@ __live_active_setup(struct drm_i915_private *i915)
 						       GFP_KERNEL);
 		if (err >= 0)
 			err = i915_active_ref(&active->base,
-					      rq->fence.context, rq);
+					      rq->fence.context,
+					      &rq->fence);
 		i915_request_add(rq);
 		if (err) {
 			pr_err("Failed to track active ref!\n");
@@ -146,19 +147,13 @@ static int live_active_wait(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
 	struct live_active *active;
-	intel_wakeref_t wakeref;
 	int err = 0;
 
 	/* Check that we get a callback when requests retire upon waiting */
 
-	mutex_lock(&i915->drm.struct_mutex);
-	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
-
 	active = __live_active_setup(i915);
-	if (IS_ERR(active)) {
-		err = PTR_ERR(active);
-		goto err;
-	}
+	if (IS_ERR(active))
+		return PTR_ERR(active);
 
 	i915_active_wait(&active->base);
 	if (!active->retired) {
@@ -168,11 +163,9 @@ static int live_active_wait(void *arg)
 
 	__live_put(active);
 
+	mutex_lock(&i915->drm.struct_mutex);
 	if (igt_flush_test(i915, I915_WAIT_LOCKED))
 		err = -EIO;
-
-err:
-	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 	mutex_unlock(&i915->drm.struct_mutex);
 
 	return err;
@@ -182,23 +175,19 @@ static int live_active_retire(void *arg)
 {
 	struct drm_i915_private *i915 = arg;
 	struct live_active *active;
-	intel_wakeref_t wakeref;
 	int err = 0;
 
 	/* Check that we get a callback when requests are indirectly retired */
 
-	mutex_lock(&i915->drm.struct_mutex);
-	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
-
 	active = __live_active_setup(i915);
-	if (IS_ERR(active)) {
-		err = PTR_ERR(active);
-		goto err;
-	}
+	if (IS_ERR(active))
+		return PTR_ERR(active);
 
 	/* waits for & retires all requests */
+	mutex_lock(&i915->drm.struct_mutex);
 	if (igt_flush_test(i915, I915_WAIT_LOCKED))
 		err = -EIO;
+	mutex_unlock(&i915->drm.struct_mutex);
 
 	if (!active->retired) {
 		pr_err("i915_active not retired after flushing!\n");
@@ -207,10 +196,6 @@ static int live_active_retire(void *arg)
 
 	__live_put(active);
 
-err:
-	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
-	mutex_unlock(&i915->drm.struct_mutex);
-
 	return err;
 }
 
diff --git a/drivers/gpu/drm/i915/selftests/mock_timeline.c b/drivers/gpu/drm/i915/selftests/mock_timeline.c
index 65b52be23d42..024de718f66f 100644
--- a/drivers/gpu/drm/i915/selftests/mock_timeline.c
+++ b/drivers/gpu/drm/i915/selftests/mock_timeline.c
@@ -15,7 +15,7 @@ void mock_timeline_init(struct i915_timeline *timeline, u64 context)
 
 	mutex_init(&timeline->mutex);
 
-	INIT_ACTIVE_REQUEST(&timeline->last_request);
+	INIT_ACTIVE_FENCE(&timeline->last_request);
 	INIT_LIST_HEAD(&timeline->requests);
 
 	i915_syncmap_init(&timeline->sync);
-- 
2.20.1


* [PATCH 23/26] drm/i915: Rename intel_wakeref_[is]_active
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (20 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 22/26] drm/i915: Coordinate i915_active with its own mutex Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  8:14   ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 24/26] drm/i915: Teach execbuffer to take the engine wakeref not GT Chris Wilson
                   ` (7 subsequent siblings)
  29 siblings, 1 reply; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Our general rule is to use is/has as the verb for boolean functions, so
rename intel_wakeref_active to intel_wakeref_is_active to make the
question being asked clear.
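
For illustration (a sketch only, mirroring the intel_engine_cs.c hunk
below), a call site now reads as the question it answers:

	if (!intel_wakeref_is_active(&engine->wakeref)) /* was intel_wakeref_active() */
		return true; /* wakeref not held, the engine is parked */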

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_pm.c    | 3 ++-
 drivers/gpu/drm/i915/gt/intel_engine.h    | 6 ++++++
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 2 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c       | 2 +-
 drivers/gpu/drm/i915/gt/intel_reset.c     | 2 +-
 drivers/gpu/drm/i915/intel_wakeref.h      | 4 ++--
 6 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 35b73c347887..8570e2a64737 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -51,7 +51,8 @@ static void idle_work_handler(struct work_struct *work)
 	mutex_lock(&i915->drm.struct_mutex);
 
 	intel_wakeref_lock(&i915->gt.wakeref);
-	park = !intel_wakeref_active(&i915->gt.wakeref) && !work_pending(work);
+	park = (!intel_wakeref_is_active(&i915->gt.wakeref) &&
+		!work_pending(work));
 	intel_wakeref_unlock(&i915->gt.wakeref);
 	if (park)
 		i915_gem_park(i915);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index 9bb6ff76680e..5c3b15835af6 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -17,6 +17,7 @@
 #include "i915_timeline.h"
 #include "intel_engine_types.h"
 #include "intel_gpu_commands.h"
+#include "intel_wakeref.h"
 #include "intel_workarounds.h"
 
 struct drm_printer;
@@ -422,6 +423,11 @@ static inline void intel_engine_reset(struct intel_engine_cs *engine,
 bool intel_engine_is_idle(struct intel_engine_cs *engine);
 bool intel_engines_are_idle(struct drm_i915_private *dev_priv);
 
+static inline bool intel_engine_is_active(const struct intel_engine_cs *engine)
+{
+	return intel_wakeref_is_active(&engine->wakeref);
+}
+
 void intel_engines_reset_default_submission(struct drm_i915_private *i915);
 unsigned int intel_engines_has_context_isolation(struct drm_i915_private *i915);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 65cf08a8b209..65424eed8f99 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -1152,7 +1152,7 @@ bool intel_engine_is_idle(struct intel_engine_cs *engine)
 	if (i915_reset_failed(engine->i915))
 		return true;
 
-	if (!intel_wakeref_active(&engine->wakeref))
+	if (!intel_engine_is_active(engine))
 		return true;
 
 	/* Waiting to drain ELSP? */
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 5dbc43c70496..c77b2819b092 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -663,7 +663,7 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
 	 * that all ELSP are drained i.e. we have processed the CSB,
 	 * before allowing ourselves to idle and calling intel_runtime_pm_put().
 	 */
-	GEM_BUG_ON(!intel_wakeref_active(&engine->wakeref));
+	GEM_BUG_ON(!intel_engine_is_active(engine));
 
 	/*
 	 * ELSQ note: the submit queue is not cleared after being submitted
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index d204aadf5d6f..919cce816f65 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -1081,7 +1081,7 @@ int i915_reset_engine(struct intel_engine_cs *engine, const char *msg)
 	GEM_TRACE("%s flags=%lx\n", engine->name, error->flags);
 	GEM_BUG_ON(!test_bit(I915_RESET_ENGINE + engine->id, &error->flags));
 
-	if (!intel_wakeref_active(&engine->wakeref))
+	if (!intel_engine_is_active(engine))
 		return 0;
 
 	reset_prepare_engine(engine);
diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h
index 9cbb2ebf575b..ccefe731ba9f 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.h
+++ b/drivers/gpu/drm/i915/intel_wakeref.h
@@ -121,13 +121,13 @@ intel_wakeref_unlock(struct intel_wakeref *wf)
 }
 
 /**
- * intel_wakeref_active: Query whether the wakeref is currently held
+ * intel_wakeref_is_active: Query whether the wakeref is currently held
  * @wf: the wakeref
  *
  * Returns: true if the wakeref is currently held.
  */
 static inline bool
-intel_wakeref_active(struct intel_wakeref *wf)
+intel_wakeref_is_active(const struct intel_wakeref *wf)
 {
 	return READ_ONCE(wf->wakeref);
 }
-- 
2.20.1


* [PATCH 24/26] drm/i915: Teach execbuffer to take the engine wakeref not GT
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (21 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 23/26] drm/i915: Rename intel_wakeref_[is]_active Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 25/26] drm/i915: Replace struct_mutex for batch pool serialisation Chris Wilson
                   ` (6 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

In the next patch, we would like to hook into the engine wakeref so that
the batch pool can be freed when the engine idles. The caveat is that we
therefore want to track the engine wakeref more precisely, holding it
(rather than the broader GT wakeref) while we process the ioctl.
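
As a rough sketch of the intended pairing (helper names as used in the
hunks below; error handling and the request construction are elided):

	err = intel_context_timeline_lock(ce);
	if (err)
		return err;

	intel_context_enter(ce);	/* holds the engine wakeref for the ioctl */
	intel_context_timeline_unlock(ce);

	/* ... build and submit the execbuf request ... */

	__intel_context_timeline_lock(ce);
	intel_context_exit(ce);		/* may release the engine wakeref again */
	intel_context_timeline_unlock(ce);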

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 36 ++++++++++++-------
 drivers/gpu/drm/i915/gt/intel_context.h       |  7 ++++
 2 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index e847a8079705..8fb8613a1555 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2142,13 +2142,35 @@ static int eb_pin_context(struct i915_execbuffer *eb, struct intel_context *ce)
 	if (err)
 		return err;
 
+	/*
+	 * Take a local wakeref for preparing to dispatch the execbuf as
+	 * we expect to access the hardware fairly frequently in the
+	 * process. Upon first dispatch, we acquire another prolonged
+	 * wakeref that we hold until the GPU has been idle for at least
+	 * 100ms.
+	 */
+	err = intel_context_timeline_lock(ce);
+	if (err)
+		goto err_unpin;
+
+	intel_context_enter(ce);
+	intel_context_timeline_unlock(ce);
+
 	eb->engine = ce->engine;
 	eb->context = ce;
 	return 0;
+
+err_unpin:
+	intel_context_unpin(ce);
+	return err;
 }
 
 static void eb_unpin_context(struct i915_execbuffer *eb)
 {
+	__intel_context_timeline_lock(eb->context);
+	intel_context_exit(eb->context);
+	intel_context_timeline_unlock(eb->context);
+
 	intel_context_unpin(eb->context);
 }
 
@@ -2429,18 +2451,9 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	if (unlikely(err))
 		goto err_destroy;
 
-	/*
-	 * Take a local wakeref for preparing to dispatch the execbuf as
-	 * we expect to access the hardware fairly frequently in the
-	 * process. Upon first dispatch, we acquire another prolonged
-	 * wakeref that we hold until the GPU has been idle for at least
-	 * 100ms.
-	 */
-	intel_gt_pm_get(eb.i915);
-
 	err = i915_mutex_lock_interruptible(dev);
 	if (err)
-		goto err_rpm;
+		goto err_context;
 
 	err = eb_select_engine(&eb, file, args);
 	if (unlikely(err))
@@ -2605,8 +2618,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	eb_unpin_context(&eb);
 err_unlock:
 	mutex_unlock(&dev->struct_mutex);
-err_rpm:
-	intel_gt_pm_put(eb.i915);
+err_context:
 	i915_gem_context_put(eb.gem_context);
 err_destroy:
 	eb_destroy(&eb);
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index 40cd8320fcc3..065ba4ac4e87 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -126,6 +126,13 @@ static inline void intel_context_put(struct intel_context *ce)
 	kref_put(&ce->ref, ce->ops->destroy);
 }
 
+static inline void
+__intel_context_timeline_lock(struct intel_context *ce)
+	__acquires(&ce->ring->timeline->mutex)
+{
+	mutex_lock(&ce->ring->timeline->mutex);
+}
+
 static inline int __must_check
 intel_context_timeline_lock(struct intel_context *ce)
 	__acquires(&ce->ring->timeline->mutex)
-- 
2.20.1


* [PATCH 25/26] drm/i915: Replace struct_mutex for batch pool serialisation
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (22 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 24/26] drm/i915: Teach execbuffer to take the engine wakeref not GT Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  7:41 ` [PATCH 26/26] drm/i915: Move idle barrier cleanup into engine-pm Chris Wilson
                   ` (5 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Switch to tracking activity via i915_active on individual nodes, only
keeping a list of retired objects in the cache, and reaping the cache
when the engine itself idles.
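
For illustration, a pool user is expected to look roughly like this (API
names as used in the execbuffer hunks below; the pool implementation
itself lives in the new intel_engine_pool.c):

	struct intel_engine_pool_node *pool;

	pool = intel_engine_pool_get(&engine->pool, size);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	/* keep pool->obj out of the free cache until rq has retired */
	err = intel_engine_pool_mark_active(pool, rq);
	if (err)
		goto out_pool;

	/* ... use pool->obj as the transient batch buffer ... */

out_pool:
	intel_engine_pool_put(pool);	/* returned to the cache once idle */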

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/Makefile                 |   2 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  58 ++++---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |   1 -
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   1 -
 drivers/gpu/drm/i915/gem/i915_gem_pm.c        |   4 +-
 drivers/gpu/drm/i915/gt/intel_engine.h        |   1 -
 drivers/gpu/drm/i915/gt/intel_engine_cs.c     |  11 +-
 drivers/gpu/drm/i915/gt/intel_engine_pm.c     |   2 +
 drivers/gpu/drm/i915/gt/intel_engine_pool.c   | 163 ++++++++++++++++++
 drivers/gpu/drm/i915/gt/intel_engine_pool.h   |  33 ++++
 .../gpu/drm/i915/gt/intel_engine_pool_types.h |  29 ++++
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |   4 +-
 drivers/gpu/drm/i915/gt/mock_engine.c         |   3 +
 drivers/gpu/drm/i915/i915_debugfs.c           |  68 --------
 drivers/gpu/drm/i915/i915_gem_batch_pool.c    | 132 --------------
 drivers/gpu/drm/i915/i915_gem_batch_pool.h    |  26 ---
 16 files changed, 274 insertions(+), 264 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_pool.c
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_pool.h
 create mode 100644 drivers/gpu/drm/i915/gt/intel_engine_pool_types.h
 delete mode 100644 drivers/gpu/drm/i915/i915_gem_batch_pool.c
 delete mode 100644 drivers/gpu/drm/i915/i915_gem_batch_pool.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 91355c2ea8a5..68936d5d8096 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -72,6 +72,7 @@ obj-y += gt/
 gt-y += \
 	gt/intel_breadcrumbs.o \
 	gt/intel_context.o \
+	gt/intel_engine_pool.o \
 	gt/intel_engine_cs.o \
 	gt/intel_engine_pm.o \
 	gt/intel_gt_pm.o \
@@ -116,7 +117,6 @@ i915-y += \
 	  $(gem-y) \
 	  i915_active.o \
 	  i915_cmd_parser.o \
-	  i915_gem_batch_pool.o \
 	  i915_gem_evict.o \
 	  i915_gem_fence_reg.o \
 	  i915_gem_gtt.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 8fb8613a1555..70e4436dc2a8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -16,6 +16,7 @@
 
 #include "gem/i915_gem_ioctls.h"
 #include "gt/intel_context.h"
+#include "gt/intel_engine_pool.h"
 #include "gt/intel_gt_pm.h"
 
 #include "i915_gem_ioctls.h"
@@ -1201,25 +1202,26 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 			     unsigned int len)
 {
 	struct reloc_cache *cache = &eb->reloc_cache;
-	struct drm_i915_gem_object *obj;
+	struct intel_engine_pool_node *pool;
 	struct i915_request *rq;
 	struct i915_vma *batch;
 	u32 *cmd;
 	int err;
 
-	obj = i915_gem_batch_pool_get(&eb->engine->batch_pool, PAGE_SIZE);
-	if (IS_ERR(obj))
-		return PTR_ERR(obj);
+	pool = intel_engine_pool_get(&eb->engine->pool, PAGE_SIZE);
+	if (IS_ERR(pool))
+		return PTR_ERR(pool);
 
-	cmd = i915_gem_object_pin_map(obj,
+	cmd = i915_gem_object_pin_map(pool->obj,
 				      cache->has_llc ?
 				      I915_MAP_FORCE_WB :
 				      I915_MAP_FORCE_WC);
-	i915_gem_object_unpin_pages(obj);
-	if (IS_ERR(cmd))
-		return PTR_ERR(cmd);
+	if (IS_ERR(cmd)) {
+		err = PTR_ERR(cmd);
+		goto out_pool;
+	}
 
-	batch = i915_vma_instance(obj, vma->vm, NULL);
+	batch = i915_vma_instance(pool->obj, vma->vm, NULL);
 	if (IS_ERR(batch)) {
 		err = PTR_ERR(batch);
 		goto err_unmap;
@@ -1235,6 +1237,10 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 		goto err_unpin;
 	}
 
+	err = intel_engine_pool_mark_active(pool, rq);
+	if (err)
+		goto err_request;
+
 	err = reloc_move_to_gpu(rq, vma);
 	if (err)
 		goto err_request;
@@ -1260,7 +1266,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 	cache->rq_size = 0;
 
 	/* Return with batch mapping (cmd) still pinned */
-	return 0;
+	goto out_pool;
 
 skip_request:
 	i915_request_skip(rq, err);
@@ -1269,7 +1275,9 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 err_unpin:
 	i915_vma_unpin(batch);
 err_unmap:
-	i915_gem_object_unpin_map(obj);
+	i915_gem_object_unpin_map(pool->obj);
+out_pool:
+	intel_engine_pool_put(pool);
 	return err;
 }
 
@@ -2013,18 +2021,17 @@ static int i915_reset_gen7_sol_offsets(struct i915_request *rq)
 
 static struct i915_vma *eb_parse(struct i915_execbuffer *eb, bool is_master)
 {
-	struct drm_i915_gem_object *shadow_batch_obj;
+	struct intel_engine_pool_node *pool;
 	struct i915_vma *vma;
 	int err;
 
-	shadow_batch_obj = i915_gem_batch_pool_get(&eb->engine->batch_pool,
-						   PAGE_ALIGN(eb->batch_len));
-	if (IS_ERR(shadow_batch_obj))
-		return ERR_CAST(shadow_batch_obj);
+	pool = intel_engine_pool_get(&eb->engine->pool, eb->batch_len);
+	if (IS_ERR(pool))
+		return ERR_CAST(pool);
 
 	err = intel_engine_cmd_parser(eb->engine,
 				      eb->batch->obj,
-				      shadow_batch_obj,
+				      pool->obj,
 				      eb->batch_start_offset,
 				      eb->batch_len,
 				      is_master);
@@ -2033,12 +2040,12 @@ static struct i915_vma *eb_parse(struct i915_execbuffer *eb, bool is_master)
 			vma = NULL;
 		else
 			vma = ERR_PTR(err);
-		goto out;
+		goto err;
 	}
 
-	vma = i915_gem_object_ggtt_pin(shadow_batch_obj, NULL, 0, 0, 0);
+	vma = i915_gem_object_ggtt_pin(pool->obj, NULL, 0, 0, 0);
 	if (IS_ERR(vma))
-		goto out;
+		goto err;
 
 	eb->vma[eb->buffer_count] = i915_vma_get(vma);
 	eb->flags[eb->buffer_count] =
@@ -2046,8 +2053,11 @@ static struct i915_vma *eb_parse(struct i915_execbuffer *eb, bool is_master)
 	vma->exec_flags = &eb->flags[eb->buffer_count];
 	eb->buffer_count++;
 
-out:
-	i915_gem_object_unpin_pages(shadow_batch_obj);
+	vma->private = pool;
+	return vma;
+
+err:
+	intel_engine_pool_put(pool);
 	return vma;
 }
 
@@ -2587,6 +2597,8 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	 * to explicitly hold another reference here.
 	 */
 	eb.request->batch = eb.batch;
+	if (eb.batch->private)
+		intel_engine_pool_mark_active(eb.batch->private, eb.request);
 
 	trace_i915_request_queue(eb.request, eb.batch_flags);
 	err = eb_submit(&eb);
@@ -2611,6 +2623,8 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 err_batch_unpin:
 	if (eb.batch_flags & I915_DISPATCH_SECURE)
 		i915_vma_unpin(eb.batch);
+	if (eb.batch->private)
+		intel_engine_pool_put(eb.batch->private);
 err_vma:
 	if (eb.exec)
 		eb_release_vmas(&eb);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 8b27f193d5f9..cd7c4336da25 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -54,7 +54,6 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 	INIT_LIST_HEAD(&obj->vma.list);
 
 	INIT_LIST_HEAD(&obj->lut_list);
-	INIT_LIST_HEAD(&obj->batch_pool_link);
 
 	init_rcu_head(&obj->rcu);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index e87fca4d8194..e3b3fa074404 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -119,7 +119,6 @@ struct drm_i915_gem_object {
 	unsigned int userfault_count;
 	struct list_head userfault_link;
 
-	struct list_head batch_pool_link;
 	I915_SELFTEST_DECLARE(struct list_head st_link);
 
 	/*
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 8570e2a64737..1f53feac2b22 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -30,10 +30,8 @@ static void i915_gem_park(struct drm_i915_private *i915)
 
 	lockdep_assert_held(&i915->drm.struct_mutex);
 
-	for_each_engine(engine, i915, id) {
+	for_each_engine(engine, i915, id)
 		call_idle_barriers(engine); /* cleanup after wedging */
-		i915_gem_batch_pool_fini(&engine->batch_pool);
-	}
 
 	i915_timelines_park(i915);
 	i915_vma_parked(i915);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
index 5c3b15835af6..cca089520818 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine.h
@@ -9,7 +9,6 @@
 #include <linux/random.h>
 #include <linux/seqlock.h>
 
-#include "i915_gem_batch_pool.h"
 #include "i915_pmu.h"
 #include "i915_reg.h"
 #include "i915_request.h"
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 65424eed8f99..f78f1096fe2b 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -30,6 +30,7 @@
 
 #include "intel_engine.h"
 #include "intel_engine_pm.h"
+#include "intel_engine_pool.h"
 #include "intel_context.h"
 #include "intel_lrc.h"
 #include "intel_reset.h"
@@ -495,11 +496,6 @@ int intel_engines_init(struct drm_i915_private *i915)
 	return err;
 }
 
-static void intel_engine_init_batch_pool(struct intel_engine_cs *engine)
-{
-	i915_gem_batch_pool_init(&engine->batch_pool, engine);
-}
-
 void intel_engine_init_execlists(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -625,10 +621,11 @@ static int intel_engine_setup_common(struct intel_engine_cs *engine)
 	intel_engine_init_breadcrumbs(engine);
 	intel_engine_init_execlists(engine);
 	intel_engine_init_hangcheck(engine);
-	intel_engine_init_batch_pool(engine);
 	intel_engine_init_cmd_parser(engine);
 	intel_engine_init__pm(engine);
 
+	intel_engine_pool_init(&engine->pool);
+
 	/* Use the whole device by default */
 	engine->sseu =
 		intel_sseu_from_device_info(&RUNTIME_INFO(engine->i915)->sseu);
@@ -878,9 +875,9 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
 
 	cleanup_status_page(engine);
 
+	intel_engine_pool_fini(&engine->pool);
 	intel_engine_fini_breadcrumbs(engine);
 	intel_engine_cleanup_cmd_parser(engine);
-	i915_gem_batch_pool_fini(&engine->batch_pool);
 
 	if (engine->default_state)
 		i915_gem_object_put(engine->default_state);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 2ce00d3dc42a..d5d3aac5c268 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -7,6 +7,7 @@
 #include "i915_drv.h"
 
 #include "intel_engine.h"
+#include "intel_engine_pool.h"
 #include "intel_engine_pm.h"
 #include "intel_gt_pm.h"
 
@@ -115,6 +116,7 @@ static int __engine_park(struct intel_wakeref *wf)
 	GEM_TRACE("%s\n", engine->name);
 
 	intel_engine_disarm_breadcrumbs(engine);
+	intel_engine_pool_park(&engine->pool);
 
 	/* Must be reset upon idling, or we may miss the busy wakeup. */
 	GEM_BUG_ON(engine->execlists.queue_priority_hint != INT_MIN);
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pool.c b/drivers/gpu/drm/i915/gt/intel_engine_pool.c
new file mode 100644
index 000000000000..0b725cedfecd
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pool.c
@@ -0,0 +1,163 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2014-2018 Intel Corporation
+ */
+
+#include "gem/i915_gem_object.h"
+
+#include "i915_drv.h"
+#include "intel_engine_pool.h"
+
+static struct intel_engine_cs *to_engine(struct intel_engine_pool *pool)
+{
+	return container_of(pool, struct intel_engine_cs, pool);
+}
+
+static struct list_head *
+bucket_for_size(struct intel_engine_pool *pool, size_t sz)
+{
+	int n;
+
+	/*
+	 * Compute a power-of-two bucket, but throw everything greater than
+	 * 16KiB into the same bucket: i.e. the buckets hold objects of
+	 * (1 page, 2 pages, 4 pages, 8+ pages).
+	 */
+	n = fls(sz >> PAGE_SHIFT) - 1;
+	if (n >= ARRAY_SIZE(pool->cache_list))
+		n = ARRAY_SIZE(pool->cache_list) - 1;
+
+	return &pool->cache_list[n];
+}
+
+static void node_free(struct intel_engine_pool_node *node)
+{
+	i915_gem_object_put(node->obj);
+	kfree(node);
+}
+
+static int pool_active(struct i915_active *ref)
+{
+	struct intel_engine_pool_node *node =
+		container_of(ref, typeof(*node), active);
+
+	if (reservation_object_trylock(node->obj->resv)) {
+		reservation_object_add_excl_fence(node->obj->resv, NULL);
+		reservation_object_unlock(node->obj->resv);
+	}
+
+	return i915_gem_object_pin_pages(node->obj);
+}
+
+__i915_active_call
+static void pool_retire(struct i915_active *ref)
+{
+	struct intel_engine_pool_node *node =
+		container_of(ref, typeof(*node), active);
+	struct intel_engine_pool *pool = node->pool;
+	struct list_head *list = bucket_for_size(pool, node->obj->base.size);
+	unsigned long flags;
+
+	GEM_BUG_ON(!intel_engine_is_active(to_engine(pool)));
+
+	i915_gem_object_unpin_pages(node->obj);
+
+	spin_lock_irqsave(&pool->lock, flags);
+	list_add(&node->link, list);
+	spin_unlock_irqrestore(&pool->lock, flags);
+}
+
+static struct intel_engine_pool_node *
+node_create(struct intel_engine_pool *pool, size_t sz)
+{
+	struct intel_engine_cs *engine = to_engine(pool);
+	struct intel_engine_pool_node *node;
+	struct drm_i915_gem_object *obj;
+
+	node = kmalloc(sizeof(*node),
+		       GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+	if (!node)
+		return ERR_PTR(-ENOMEM);
+
+	node->pool = pool;
+	i915_active_init(&node->active, pool_active, pool_retire);
+
+	obj = i915_gem_object_create_internal(engine->i915, sz);
+	if (IS_ERR(obj)) {
+		kfree(node);
+		return ERR_CAST(obj);
+	}
+
+	node->obj = obj;
+	return node;
+}
+
+struct intel_engine_pool_node *
+intel_engine_pool_get(struct intel_engine_pool *pool, size_t size)
+{
+	struct intel_engine_pool_node *node;
+	struct list_head *list;
+	unsigned long flags;
+	int ret;
+
+	GEM_BUG_ON(!intel_engine_is_active(to_engine(pool)));
+
+	size = PAGE_ALIGN(size);
+	list = bucket_for_size(pool, size);
+
+	spin_lock_irqsave(&pool->lock, flags);
+	list_for_each_entry(node, list, link) {
+		if (node->obj->base.size < size)
+			continue;
+		list_del(&node->link);
+		break;
+	}
+	spin_unlock_irqrestore(&pool->lock, flags);
+
+	if (&node->link == list) {
+		node = node_create(pool, size);
+		if (IS_ERR(node))
+			return node;
+	}
+
+	ret = i915_active_acquire(&node->active);
+	if (ret) {
+		node_free(node);
+		return ERR_PTR(ret);
+	}
+
+	return node;
+}
+
+void intel_engine_pool_init(struct intel_engine_pool *pool)
+{
+	int n;
+
+	spin_lock_init(&pool->lock);
+	for (n = 0; n < ARRAY_SIZE(pool->cache_list); n++)
+		INIT_LIST_HEAD(&pool->cache_list[n]);
+}
+
+void intel_engine_pool_park(struct intel_engine_pool *pool)
+{
+	int n;
+
+	for (n = 0; n < ARRAY_SIZE(pool->cache_list); n++) {
+		struct list_head *list = &pool->cache_list[n];
+		struct intel_engine_pool_node *node, *nn;
+
+		list_for_each_entry_safe(node, nn, list, link)
+			node_free(node);
+
+		INIT_LIST_HEAD(list);
+	}
+}
+
+void intel_engine_pool_fini(struct intel_engine_pool *pool)
+{
+	int n;
+
+	for (n = 0; n < ARRAY_SIZE(pool->cache_list); n++)
+		GEM_BUG_ON(!list_empty(&pool->cache_list[n]));
+}
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pool.h b/drivers/gpu/drm/i915/gt/intel_engine_pool.h
new file mode 100644
index 000000000000..6bc64bf19485
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pool.h
@@ -0,0 +1,33 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2014-2018 Intel Corporation
+ */
+
+#ifndef INTEL_ENGINE_POOL_H
+#define INTEL_ENGINE_POOL_H
+
+#include "intel_engine_pool_types.h"
+#include "i915_request.h"
+
+struct intel_engine_pool_node *
+intel_engine_pool_get(struct intel_engine_pool *pool, size_t size);
+
+static inline int
+intel_engine_pool_mark_active(struct intel_engine_pool_node *node,
+			      struct i915_request *rq)
+{
+	return i915_active_ref(&node->active, rq->fence.context, &rq->fence);
+}
+
+static inline void
+intel_engine_pool_put(struct intel_engine_pool_node *node)
+{
+	i915_active_release(&node->active);
+}
+
+void intel_engine_pool_init(struct intel_engine_pool *pool);
+void intel_engine_pool_park(struct intel_engine_pool *pool);
+void intel_engine_pool_fini(struct intel_engine_pool *pool);
+
+#endif /* INTEL_ENGINE_POOL_H */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pool_types.h b/drivers/gpu/drm/i915/gt/intel_engine_pool_types.h
new file mode 100644
index 000000000000..99e4e5d54fe6
--- /dev/null
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pool_types.h
@@ -0,0 +1,29 @@
+/*
+ * SPDX-License-Identifier: MIT
+ *
+ * Copyright © 2014-2018 Intel Corporation
+ */
+
+#ifndef INTEL_ENGINE_POOL_TYPES_H
+#define INTEL_ENGINE_POOL_TYPES_H
+
+#include <linux/list.h>
+#include <linux/spinlock.h>
+
+#include "i915_active.h"
+
+struct drm_i915_gem_object;
+
+struct intel_engine_pool {
+	spinlock_t lock;
+	struct list_head cache_list[4];
+};
+
+struct intel_engine_pool_node {
+	struct i915_active active;
+	struct drm_i915_gem_object *obj;
+	struct list_head link;
+	struct intel_engine_pool *pool;
+};
+
+#endif /* INTEL_ENGINE_POOL_TYPES_H */
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 1cbe10a0fec7..31da966d0669 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -16,11 +16,11 @@
 #include <linux/types.h>
 
 #include "i915_gem.h"
-#include "i915_gem_batch_pool.h"
 #include "i915_pmu.h"
 #include "i915_priolist_types.h"
 #include "i915_selftest.h"
 #include "i915_timeline_types.h"
+#include "intel_engine_pool_types.h"
 #include "intel_sseu.h"
 #include "intel_wakeref.h"
 #include "intel_workarounds_types.h"
@@ -323,7 +323,7 @@ struct intel_engine_cs {
 	 * when the command parser is enabled. Prevents the client from
 	 * modifying the batch contents after software parsing.
 	 */
-	struct i915_gem_batch_pool batch_pool;
+	struct intel_engine_pool pool;
 
 	struct intel_hw_status_page status_page;
 	struct i915_ctx_workarounds wa_ctx;
diff --git a/drivers/gpu/drm/i915/gt/mock_engine.c b/drivers/gpu/drm/i915/gt/mock_engine.c
index b9c2764beca3..8d00c5418825 100644
--- a/drivers/gpu/drm/i915/gt/mock_engine.c
+++ b/drivers/gpu/drm/i915/gt/mock_engine.c
@@ -27,6 +27,7 @@
 #include "i915_drv.h"
 #include "intel_context.h"
 #include "intel_engine_pm.h"
+#include "intel_engine_pool.h"
 
 #include "mock_engine.h"
 #include "selftests/mock_request.h"
@@ -290,6 +291,8 @@ int mock_engine_init(struct intel_engine_cs *engine)
 	intel_engine_init_execlists(engine);
 	intel_engine_init__pm(engine);
 
+	intel_engine_pool_init(&engine->pool);
+
 	engine->kernel_context =
 		i915_gem_context_get_engine(i915->kernel_context, engine->id);
 	if (IS_ERR(engine->kernel_context))
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 06b3147d81f7..d7c98e474d57 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -288,27 +288,6 @@ static int per_file_stats(int id, void *ptr, void *data)
 			   stats.closed); \
 } while (0)
 
-static void print_batch_pool_stats(struct seq_file *m,
-				   struct drm_i915_private *dev_priv)
-{
-	struct drm_i915_gem_object *obj;
-	struct intel_engine_cs *engine;
-	struct file_stats stats = {};
-	enum intel_engine_id id;
-	int j;
-
-	for_each_engine(engine, dev_priv, id) {
-		for (j = 0; j < ARRAY_SIZE(engine->batch_pool.cache_list); j++) {
-			list_for_each_entry(obj,
-					    &engine->batch_pool.cache_list[j],
-					    batch_pool_link)
-				per_file_stats(0, obj, &stats);
-		}
-	}
-
-	print_file_stats(m, "[k]batch pool", stats);
-}
-
 static void print_context_stats(struct seq_file *m,
 				struct drm_i915_private *i915)
 {
@@ -366,58 +345,12 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
 	if (ret)
 		return ret;
 
-	print_batch_pool_stats(m, i915);
 	print_context_stats(m, i915);
 	mutex_unlock(&i915->drm.struct_mutex);
 
 	return 0;
 }
 
-static int i915_gem_batch_pool_info(struct seq_file *m, void *data)
-{
-	struct drm_i915_private *dev_priv = node_to_i915(m->private);
-	struct drm_device *dev = &dev_priv->drm;
-	struct drm_i915_gem_object *obj;
-	struct intel_engine_cs *engine;
-	enum intel_engine_id id;
-	int total = 0;
-	int ret, j;
-
-	ret = mutex_lock_interruptible(&dev->struct_mutex);
-	if (ret)
-		return ret;
-
-	for_each_engine(engine, dev_priv, id) {
-		for (j = 0; j < ARRAY_SIZE(engine->batch_pool.cache_list); j++) {
-			int count;
-
-			count = 0;
-			list_for_each_entry(obj,
-					    &engine->batch_pool.cache_list[j],
-					    batch_pool_link)
-				count++;
-			seq_printf(m, "%s cache[%d]: %d objects\n",
-				   engine->name, j, count);
-
-			list_for_each_entry(obj,
-					    &engine->batch_pool.cache_list[j],
-					    batch_pool_link) {
-				seq_puts(m, "   ");
-				describe_obj(m, obj);
-				seq_putc(m, '\n');
-			}
-
-			total += count;
-		}
-	}
-
-	seq_printf(m, "total: %d\n", total);
-
-	mutex_unlock(&dev->struct_mutex);
-
-	return 0;
-}
-
 static void gen8_display_interrupt_info(struct seq_file *m)
 {
 	struct drm_i915_private *dev_priv = node_to_i915(m->private);
@@ -4357,7 +4290,6 @@ static const struct drm_info_list i915_debugfs_list[] = {
 	{"i915_gem_objects", i915_gem_object_info, 0},
 	{"i915_gem_fence_regs", i915_gem_fence_regs_info, 0},
 	{"i915_gem_interrupt", i915_interrupt_info, 0},
-	{"i915_gem_batch_pool", i915_gem_batch_pool_info, 0},
 	{"i915_guc_info", i915_guc_info, 0},
 	{"i915_guc_load_status", i915_guc_load_status_info, 0},
 	{"i915_guc_log_dump", i915_guc_log_dump, 0},
diff --git a/drivers/gpu/drm/i915/i915_gem_batch_pool.c b/drivers/gpu/drm/i915/i915_gem_batch_pool.c
deleted file mode 100644
index 1b7595e2ac21..000000000000
--- a/drivers/gpu/drm/i915/i915_gem_batch_pool.c
+++ /dev/null
@@ -1,132 +0,0 @@
-/*
- * SPDX-License-Identifier: MIT
- *
- * Copyright © 2014-2018 Intel Corporation
- */
-
-#include "i915_gem_batch_pool.h"
-#include "i915_drv.h"
-
-/**
- * DOC: batch pool
- *
- * In order to submit batch buffers as 'secure', the software command parser
- * must ensure that a batch buffer cannot be modified after parsing. It does
- * this by copying the user provided batch buffer contents to a kernel owned
- * buffer from which the hardware will actually execute, and by carefully
- * managing the address space bindings for such buffers.
- *
- * The batch pool framework provides a mechanism for the driver to manage a
- * set of scratch buffers to use for this purpose. The framework can be
- * extended to support other uses cases should they arise.
- */
-
-/**
- * i915_gem_batch_pool_init() - initialize a batch buffer pool
- * @pool: the batch buffer pool
- * @engine: the associated request submission engine
- */
-void i915_gem_batch_pool_init(struct i915_gem_batch_pool *pool,
-			      struct intel_engine_cs *engine)
-{
-	int n;
-
-	pool->engine = engine;
-
-	for (n = 0; n < ARRAY_SIZE(pool->cache_list); n++)
-		INIT_LIST_HEAD(&pool->cache_list[n]);
-}
-
-/**
- * i915_gem_batch_pool_fini() - clean up a batch buffer pool
- * @pool: the pool to clean up
- *
- * Note: Callers must hold the struct_mutex.
- */
-void i915_gem_batch_pool_fini(struct i915_gem_batch_pool *pool)
-{
-	int n;
-
-	lockdep_assert_held(&pool->engine->i915->drm.struct_mutex);
-
-	for (n = 0; n < ARRAY_SIZE(pool->cache_list); n++) {
-		struct drm_i915_gem_object *obj, *next;
-
-		list_for_each_entry_safe(obj, next,
-					 &pool->cache_list[n],
-					 batch_pool_link)
-			i915_gem_object_put(obj);
-
-		INIT_LIST_HEAD(&pool->cache_list[n]);
-	}
-}
-
-/**
- * i915_gem_batch_pool_get() - allocate a buffer from the pool
- * @pool: the batch buffer pool
- * @size: the minimum desired size of the returned buffer
- *
- * Returns an inactive buffer from @pool with at least @size bytes,
- * with the pages pinned. The caller must i915_gem_object_unpin_pages()
- * on the returned object.
- *
- * Note: Callers must hold the struct_mutex
- *
- * Return: the buffer object or an error pointer
- */
-struct drm_i915_gem_object *
-i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
-			size_t size)
-{
-	struct drm_i915_gem_object *obj;
-	struct list_head *list;
-	int n, ret;
-
-	lockdep_assert_held(&pool->engine->i915->drm.struct_mutex);
-
-	/* Compute a power-of-two bucket, but throw everything greater than
-	 * 16KiB into the same bucket: i.e. the the buckets hold objects of
-	 * (1 page, 2 pages, 4 pages, 8+ pages).
-	 */
-	n = fls(size >> PAGE_SHIFT) - 1;
-	if (n >= ARRAY_SIZE(pool->cache_list))
-		n = ARRAY_SIZE(pool->cache_list) - 1;
-	list = &pool->cache_list[n];
-
-	list_for_each_entry(obj, list, batch_pool_link) {
-		struct reservation_object *resv = obj->resv;
-
-		/* The batches are strictly LRU ordered */
-		if (!reservation_object_test_signaled_rcu(resv, true))
-			break;
-
-		/*
-		 * The object is now idle, clear the array of shared
-		 * fences before we add a new request. Although, we
-		 * remain on the same engine, we may be on a different
-		 * timeline and so may continually grow the array,
-		 * trapping a reference to all the old fences, rather
-		 * than replace the existing fence.
-		 */
-		if (rcu_access_pointer(resv->fence)) {
-			reservation_object_lock(resv, NULL);
-			reservation_object_add_excl_fence(resv, NULL);
-			reservation_object_unlock(resv);
-		}
-
-		if (obj->base.size >= size)
-			goto found;
-	}
-
-	obj = i915_gem_object_create_internal(pool->engine->i915, size);
-	if (IS_ERR(obj))
-		return obj;
-
-found:
-	ret = i915_gem_object_pin_pages(obj);
-	if (ret)
-		return ERR_PTR(ret);
-
-	list_move_tail(&obj->batch_pool_link, list);
-	return obj;
-}
diff --git a/drivers/gpu/drm/i915/i915_gem_batch_pool.h b/drivers/gpu/drm/i915/i915_gem_batch_pool.h
deleted file mode 100644
index feeeeeaa54d8..000000000000
--- a/drivers/gpu/drm/i915/i915_gem_batch_pool.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * SPDX-License-Identifier: MIT
- *
- * Copyright © 2014-2018 Intel Corporation
- */
-
-#ifndef I915_GEM_BATCH_POOL_H
-#define I915_GEM_BATCH_POOL_H
-
-#include <linux/types.h>
-
-struct drm_i915_gem_object;
-struct intel_engine_cs;
-
-struct i915_gem_batch_pool {
-	struct intel_engine_cs *engine;
-	struct list_head cache_list[4];
-};
-
-void i915_gem_batch_pool_init(struct i915_gem_batch_pool *pool,
-			      struct intel_engine_cs *engine);
-void i915_gem_batch_pool_fini(struct i915_gem_batch_pool *pool);
-struct drm_i915_gem_object *
-i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool, size_t size);
-
-#endif /* I915_GEM_BATCH_POOL_H */
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 26/26] drm/i915: Move idle barrier cleanup into engine-pm
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (23 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 25/26] drm/i915: Replace struct_mutex for batch pool serialisation Chris Wilson
@ 2019-06-18  7:41 ` Chris Wilson
  2019-06-18  8:57 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/26] drm/i915: Keep engine alive as we retire the context Patchwork
                   ` (4 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  7:41 UTC (permalink / raw)
  To: intel-gfx

Now that we no longer need to guarantee that the active callback runs
under the struct_mutex, we can lift it out of i915_gem_park() and into
the engine parking itself.
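
With this, the pending barriers are flushed as part of engine parking; the
resulting ordering is roughly the following sketch, with the other steps of
__engine_park() elided (see the hunk below):

	static int __engine_park(struct intel_wakeref *wf)
	{
		...
		call_idle_barriers(engine); /* cleanup after wedging */
		intel_engine_disarm_breadcrumbs(engine);
		intel_engine_pool_park(&engine->pool);
		...
	}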

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_pm.c    | 19 -------------------
 drivers/gpu/drm/i915/gt/intel_engine_pm.c | 15 +++++++++++++++
 drivers/gpu/drm/i915/i915_active.c        |  1 +
 3 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
index 1f53feac2b22..c233a0955630 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c
@@ -10,29 +10,10 @@
 #include "i915_drv.h"
 #include "i915_globals.h"
 
-static void call_idle_barriers(struct intel_engine_cs *engine)
-{
-	struct llist_node *node, *next;
-
-	llist_for_each_safe(node, next, llist_del_all(&engine->barrier_tasks)) {
-		struct dma_fence_cb *cb =
-			container_of((struct list_head *)node,
-				     typeof(*cb), node);
-
-		cb->func(NULL, cb);
-	}
-}
-
 static void i915_gem_park(struct drm_i915_private *i915)
 {
-	struct intel_engine_cs *engine;
-	enum intel_engine_id id;
-
 	lockdep_assert_held(&i915->drm.struct_mutex);
 
-	for_each_engine(engine, i915, id)
-		call_idle_barriers(engine); /* cleanup after wedging */
-
 	i915_timelines_park(i915);
 	i915_vma_parked(i915);
 
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index d5d3aac5c268..5488c3e557de 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -96,6 +96,19 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
 	return false;
 }
 
+static void call_idle_barriers(struct intel_engine_cs *engine)
+{
+	struct llist_node *node, *next;
+
+	llist_for_each_safe(node, next, llist_del_all(&engine->barrier_tasks)) {
+		struct dma_fence_cb *cb =
+			container_of((struct list_head *)node,
+				     typeof(*cb), node);
+
+		cb->func(NULL, cb);
+	}
+}
+
 static int __engine_park(struct intel_wakeref *wf)
 {
 	struct intel_engine_cs *engine =
@@ -115,6 +128,8 @@ static int __engine_park(struct intel_wakeref *wf)
 
 	GEM_TRACE("%s\n", engine->name);
 
+	call_idle_barriers(engine); /* cleanup after wedging */
+
 	intel_engine_disarm_breadcrumbs(engine);
 	intel_engine_pool_park(&engine->pool);
 
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index cf31ef63713d..505ee42017d0 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -429,6 +429,7 @@ void i915_active_acquire_barrier(struct i915_active *ref)
 		rb_link_node(&node->node, parent, p);
 		rb_insert_color(&node->node, &ref->tree);
 
+		GEM_BUG_ON(!intel_engine_is_active(engine));
 		llist_add((struct llist_node *)&node->base.cb.node,
 			  &engine->barrier_tasks);
 		intel_engine_pm_put(engine);
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH 23/26] drm/i915: Rename intel_wakeref_[is]_active
  2019-06-18  7:41 ` [PATCH 23/26] drm/i915: Rename intel_wakeref_[is]_active Chris Wilson
@ 2019-06-18  8:14   ` Chris Wilson
  0 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18  8:14 UTC (permalink / raw)
  To: intel-gfx

Quoting Chris Wilson (2019-06-18 08:41:50)
> @@ -422,6 +423,11 @@ static inline void intel_engine_reset(struct intel_engine_cs *engine,
>  bool intel_engine_is_idle(struct intel_engine_cs *engine);
>  bool intel_engines_are_idle(struct drm_i915_private *dev_priv);
>  
> +static inline bool intel_engine_is_active(const struct intel_engine_cs *engine)

On review, I think I prefer intel_engine_is_awake() here so that it is
slightly more distinct from intel_engine_is_idle().
-Chris

^ permalink raw reply	[flat|nested] 42+ messages in thread

* ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/26] drm/i915: Keep engine alive as we retire the context
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (24 preceding siblings ...)
  2019-06-18  7:41 ` [PATCH 26/26] drm/i915: Move idle barrier cleanup into engine-pm Chris Wilson
@ 2019-06-18  8:57 ` Patchwork
  2019-06-18  9:09 ` ✗ Fi.CI.SPARSE: " Patchwork
                   ` (3 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Patchwork @ 2019-06-18  8:57 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/26] drm/i915: Keep engine alive as we retire the context
URL   : https://patchwork.freedesktop.org/series/62278/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
cda76c80993d drm/i915: Keep engine alive as we retire the context
42021f601d33 drm/i915: Skip shrinking already freed pages
ab6a6e6954bf drm/i915: Stop passing I915_WAIT_LOCKED to i915_request_wait()
-:11: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#11: 
References: eb8d0f5af4ec ("drm/i915: Remove GPU reset dependence on struct_mutex")

-:11: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit eb8d0f5af4ec ("drm/i915: Remove GPU reset dependence on struct_mutex")'
#11: 
References: eb8d0f5af4ec ("drm/i915: Remove GPU reset dependence on struct_mutex")

total: 1 errors, 1 warnings, 0 checks, 328 lines checked
145d0e3ed4ef drm/i915: Flush the execution-callbacks on retiring
78b85c062e30 drm/i915/execlists: Preempt-to-busy
-:1494: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'p_ptr' - possible side-effects?
#1494: FILE: drivers/gpu/drm/i915/i915_utils.h:134:
+#define ptr_count_dec(p_ptr) do {					\
+	typeof(p_ptr) __p = (p_ptr);					\
+	unsigned long __v = (unsigned long)(*__p);			\
+	*__p = (typeof(*p_ptr))(--__v);					\
+} while (0)

-:1500: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'p_ptr' - possible side-effects?
#1500: FILE: drivers/gpu/drm/i915/i915_utils.h:140:
+#define ptr_count_inc(p_ptr) do {					\
+	typeof(p_ptr) __p = (p_ptr);					\
+	unsigned long __v = (unsigned long)(*__p);			\
+	*__p = (typeof(*p_ptr))(++__v);					\
+} while (0)

-:1783: WARNING:LINE_SPACING: Missing a blank line after declarations
#1783: FILE: drivers/gpu/drm/i915/intel_guc_submission.c:820:
+		int rem = ARRAY_SIZE(execlists->inflight) - idx;
+		memmove(execlists->inflight, port, rem * sizeof(*port));

total: 0 errors, 1 warnings, 2 checks, 1682 lines checked
71d1fe05be01 drm/i915/execlists: Minimalistic timeslicing
-:345: WARNING:LONG_LINE: line over 100 characters
#345: FILE: drivers/gpu/drm/i915/gt/selftest_lrc.c:211:
+			      2 * RUNTIME_INFO(outer->i915)->num_engines * (count + 2) * (count + 3)) < 0) {

total: 0 errors, 1 warnings, 0 checks, 426 lines checked
40741878cefb drm/i915/execlists: Force preemption
e42940af0600 drm/i915: Make the semaphore saturation mask global
913184b36dae dma-fence: Propagate errors to dma-fence-array container
fbc685fee7cf dma-fence: Report the composite sync_file status
d2298deb220a dma-fence: Refactor signaling for manual invocation
-:33: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#33: 
new file mode 100644

-:38: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#38: FILE: drivers/dma-buf/dma-fence-trace.c:1:
+/*

-:293: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#293: FILE: include/linux/dma-fence-types.h:1:
+/*

-:368: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#368: FILE: include/linux/dma-fence-types.h:76:
+	spinlock_t *lock;

total: 0 errors, 3 warnings, 1 checks, 728 lines checked
fe5ceef08681 dma-fence: Always execute signal callbacks
e6baddc16878 drm/i915: Track i915_active using debugobjects
626786997753 drm/i915: Signal fence completion from i915_request_wait
d80318d1c748 drm/i915: Remove waiting & retiring from shrinker paths
8b858b38bcfc drm/i915: Throw away the active object retirement complexity
c38757f3f210 drm/i915: Provide an i915_active.acquire callback
-:708: CHECK:UNCOMMENTED_DEFINITION: struct mutex definition without comment
#708: FILE: drivers/gpu/drm/i915/i915_active_types.h:36:
+	struct mutex mutex;

total: 0 errors, 0 warnings, 1 checks, 842 lines checked
6b856a93ddbf drm/i915: Push the i915_active.retire into a worker
babae9b85e46 drm/i915/overlay: Switch to using i915_active tracking
295efee4f1d6 drm/i915: Forgo last_fence active request tracking
044650684354 drm/i915: Extract intel_frontbuffer active tracking
1ba725af6d0c drm/i915: Coordinate i915_active with its own mutex
962860b4976d drm/i915: Rename intel_wakeref_[is]_active
b1132180c60a drm/i915: Teach execbuffer to take the engine wakeref not GT
38774404475a drm/i915: Replace struct_mutex for batch pool serialisation
-:304: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#304: 
new file mode 100644

-:309: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#309: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool.c:1:
+/*

-:310: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#310: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool.c:2:
+ * SPDX-License-Identifier: MIT

-:478: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#478: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool.h:1:
+/*

-:479: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#479: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool.h:2:
+ * SPDX-License-Identifier: MIT

-:517: WARNING:SPDX_LICENSE_TAG: Missing or malformed SPDX-License-Identifier tag in line 1
#517: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool_types.h:1:
+/*

-:518: WARNING:SPDX_LICENSE_TAG: Misplaced SPDX-License-Identifier tag - use line 1 instead
#518: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool_types.h:2:
+ * SPDX-License-Identifier: MIT

-:534: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#534: FILE: drivers/gpu/drm/i915/gt/intel_engine_pool_types.h:18:
+	spinlock_t lock;

total: 0 errors, 7 warnings, 1 checks, 589 lines checked
cc4a2f29f9fe drm/i915: Move idle barrier cleanup into engine-pm

^ permalink raw reply	[flat|nested] 42+ messages in thread

* ✗ Fi.CI.SPARSE: warning for series starting with [01/26] drm/i915: Keep engine alive as we retire the context
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (25 preceding siblings ...)
  2019-06-18  8:57 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/26] drm/i915: Keep engine alive as we retire the context Patchwork
@ 2019-06-18  9:09 ` Patchwork
  2019-06-18  9:18 ` ✓ Fi.CI.BAT: success " Patchwork
                   ` (2 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Patchwork @ 2019-06-18  9:09 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/26] drm/i915: Keep engine alive as we retire the context
URL   : https://patchwork.freedesktop.org/series/62278/
State : warning

== Summary ==

$ dim sparse origin/drm-tip
Sparse version: v0.5.2
Commit: drm/i915: Keep engine alive as we retire the context
Okay!

Commit: drm/i915: Skip shrinking already freed pages
Okay!

Commit: drm/i915: Stop passing I915_WAIT_LOCKED to i915_request_wait()
Okay!

Commit: drm/i915: Flush the execution-callbacks on retiring
Okay!

Commit: drm/i915/execlists: Preempt-to-busy
-drivers/gpu/drm/i915/selftests/../i915_utils.h:220:16: warning: expression using sizeof(void)
+drivers/gpu/drm/i915/selftests/../i915_utils.h:232:16: warning: expression using sizeof(void)

Commit: drm/i915/execlists: Minimalistic timeslicing
+drivers/gpu/drm/i915/gt/intel_lrc.c:876:16: warning: expression using sizeof(void)
+drivers/gpu/drm/i915/gt/intel_lrc.c:876:16: warning: expression using sizeof(void)

Commit: drm/i915/execlists: Force preemption
+
+drivers/gpu/drm/i915/i915_utils.h:232:16: warning: expression using sizeof(void)
+Error in reading or end of file.

Commit: drm/i915: Make the semaphore saturation mask global
Okay!

Commit: dma-fence: Propagate errors to dma-fence-array container
Okay!

Commit: dma-fence: Report the composite sync_file status
Okay!

Commit: dma-fence: Refactor signaling for manual invocation
Okay!

Commit: dma-fence: Always execute signal callbacks
Okay!

Commit: drm/i915: Track i915_active using debugobjects
Okay!

Commit: drm/i915: Signal fence completion from i915_request_wait
Okay!

Commit: drm/i915: Remove waiting & retiring from shrinker paths
Okay!

Commit: drm/i915: Throw away the active object retirement complexity
Okay!

Commit: drm/i915: Provide an i915_active.acquire callback
Okay!

Commit: drm/i915: Push the i915_active.retire into a worker
Okay!

Commit: drm/i915/overlay: Switch to using i915_active tracking
Okay!

Commit: drm/i915: Forgo last_fence active request tracking
Okay!

Commit: drm/i915: Extract intel_frontbuffer active tracking
+drivers/gpu/drm/i915/display/intel_frontbuffer.c:218:13: warning: context imbalance in 'frontbuffer_release' - unexpected unlock

Commit: drm/i915: Coordinate i915_active with its own mutex
-./include/uapi/linux/perf_event.h:147:56: warning: cast truncates bits from constant value (8000000000000000 becomes 0)
-./include/uapi/linux/perf_event.h:147:56: warning: cast truncates bits from constant value (8000000000000000 becomes 0)
-./include/uapi/linux/perf_event.h:147:56: warning: cast truncates bits from constant value (8000000000000000 becomes 0)
-./include/uapi/linux/perf_event.h:147:56: warning: cast truncates bits from constant value (8000000000000000 becomes 0)

Commit: drm/i915: Rename intel_wakeref_[is]_active
Okay!

Commit: drm/i915: Teach execbuffer to take the engine wakeref not GT
Okay!

Commit: drm/i915: Replace struct_mutex for batch pool serialisation
+./include/uapi/linux/perf_event.h:147:56: warning: cast truncates bits from constant value (8000000000000000 becomes 0)

Commit: drm/i915: Move idle barrier cleanup into engine-pm
Okay!

^ permalink raw reply	[flat|nested] 42+ messages in thread

* ✓ Fi.CI.BAT: success for series starting with [01/26] drm/i915: Keep engine alive as we retire the context
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (26 preceding siblings ...)
  2019-06-18  9:09 ` ✗ Fi.CI.SPARSE: " Patchwork
@ 2019-06-18  9:18 ` Patchwork
  2019-06-18 13:45 ` [PATCH 01/26] " Mika Kuoppala
  2019-06-18 19:15 ` ✗ Fi.CI.IGT: failure for series starting with [01/26] " Patchwork
  29 siblings, 0 replies; 42+ messages in thread
From: Patchwork @ 2019-06-18  9:18 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/26] drm/i915: Keep engine alive as we retire the context
URL   : https://patchwork.freedesktop.org/series/62278/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_6290 -> Patchwork_13320
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/

Known issues
------------

  Here are the changes found in Patchwork_13320 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live_contexts:
    - fi-bdw-gvtdvm:      [PASS][1] -> [DMESG-FAIL][2] ([fdo#110235])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/fi-bdw-gvtdvm/igt@i915_selftest@live_contexts.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/fi-bdw-gvtdvm/igt@i915_selftest@live_contexts.html

  * igt@i915_selftest@live_hangcheck:
    - fi-icl-dsi:         [PASS][3] -> [INCOMPLETE][4] ([fdo#107713] / [fdo#108569])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/fi-icl-dsi/igt@i915_selftest@live_hangcheck.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/fi-icl-dsi/igt@i915_selftest@live_hangcheck.html

  * igt@kms_prop_blob@basic:
    - fi-icl-u3:          [PASS][5] -> [DMESG-WARN][6] ([fdo#107724])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/fi-icl-u3/igt@kms_prop_blob@basic.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/fi-icl-u3/igt@kms_prop_blob@basic.html

  * igt@prime_vgem@basic-fence-flip:
    - fi-ilk-650:         [PASS][7] -> [DMESG-WARN][8] ([fdo#106387]) +1 similar issue
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/fi-ilk-650/igt@prime_vgem@basic-fence-flip.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/fi-ilk-650/igt@prime_vgem@basic-fence-flip.html

  
#### Possible fixes ####

  * igt@gem_ctx_switch@basic-default:
    - fi-icl-guc:         [INCOMPLETE][9] ([fdo#107713] / [fdo#108569]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/fi-icl-guc/igt@gem_ctx_switch@basic-default.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/fi-icl-guc/igt@gem_ctx_switch@basic-default.html

  * igt@gem_render_tiled_blits@basic:
    - fi-icl-u3:          [DMESG-WARN][11] ([fdo#107724]) -> [PASS][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/fi-icl-u3/igt@gem_render_tiled_blits@basic.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/fi-icl-u3/igt@gem_render_tiled_blits@basic.html

  * igt@i915_selftest@live_hangcheck:
    - fi-icl-y:           [INCOMPLETE][13] ([fdo#107713] / [fdo#108569]) -> [PASS][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/fi-icl-y/igt@i915_selftest@live_hangcheck.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/fi-icl-y/igt@i915_selftest@live_hangcheck.html

  * igt@kms_frontbuffer_tracking@basic:
    - fi-hsw-peppy:       [DMESG-WARN][15] ([fdo#102614]) -> [PASS][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/fi-hsw-peppy/igt@kms_frontbuffer_tracking@basic.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/fi-hsw-peppy/igt@kms_frontbuffer_tracking@basic.html

  
  [fdo#102614]: https://bugs.freedesktop.org/show_bug.cgi?id=102614
  [fdo#106387]: https://bugs.freedesktop.org/show_bug.cgi?id=106387
  [fdo#107713]: https://bugs.freedesktop.org/show_bug.cgi?id=107713
  [fdo#107724]: https://bugs.freedesktop.org/show_bug.cgi?id=107724
  [fdo#108569]: https://bugs.freedesktop.org/show_bug.cgi?id=108569
  [fdo#110235]: https://bugs.freedesktop.org/show_bug.cgi?id=110235


Participating hosts (51 -> 38)
------------------------------

  Additional (1): fi-icl-u2 
  Missing    (14): fi-kbl-soraka fi-cml-u2 fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-ctg-p8600 fi-whl-u fi-kbl-x1275 fi-skl-iommu fi-kbl-8809g fi-byt-clapper fi-bdw-samus fi-cml-u 


Build changes
-------------

  * Linux: CI_DRM_6290 -> Patchwork_13320

  CI_DRM_6290: a0fa10b5d68fd65375029dd8b61d6c8c6aa1413f @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5059: 1f67ee0d09d6513f487f2be74aae9700e755258a @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_13320: cc4a2f29f9fe6c49792503fd6b6fdd4b7182783f @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

cc4a2f29f9fe drm/i915: Move idle barrier cleanup into engine-pm
38774404475a drm/i915: Replace struct_mutex for batch pool serialisation
b1132180c60a drm/i915: Teach execbuffer to take the engine wakeref not GT
962860b4976d drm/i915: Rename intel_wakeref_[is]_active
1ba725af6d0c drm/i915: Coordinate i915_active with its own mutex
044650684354 drm/i915: Extract intel_frontbuffer active tracking
295efee4f1d6 drm/i915: Forgo last_fence active request tracking
babae9b85e46 drm/i915/overlay: Switch to using i915_active tracking
6b856a93ddbf drm/i915: Push the i915_active.retire into a worker
c38757f3f210 drm/i915: Provide an i915_active.acquire callback
8b858b38bcfc drm/i915: Throw away the active object retirement complexity
d80318d1c748 drm/i915: Remove waiting & retiring from shrinker paths
626786997753 drm/i915: Signal fence completion from i915_request_wait
e6baddc16878 drm/i915: Track i915_active using debugobjects
fe5ceef08681 dma-fence: Always execute signal callbacks
d2298deb220a dma-fence: Refactor signaling for manual invocation
fbc685fee7cf dma-fence: Report the composite sync_file status
913184b36dae dma-fence: Propagate errors to dma-fence-array container
e42940af0600 drm/i915: Make the semaphore saturation mask global
40741878cefb drm/i915/execlists: Force preemption
71d1fe05be01 drm/i915/execlists: Minimalistic timeslicing
78b85c062e30 drm/i915/execlists: Preempt-to-busy
145d0e3ed4ef drm/i915: Flush the execution-callbacks on retiring
ab6a6e6954bf drm/i915: Stop passing I915_WAIT_LOCKED to i915_request_wait()
42021f601d33 drm/i915: Skip shrinking already freed pages
cda76c80993d drm/i915: Keep engine alive as we retire the context

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 02/26] drm/i915: Skip shrinking already freed pages
  2019-06-18  7:41 ` [PATCH 02/26] drm/i915: Skip shrinking already freed pages Chris Wilson
@ 2019-06-18 11:59   ` Chris Wilson
  2019-06-18 16:06   ` Mika Kuoppala
  1 sibling, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18 11:59 UTC (permalink / raw)
  To: intel-gfx

Quoting Chris Wilson (2019-06-18 08:41:29)
> Previously, we wanted to shrink the pages of freed objects before they
> were RCU collected. However, by removing the struct_mutex serialisation
> around the active reference, we need to acquire an extra reference
> around the wait. Unfortunately this means that we have to skip objects
> that are awaiting RCU collection.
> 
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110937
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 01/26] drm/i915: Keep engine alive as we retire the context
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (27 preceding siblings ...)
  2019-06-18  9:18 ` ✓ Fi.CI.BAT: success " Patchwork
@ 2019-06-18 13:45 ` Mika Kuoppala
  2019-06-18 13:59   ` Chris Wilson
  2019-06-18 19:15 ` ✗ Fi.CI.IGT: failure for series starting with [01/26] " Patchwork
  29 siblings, 1 reply; 42+ messages in thread
From: Mika Kuoppala @ 2019-06-18 13:45 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Though we pin the context first before taking the pm wakeref, during
> retire we need to unpin before dropping the pm wakeref (breaking the
> "natural" onion). During the unpin, we may need to attach a cleanup
> operation on to the engine wakeref, ergo we want to keep the engine
> awake until after the unpin.
>
> v2: Push the engine wakeref into the barrier so we keep the onion unwind
> ordering in the request itself
>
> Fixes: ce476c80b8bf ("drm/i915: Keep contexts pinned until after the next kernel context switch")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/i915_active.c | 22 ++++++++++++++++++++--
>  1 file changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
> index 2d019ac6db20..41ed2798687d 100644
> --- a/drivers/gpu/drm/i915/i915_active.c
> +++ b/drivers/gpu/drm/i915/i915_active.c
> @@ -4,6 +4,8 @@
>   * Copyright © 2019 Intel Corporation
>   */
>  
> +#include "gt/intel_engine_pm.h"
> +
>  #include "i915_drv.h"
>  #include "i915_active.h"
>  #include "i915_globals.h"
> @@ -268,8 +270,9 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
>  					    struct intel_engine_cs *engine)
>  {
>  	struct drm_i915_private *i915 = engine->i915;
> +	struct llist_node *pos, *next;
>  	unsigned long tmp;
> -	int err = 0;
> +	int err;
>  
>  	GEM_BUG_ON(!engine->mask);
>  	for_each_engine_masked(engine, i915, engine->mask, tmp) {
> @@ -279,7 +282,7 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
>  		node = kmem_cache_alloc(global.slab_cache, GFP_KERNEL);
>  		if (unlikely(!node)) {
>  			err = -ENOMEM;
> -			break;
> +			goto unwind;
>  		}
>  
>  		i915_active_request_init(&node->base,
> @@ -288,10 +291,24 @@ int i915_active_acquire_preallocate_barrier(struct i915_active *ref,
>  		node->ref = ref;
>  		ref->count++;
>  
> +		intel_engine_pm_get(engine);
>  		llist_add((struct llist_node *)&node->base.link,
>  			  &ref->barriers);
>  	}
>  
> +	return 0;
> +
> +unwind:
> +	llist_for_each_safe(pos, next, llist_del_all(&ref->barriers)) {
> +		struct active_node *node;
> +
> +		node = container_of((struct list_head *)pos,
> +				    typeof(*node), base.link);
> +		engine = (void *)rcu_access_pointer(node->base.request);
> +
> +		intel_engine_pm_put(engine);
> +		kfree(node);
> +	}
>  	return err;
>  }
>  
> @@ -328,6 +345,7 @@ void i915_active_acquire_barrier(struct i915_active *ref)
>  
>  		llist_add((struct llist_node *)&node->base.link,
>  			  &engine->barrier_tasks);
> +		intel_engine_pm_put(engine);

But you said that you keep the wakeref until unpin?
-Mika


>  	}
>  	i915_active_release(ref);
>  }
> -- 
> 2.20.1

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 01/26] drm/i915: Keep engine alive as we retire the context
  2019-06-18 13:45 ` [PATCH 01/26] " Mika Kuoppala
@ 2019-06-18 13:59   ` Chris Wilson
  2019-06-18 14:03     ` Chris Wilson
  2019-06-18 14:08     ` Mika Kuoppala
  0 siblings, 2 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18 13:59 UTC (permalink / raw)
  To: Mika Kuoppala, intel-gfx

Quoting Mika Kuoppala (2019-06-18 14:45:45)
> Chris Wilson <chris@chris-wilson.co.uk> writes:
> > @@ -328,6 +345,7 @@ void i915_active_acquire_barrier(struct i915_active *ref)
> >  
> >               llist_add((struct llist_node *)&node->base.link,
> >                         &engine->barrier_tasks);
> > +             intel_engine_pm_put(engine);
> 
> But you said that you keep the wakeref until unpin?

intel_context_unpin ->
  intel_context_active_release ->
    i915_active_acquire_barrier
-Chris

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 01/26] drm/i915: Keep engine alive as we retire the context
  2019-06-18 13:59   ` Chris Wilson
@ 2019-06-18 14:03     ` Chris Wilson
  2019-06-18 14:08     ` Mika Kuoppala
  1 sibling, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18 14:03 UTC (permalink / raw)
  To: Mika Kuoppala, intel-gfx

Quoting Chris Wilson (2019-06-18 14:59:02)
> Quoting Mika Kuoppala (2019-06-18 14:45:45)
> > Chris Wilson <chris@chris-wilson.co.uk> writes:
> > > @@ -328,6 +345,7 @@ void i915_active_acquire_barrier(struct i915_active *ref)
> > >  
> > >               llist_add((struct llist_node *)&node->base.link,
> > >                         &engine->barrier_tasks);
> > > +             intel_engine_pm_put(engine);
> > 
> > But you said that you keep the wakeref until unpin?
> 
> intel_context_unpin ->
>   intel_context_active_release ->
>     i915_active_acquire_barrier

That should probably be renamed to i915_active_release_barrier.
Possibly, something even more imaginative.
-Chris

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 01/26] drm/i915: Keep engine alive as we retire the context
  2019-06-18 13:59   ` Chris Wilson
  2019-06-18 14:03     ` Chris Wilson
@ 2019-06-18 14:08     ` Mika Kuoppala
  1 sibling, 0 replies; 42+ messages in thread
From: Mika Kuoppala @ 2019-06-18 14:08 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Quoting Mika Kuoppala (2019-06-18 14:45:45)
>> Chris Wilson <chris@chris-wilson.co.uk> writes:
>> > @@ -328,6 +345,7 @@ void i915_active_acquire_barrier(struct i915_active *ref)
>> >  
>> >               llist_add((struct llist_node *)&node->base.link,
>> >                         &engine->barrier_tasks);
>> > +             intel_engine_pm_put(engine);
>> 
>> But you said that you keep the wakeref until unpin?
>
> intel_context_unpin ->
>   intel_context_active_release ->
>     i915_active_acquire_barrier

Not the first time I've tripped over the naming in here.
But I don't have better suggestions.

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 02/26] drm/i915: Skip shrinking already freed pages
  2019-06-18  7:41 ` [PATCH 02/26] drm/i915: Skip shrinking already freed pages Chris Wilson
  2019-06-18 11:59   ` Chris Wilson
@ 2019-06-18 16:06   ` Mika Kuoppala
  2019-06-18 16:22     ` Chris Wilson
  1 sibling, 1 reply; 42+ messages in thread
From: Mika Kuoppala @ 2019-06-18 16:06 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Previously, we wanted to shrink the pages of freed objects before they
> were RCU collected. However, by removing the struct_mutex serialisation
> around the active reference, we need to acquire an extra reference
> around the wait. Unfortunately this means that we have to skip objects
> that are awaiting RCU collection.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_object.c   | 47 +-------------------
>  drivers/gpu/drm/i915/gem/i915_gem_shrinker.c |  5 +++
>  2 files changed, 6 insertions(+), 46 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> index 272ce30ce1d3..1b571fd26ed4 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> @@ -149,33 +149,6 @@ void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)
>  	}
>  }
>  
> -static bool discard_backing_storage(struct drm_i915_gem_object *obj)
> -{
> -	/*
> -	 * If we are the last user of the backing storage (be it shmemfs
> -	 * pages or stolen etc), we know that the pages are going to be
> -	 * immediately released. In this case, we can then skip copying
> -	 * back the contents from the GPU.
> -	 */
> -	if (!i915_gem_object_is_shrinkable(obj))
> -		return false;
> -
> -	if (obj->mm.madv != I915_MADV_WILLNEED)
> -		return false;
> -
> -	if (!obj->base.filp)
> -		return true;
> -
> -	/* At first glance, this looks racy, but then again so would be
> -	 * userspace racing mmap against close. However, the first external
> -	 * reference to the filp can only be obtained through the
> -	 * i915_gem_mmap_ioctl() which safeguards us against the user
> -	 * acquiring such a reference whilst we are in the middle of
> -	 * freeing the object.
> -	 */
> -	return file_count(obj->base.filp) == 1;
> -}
> -
>  static void __i915_gem_free_objects(struct drm_i915_private *i915,
>  				    struct llist_node *freed)
>  {
> @@ -225,8 +198,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
>  		if (obj->ops->release)
>  			obj->ops->release(obj);
>  
> -		if (WARN_ON(i915_gem_object_has_pinned_pages(obj)))
> -			atomic_set(&obj->mm.pages_pin_count, 0);
> +		atomic_set(&obj->mm.pages_pin_count, 0);
>  		__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
>  		GEM_BUG_ON(i915_gem_object_has_pages(obj));
>  
> @@ -324,23 +296,6 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
>  {
>  	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
>  
> -	if (obj->mm.quirked)
> -		__i915_gem_object_unpin_pages(obj);
> -
> -	if (discard_backing_storage(obj)) {
> -		struct drm_i915_private *i915 = to_i915(obj->base.dev);
> -
> -		obj->mm.madv = I915_MADV_DONTNEED;
> -
> -		if (i915_gem_object_has_pages(obj)) {
> -			unsigned long flags;
> -
> -			spin_lock_irqsave(&i915->mm.obj_lock, flags);
> -			list_move_tail(&obj->mm.link, &i915->mm.purge_list);
> -			spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
> -		}
> -	}
> -
>  	/*
>  	 * Before we free the object, make sure any pure RCU-only
>  	 * read-side critical sections are complete, e.g.
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
> index c851c4029597..3a926a8755c6 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
> @@ -241,6 +241,9 @@ i915_gem_shrink(struct drm_i915_private *i915,
>  			if (!can_release_pages(obj))
>  				continue;
>  
> +			if (!kref_get_unless_zero(&obj->base.refcount))
> +				continue;
> +

The comment above, in this function, seems a little bit
stale in that it still talks about struct_mutex. Straighten it up.

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

>  			spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
>  
>  			if (unsafe_drop_pages(obj)) {
> @@ -253,7 +256,9 @@ i915_gem_shrink(struct drm_i915_private *i915,
>  				}
>  				mutex_unlock(&obj->mm.lock);
>  			}
> +
>  			scanned += obj->base.size >> PAGE_SHIFT;
> +			i915_gem_object_put(obj);
>  
>  			spin_lock_irqsave(&i915->mm.obj_lock, flags);
>  		}
> -- 
> 2.20.1
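
The crux of the patch is the kref_get_unless_zero() above: an object
whose refcount has already dropped to zero is merely waiting for RCU to
free it and must be skipped, while any other object gains a temporary
reference so the obj_lock can be dropped for the actual shrinking. A
minimal sketch of that pattern, using made-up types and helpers rather
than the real i915 ones:

struct obj {
	struct kref refcount;
	struct list_head link;
};

static unsigned long shrink_list(struct list_head *list, spinlock_t *lock)
{
	struct obj *o, *next;
	unsigned long count = 0;

	spin_lock(lock);
	list_for_each_entry_safe(o, next, list, link) {
		/* refcount already zero: the owner is freeing it, skip */
		if (!kref_get_unless_zero(&o->refcount))
			continue;

		spin_unlock(lock);
		count += drop_backing_pages(o);   /* assumed helper */
		kref_put(&o->refcount, obj_free); /* assumed release */
		spin_lock(lock);
		/*
		 * Note: the real shrinker must also cope with the list
		 * changing while the lock was dropped.
		 */
	}
	spin_unlock(lock);

	return count;
}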
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 02/26] drm/i915: Skip shrinking already freed pages
  2019-06-18 16:06   ` Mika Kuoppala
@ 2019-06-18 16:22     ` Chris Wilson
  0 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-18 16:22 UTC (permalink / raw)
  To: Mika Kuoppala, intel-gfx

Quoting Mika Kuoppala (2019-06-18 17:06:36)
> Chris Wilson <chris@chris-wilson.co.uk> writes:
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
> > index c851c4029597..3a926a8755c6 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
> > @@ -241,6 +241,9 @@ i915_gem_shrink(struct drm_i915_private *i915,
> >                       if (!can_release_pages(obj))
> >                               continue;
> >  
> > +                     if (!kref_get_unless_zero(&obj->base.refcount))
> > +                             continue;
> > +
> 
> The comment above, in this function, seems a little bit
> stale in that it still talks about struct_mutex. Straighten it up.
> 
> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

There's a series with the goal of straightening that up. :-p
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* ✗ Fi.CI.IGT: failure for series starting with [01/26] drm/i915: Keep engine alive as we retire the context
  2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
                   ` (28 preceding siblings ...)
  2019-06-18 13:45 ` [PATCH 01/26] " Mika Kuoppala
@ 2019-06-18 19:15 ` Patchwork
  29 siblings, 0 replies; 42+ messages in thread
From: Patchwork @ 2019-06-18 19:15 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [01/26] drm/i915: Keep engine alive as we retire the context
URL   : https://patchwork.freedesktop.org/series/62278/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_6290_full -> Patchwork_13320_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_13320_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_13320_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_13320_full:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_ctx_clone@vm:
    - shard-glk:          [PASS][1] -> [DMESG-WARN][2] +1 similar issue
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-glk9/igt@gem_ctx_clone@vm.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-glk6/igt@gem_ctx_clone@vm.html
    - shard-iclb:         [PASS][3] -> [DMESG-WARN][4] +2 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-iclb6/igt@gem_ctx_clone@vm.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-iclb2/igt@gem_ctx_clone@vm.html

  * igt@gem_ctx_engines@execute-one:
    - shard-skl:          [PASS][5] -> [FAIL][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-skl8/igt@gem_ctx_engines@execute-one.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-skl7/igt@gem_ctx_engines@execute-one.html
    - shard-apl:          [PASS][7] -> [FAIL][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-apl2/igt@gem_ctx_engines@execute-one.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl7/igt@gem_ctx_engines@execute-one.html
    - shard-glk:          [PASS][9] -> [FAIL][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-glk5/igt@gem_ctx_engines@execute-one.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-glk7/igt@gem_ctx_engines@execute-one.html

  * igt@gem_vm_create@async-destroy:
    - shard-skl:          [PASS][11] -> [DMESG-WARN][12] +1 similar issue
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-skl5/igt@gem_vm_create@async-destroy.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-skl10/igt@gem_vm_create@async-destroy.html

  * igt@gem_vm_create@execbuf:
    - shard-hsw:          [PASS][13] -> [DMESG-WARN][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-hsw6/igt@gem_vm_create@execbuf.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-hsw1/igt@gem_vm_create@execbuf.html
    - shard-apl:          [PASS][15] -> [DMESG-WARN][16] +1 similar issue
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-apl7/igt@gem_vm_create@execbuf.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl8/igt@gem_vm_create@execbuf.html

  * igt@runner@aborted:
    - shard-hsw:          NOTRUN -> ([FAIL][17], [FAIL][18], [FAIL][19], [FAIL][20], [FAIL][21]) ([fdo#108903] / [fdo#108904] / [fdo#108905])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-hsw4/igt@runner@aborted.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-hsw1/igt@runner@aborted.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-hsw1/igt@runner@aborted.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-hsw1/igt@runner@aborted.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-hsw1/igt@runner@aborted.html
    - shard-apl:          NOTRUN -> ([FAIL][22], [FAIL][23], [FAIL][24], [FAIL][25], [FAIL][26])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl4/igt@runner@aborted.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl8/igt@runner@aborted.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl8/igt@runner@aborted.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl8/igt@runner@aborted.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl8/igt@runner@aborted.html
    - shard-snb:          NOTRUN -> ([FAIL][27], [FAIL][28], [FAIL][29], [FAIL][30], [FAIL][31], [FAIL][32], [FAIL][33], [FAIL][34], [FAIL][35], [FAIL][36], [FAIL][37], [FAIL][38], [FAIL][39], [FAIL][40], [FAIL][41], [FAIL][42], [FAIL][43], [FAIL][44], [FAIL][45], [FAIL][46], [FAIL][47], [FAIL][48]) ([fdo#107469] / [fdo#108929])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@runner@aborted.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb7/igt@runner@aborted.html

  * igt@syncobj_basic@create-signaled:
    - shard-snb:          [PASS][49] -> [DMESG-WARN][50] +1 similar issue
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-snb5/igt@syncobj_basic@create-signaled.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb6/igt@syncobj_basic@create-signaled.html

  * igt@syncobj_wait@invalid-reset-one-illegal-handle:
    - shard-kbl:          [PASS][51] -> [DMESG-WARN][52] +3 similar issues
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-kbl1/igt@syncobj_wait@invalid-reset-one-illegal-handle.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@syncobj_wait@invalid-reset-one-illegal-handle.html

  
#### Warnings ####

  * igt@runner@aborted:
    - shard-kbl:          [FAIL][53] ([fdo#110938]) -> ([FAIL][54], [FAIL][55], [FAIL][56], [FAIL][57], [FAIL][58], [FAIL][59], [FAIL][60], [FAIL][61], [FAIL][62], [FAIL][63], [FAIL][64], [FAIL][65], [FAIL][66], [FAIL][67], [FAIL][68], [FAIL][69], [FAIL][70], [FAIL][71], [FAIL][72], [FAIL][73], [FAIL][74], [FAIL][75], [FAIL][76], [FAIL][77], [FAIL][78], [FAIL][79], [FAIL][80], [FAIL][81]) ([fdo#108903] / [fdo#108904] / [fdo#108905])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-kbl3/igt@runner@aborted.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl7/igt@runner@aborted.html
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl1/igt@runner@aborted.html
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl2/igt@runner@aborted.html
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl4/igt@runner@aborted.html

  

### Piglit changes ###

#### Possible regressions ####

  * spec@arb_shader_image_load_store@shader-mem-barrier (NEW):
    - pig-glk-j5005:      NOTRUN -> [FAIL][82] +4 similar issues
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/pig-glk-j5005/spec@arb_shader_image_load_store@shader-mem-barrier.html
    - pig-skl-6260u:      NOTRUN -> [FAIL][83]
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/pig-skl-6260u/spec@arb_shader_image_load_store@shader-mem-barrier.html

  
New tests
---------

  New tests have been introduced between CI_DRM_6290_full and Patchwork_13320_full:

### New Piglit tests (5) ###

  * spec@arb_shader_image_load_store@shader-mem-barrier:
    - Statuses : 2 fail(s)
    - Exec time: [0.13, 0.16] s

  * spec@ext_transform_feedback@order arrays lines:
    - Statuses : 1 fail(s)
    - Exec time: [0.11] s

  * spec@ext_transform_feedback@order arrays points:
    - Statuses : 1 fail(s)
    - Exec time: [0.12] s

  * spec@ext_transform_feedback@order elements points:
    - Statuses : 1 fail(s)
    - Exec time: [0.12] s

  * spec@glsl-1.30@execution@fs-execution-ordering:
    - Statuses : 1 fail(s)
    - Exec time: [0.60] s

  

Known issues
------------

  Here are the changes found in Patchwork_13320_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_eio@execbuf:
    - shard-kbl:          [PASS][84] -> [DMESG-WARN][85] ([fdo#110913 ]) +1 similar issue
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-kbl3/igt@gem_eio@execbuf.html
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl3/igt@gem_eio@execbuf.html

  * igt@gem_eio@unwedge-stress:
    - shard-apl:          [PASS][86] -> [DMESG-WARN][87] ([fdo#110913 ]) +1 similar issue
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-apl6/igt@gem_eio@unwedge-stress.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl5/igt@gem_eio@unwedge-stress.html

  * igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrashing:
    - shard-snb:          [PASS][88] -> [DMESG-WARN][89] ([fdo#110789] / [fdo#110913 ])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-snb2/igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrashing.html
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-snb5/igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrashing.html

  * igt@gem_vm_create@async-destroy:
    - shard-hsw:          [PASS][90] -> [DMESG-WARN][91] ([fdo#110789])
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-hsw7/igt@gem_vm_create@async-destroy.html
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-hsw4/igt@gem_vm_create@async-destroy.html

  * igt@kms_cursor_crc@pipe-a-cursor-256x256-sliding:
    - shard-skl:          [PASS][92] -> [FAIL][93] ([fdo#103232])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-skl9/igt@kms_cursor_crc@pipe-a-cursor-256x256-sliding.html
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-skl3/igt@kms_cursor_crc@pipe-a-cursor-256x256-sliding.html

  * igt@kms_dp_dsc@basic-dsc-enable-edp:
    - shard-iclb:         [PASS][94] -> [SKIP][95] ([fdo#109349])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-iclb2/igt@kms_dp_dsc@basic-dsc-enable-edp.html
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-iclb3/igt@kms_dp_dsc@basic-dsc-enable-edp.html

  * igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-ytiled:
    - shard-skl:          [PASS][96] -> [FAIL][97] ([fdo#103184] / [fdo#103232])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-skl9/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-ytiled.html
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-skl3/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-ytiled.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-msflip-blt:
    - shard-iclb:         [PASS][98] -> [FAIL][99] ([fdo#103167]) +8 similar issues
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-iclb8/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-msflip-blt.html
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-iclb7/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-pwrite:
    - shard-hsw:          [PASS][100] -> [SKIP][101] ([fdo#109271]) +18 similar issues
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-hsw7/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-pwrite.html
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-hsw1/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-pwrite.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
    - shard-skl:          [PASS][102] -> [INCOMPLETE][103] ([fdo#104108] / [fdo#107773])
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-skl3/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-skl1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes:
    - shard-kbl:          [PASS][104] -> [DMESG-WARN][105] ([fdo#108566])
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-kbl2/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-kbl6/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes.html

  * igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min:
    - shard-skl:          [PASS][106] -> [FAIL][107] ([fdo#108145])
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-skl9/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-skl3/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html

  * igt@kms_psr@psr2_primary_mmap_cpu:
    - shard-iclb:         [PASS][108] -> [SKIP][109] ([fdo#109441])
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-iclb2/igt@kms_psr@psr2_primary_mmap_cpu.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-iclb3/igt@kms_psr@psr2_primary_mmap_cpu.html

  * igt@kms_sysfs_edid_timing:
    - shard-hsw:          [PASS][110] -> [FAIL][111] ([fdo#100047])
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-hsw7/igt@kms_sysfs_edid_timing.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-hsw1/igt@kms_sysfs_edid_timing.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
    - shard-apl:          [PASS][112] -> [DMESG-WARN][113] ([fdo#108566]) +2 similar issues
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-apl4/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl5/igt@kms_vblank@pipe-a-ts-continuation-suspend.html

  
#### Possible fixes ####

  * igt@gem_eio@in-flight-internal-immediate:
    - shard-apl:          [DMESG-WARN][114] ([fdo#110913 ]) -> [PASS][115] +1 similar issue
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-apl3/igt@gem_eio@in-flight-internal-immediate.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl7/igt@gem_eio@in-flight-internal-immediate.html

  * igt@gem_exec_schedule@semaphore-resolve:
    - shard-skl:          [FAIL][116] ([fdo#110519]) -> [PASS][117]
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-skl3/igt@gem_exec_schedule@semaphore-resolve.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-skl7/igt@gem_exec_schedule@semaphore-resolve.html
    - shard-apl:          [FAIL][118] ([fdo#110519]) -> [PASS][119]
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-apl7/igt@gem_exec_schedule@semaphore-resolve.html
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-apl6/igt@gem_exec_schedule@semaphore-resolve.html
    - shard-glk:          [FAIL][120] ([fdo#110519]) -> [PASS][121]
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-glk7/igt@gem_exec_schedule@semaphore-resolve.html
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-glk4/igt@gem_exec_schedule@semaphore-resolve.html
    - shard-iclb:         [FAIL][122] ([fdo#110519]) -> [PASS][123]
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/shard-iclb6/igt@gem_exec_schedule@semaphore-resolve.html
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/shard-iclb7/igt@gem_exec_schedule@semaphore-resolve.html

  * igt@gem_userptr_blits@map-fixed-invalidate-busy:
    - shard-snb:          [DMESG-WARN][124] ([fdo#110913 ]) -> [PASS][125]
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6290/

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13320/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 08/26] drm/i915: Make the semaphore saturation mask global
  2019-06-18  7:41 ` [PATCH 08/26] drm/i915: Make the semaphore saturation mask global Chris Wilson
@ 2019-06-19 10:45   ` Tvrtko Ursulin
  0 siblings, 0 replies; 42+ messages in thread
From: Tvrtko Ursulin @ 2019-06-19 10:45 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx; +Cc: Dmitry Ermilov


On 18/06/2019 08:41, Chris Wilson wrote:
> The idea behind keeping the saturation mask local to a context backfired
> spectacularly. The premise with the local mask was that we would be more
> proactive in attempting to use semaphores after each time the context
> idled, and that all new contexts would attempt to use semaphores
> ignoring the current state of the system. This turns out to be horribly
> optimistic. If the system state is still oversaturated and the existing
> workloads have all stopped using semaphores, the new workloads would
> attempt to use semaphores and be deprioritised behind real work. The
> new contexts would not switch off using semaphores until their initial
> batch of low priority work had completed. Given a sufficient backlog of
> work at equal user priority, this would completely starve the new work of any
> GPU time.
> 
> To compensate, remove the local tracking in favour of keeping it as
> global state on the engine -- once the system is saturated and
> semaphores are disabled, everyone stops attempting to use semaphores
> until the system is idle again. One of the reasons for preferring local
> context tracking was that it worked with virtual engines, so for
> switching to global state we could either do a complete check of all the
> virtual siblings or simply disable semaphores for those requests. This
> takes the simpler approach of disabling semaphores on virtual engines.
> 
> The downside is that the decision that the engine is saturated is a
> local measure -- we are only checking whether or not this context was
> scheduled in a timely fashion, it may be legitimately delayed due to user
> priorities. We still have the same dilemma though, that we do not want
> to employ the semaphore poll unless it will be used.
> 
> Fixes: ca6e56f654e7 ("drm/i915: Disable semaphore busywaits on saturated systems")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
> Cc: Dmitry Ermilov <dmitry.ermilov@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_context.c       | 2 --
>   drivers/gpu/drm/i915/gt/intel_context_types.h | 2 --
>   drivers/gpu/drm/i915/gt/intel_engine_pm.c     | 2 ++
>   drivers/gpu/drm/i915/gt/intel_engine_types.h  | 2 ++
>   drivers/gpu/drm/i915/gt/intel_lrc.c           | 2 +-
>   drivers/gpu/drm/i915/i915_request.c           | 4 ++--
>   6 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> index 42f45744d859..2c454f227c2e 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context.c
> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> @@ -142,7 +142,6 @@ intel_context_init(struct intel_context *ce,
>   	ce->engine = engine;
>   	ce->ops = engine->cops;
>   	ce->sseu = engine->sseu;
> -	ce->saturated = 0;
>   
>   	INIT_LIST_HEAD(&ce->signal_link);
>   	INIT_LIST_HEAD(&ce->signals);
> @@ -223,7 +222,6 @@ void intel_context_enter_engine(struct intel_context *ce)
>   
>   void intel_context_exit_engine(struct intel_context *ce)
>   {
> -	ce->saturated = 0;
>   	intel_engine_pm_put(ce->engine);
>   }
>   
> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
> index b565c3ff4378..4c0e211c715d 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
> @@ -58,8 +58,6 @@ struct intel_context {
>   	atomic_t pin_count;
>   	struct mutex pin_mutex; /* guards pinning and associated on-gpuing */
>   
> -	intel_engine_mask_t saturated; /* submitting semaphores too late? */
> -
>   	/**
>   	 * active: Active tracker for the rq activity (inc. external) on this
>   	 * intel_context object.
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> index d14e352b0b17..2ce00d3dc42a 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
> @@ -100,6 +100,8 @@ static int __engine_park(struct intel_wakeref *wf)
>   	struct intel_engine_cs *engine =
>   		container_of(wf, typeof(*engine), wakeref);
>   
> +	engine->saturated = 0;
> +
>   	/*
>   	 * If one and only one request is completed between pm events,
>   	 * we know that we are inside the kernel context and it is
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> index 11a25f060fed..1cbe10a0fec7 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> @@ -258,6 +258,8 @@ struct intel_engine_cs {
>   	struct intel_context *kernel_context; /* pinned */
>   	struct intel_context *preempt_context; /* pinned; optional */
>   
> +	intel_engine_mask_t saturated; /* submitting semaphores too late? */
> +
>   	unsigned long serial;
>   
>   	unsigned long wakeref_serial;
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index 0563fe8398c5..bbbdc63906c6 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -3198,7 +3198,6 @@ static void virtual_context_exit(struct intel_context *ce)
>   	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
>   	unsigned int n;
>   
> -	ce->saturated = 0;
>   	for (n = 0; n < ve->num_siblings; n++)
>   		intel_engine_pm_put(ve->siblings[n]);
>   }
> @@ -3397,6 +3396,7 @@ intel_execlists_create_virtual(struct i915_gem_context *ctx,
>   	ve->base.uabi_class = I915_ENGINE_CLASS_INVALID;
>   	ve->base.instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
>   	ve->base.flags = I915_ENGINE_IS_VIRTUAL;
> +	ve->base.saturated = ALL_ENGINES;

This could use a comment.

>   
>   	snprintf(ve->base.name, sizeof(ve->base.name), "virtual");
>   
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 98e4743b03be..27b9893fa8e3 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -418,7 +418,7 @@ void __i915_request_submit(struct i915_request *request)
>   	 */
>   	if (request->sched.semaphores &&
>   	    i915_sw_fence_signaled(&request->semaphore))
> -		request->hw_context->saturated |= request->sched.semaphores;
> +		engine->saturated |= request->sched.semaphores;
>   
>   	/* We may be recursing from the signal callback of another i915 fence */
>   	spin_lock_nested(&request->lock, SINGLE_DEPTH_NESTING);
> @@ -798,7 +798,7 @@ already_busywaiting(struct i915_request *rq)
>   	 *
>   	 * See the are-we-too-late? check in __i915_request_submit().
>   	 */
> -	return rq->sched.semaphores | rq->hw_context->saturated;
> +	return rq->sched.semaphores | rq->engine->saturated;
>   }
>   
>   static int
> 

Otherwise:

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
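
Pulling the scattered hunks together, the lifecycle after this patch is
roughly the following (an illustrative sketch with invented helper names
around the fields from the diff), plus the virtual-engine hunk flagged
above, which simply starts from saturated = ALL_ENGINES to opt out of
semaphores altogether:

static void note_saturation_on_submit(struct i915_request *rq)
{
	/*
	 * The semaphore signalled before we could even submit, so the
	 * busywait cannot have been useful: remember that on the engine.
	 */
	if (rq->sched.semaphores &&
	    i915_sw_fence_signaled(&rq->semaphore))
		rq->engine->saturated |= rq->sched.semaphores;
}

static bool may_emit_semaphore(const struct i915_request *rq)
{
	/* stay off semaphores until the whole engine has idled again */
	return !(rq->sched.semaphores | rq->engine->saturated);
}

static void on_engine_park(struct intel_engine_cs *engine)
{
	/* the engine idled: give semaphores another chance */
	engine->saturated = 0;
}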
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 03/26] drm/i915: Stop passing I915_WAIT_LOCKED to i915_request_wait()
  2019-06-18  7:41 ` [PATCH 03/26] drm/i915: Stop passing I915_WAIT_LOCKED to i915_request_wait() Chris Wilson
@ 2019-06-19 11:44   ` Mika Kuoppala
  0 siblings, 0 replies; 42+ messages in thread
From: Mika Kuoppala @ 2019-06-19 11:44 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Since commit eb8d0f5af4ec ("drm/i915: Remove GPU reset dependence on
> struct_mutex"), the I915_WAIT_LOCKED flag passed to i915_request_wait()
> has been defunct. Now go ahead and remove it from all callers.
>
> References: eb8d0f5af4ec ("drm/i915: Remove GPU reset dependence on struct_mutex")

This reminds me of the warm and soothing feeling I got
seeing RESET_HANDOFF vanishing. I like simple things.

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

And the comment about I915_WAIT_LOCKED was also amended, so

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

> ---
>  .../drm/i915/gem/selftests/i915_gem_context.c | 14 +++------
>  drivers/gpu/drm/i915/gt/intel_ringbuffer.c    |  2 +-
>  drivers/gpu/drm/i915/gt/intel_workarounds.c   |  2 +-
>  drivers/gpu/drm/i915/gt/selftest_hangcheck.c  |  9 ++----
>  drivers/gpu/drm/i915/gt/selftest_lrc.c        | 24 ++++++---------
>  .../gpu/drm/i915/gt/selftest_workarounds.c    |  6 ++--
>  drivers/gpu/drm/i915/i915_active.h            |  2 +-
>  drivers/gpu/drm/i915/i915_request.c           |  4 ---
>  drivers/gpu/drm/i915/i915_trace.h             |  3 +-
>  drivers/gpu/drm/i915/selftests/i915_request.c | 30 +++++++------------
>  .../gpu/drm/i915/selftests/i915_timeline.c    |  6 ++--
>  11 files changed, 36 insertions(+), 66 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> index 03ac5003abf1..eaa2b16574c7 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> @@ -83,9 +83,7 @@ static int live_nop_switch(void *arg)
>  			}
>  			i915_request_add(rq);
>  		}
> -		if (i915_request_wait(rq,
> -				      I915_WAIT_LOCKED,
> -				      HZ / 5) < 0) {
> +		if (i915_request_wait(rq, 0, HZ / 5) < 0) {
>  			pr_err("Failed to populated %d contexts\n", nctx);
>  			i915_gem_set_wedged(i915);
>  			err = -EIO;
> @@ -128,9 +126,7 @@ static int live_nop_switch(void *arg)
>  
>  				i915_request_add(rq);
>  			}
> -			if (i915_request_wait(rq,
> -					      I915_WAIT_LOCKED,
> -					      HZ / 5) < 0) {
> +			if (i915_request_wait(rq, 0, HZ / 5) < 0) {
>  				pr_err("Switching between %ld contexts timed out\n",
>  				       prime);
>  				i915_gem_set_wedged(i915);
> @@ -893,7 +889,7 @@ __read_slice_count(struct drm_i915_private *i915,
>  	if (spin)
>  		igt_spinner_end(spin);
>  
> -	ret = i915_request_wait(rq, I915_WAIT_LOCKED, MAX_SCHEDULE_TIMEOUT);
> +	ret = i915_request_wait(rq, 0, MAX_SCHEDULE_TIMEOUT);
>  	i915_request_put(rq);
>  	if (ret < 0)
>  		return ret;
> @@ -980,9 +976,7 @@ __sseu_finish(struct drm_i915_private *i915,
>  		igt_spinner_end(spin);
>  
>  	if ((flags & TEST_IDLE) && ret == 0) {
> -		ret = i915_gem_wait_for_idle(i915,
> -					     I915_WAIT_LOCKED,
> -					     MAX_SCHEDULE_TIMEOUT);
> +		ret = i915_gem_wait_for_idle(i915, 0, MAX_SCHEDULE_TIMEOUT);
>  		if (ret)
>  			return ret;
>  
> diff --git a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
> index bd35ad202459..c6023bc9452d 100644
> --- a/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/gt/intel_ringbuffer.c
> @@ -1815,7 +1815,7 @@ static noinline int wait_for_space(struct intel_ring *ring, unsigned int bytes)
>  		return -ENOSPC;
>  
>  	timeout = i915_request_wait(target,
> -				    I915_WAIT_INTERRUPTIBLE | I915_WAIT_LOCKED,
> +				    I915_WAIT_INTERRUPTIBLE,
>  				    MAX_SCHEDULE_TIMEOUT);
>  	if (timeout < 0)
>  		return timeout;
> diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> index 165b0a45e009..9624d9e776e3 100644
> --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
> +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> @@ -1384,7 +1384,7 @@ static int engine_wa_list_verify(struct intel_context *ce,
>  		goto err_vma;
>  
>  	i915_request_add(rq);
> -	if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
> +	if (i915_request_wait(rq, 0, HZ / 5) < 0) {
>  		err = -ETIME;
>  		goto err_vma;
>  	}
> diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> index b0b2998e56b8..1ee4c923044f 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> @@ -339,8 +339,7 @@ static int igt_hang_sanitycheck(void *arg)
>  
>  		timeout = 0;
>  		igt_wedge_on_timeout(&w, i915, HZ / 10 /* 100ms timeout*/)
> -			timeout = i915_request_wait(rq,
> -						    I915_WAIT_LOCKED,
> +			timeout = i915_request_wait(rq, 0,
>  						    MAX_SCHEDULE_TIMEOUT);
>  		if (i915_reset_failed(i915))
>  			timeout = -EIO;
> @@ -1098,7 +1097,7 @@ static int igt_reset_wait(void *arg)
>  
>  	reset_count = fake_hangcheck(i915, ALL_ENGINES);
>  
> -	timeout = i915_request_wait(rq, I915_WAIT_LOCKED, 10);
> +	timeout = i915_request_wait(rq, 0, 10);
>  	if (timeout < 0) {
>  		pr_err("i915_request_wait failed on a stuck request: err=%ld\n",
>  		       timeout);
> @@ -1666,9 +1665,7 @@ static int igt_atomic_reset_engine(struct intel_engine_cs *engine,
>  		struct igt_wedge_me w;
>  
>  		igt_wedge_on_timeout(&w, i915, HZ / 20 /* 50ms timeout*/)
> -			i915_request_wait(rq,
> -					  I915_WAIT_LOCKED,
> -					  MAX_SCHEDULE_TIMEOUT);
> +			i915_request_wait(rq, 0, MAX_SCHEDULE_TIMEOUT);
>  		if (i915_reset_failed(i915))
>  			err = -EIO;
>  	}
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index d84d31e3da19..401e8b539297 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -192,7 +192,7 @@ static int live_busywait_preempt(void *arg)
>  		}
>  
>  		/* Low priority request should be busywaiting now */
> -		if (i915_request_wait(lo, I915_WAIT_LOCKED, 1) != -ETIME) {
> +		if (i915_request_wait(lo, 0, 1) != -ETIME) {
>  			pr_err("%s: Busywaiting request did not!\n",
>  			       engine->name);
>  			err = -EIO;
> @@ -220,7 +220,7 @@ static int live_busywait_preempt(void *arg)
>  		intel_ring_advance(hi, cs);
>  		i915_request_add(hi);
>  
> -		if (i915_request_wait(lo, I915_WAIT_LOCKED, HZ / 5) < 0) {
> +		if (i915_request_wait(lo, 0, HZ / 5) < 0) {
>  			struct drm_printer p = drm_info_printer(i915->drm.dev);
>  
>  			pr_err("%s: Failed to preempt semaphore busywait!\n",
> @@ -739,7 +739,6 @@ static int live_suppress_wait_preempt(void *arg)
>  			GEM_BUG_ON(!i915_request_started(rq[0]));
>  
>  			if (i915_request_wait(rq[depth],
> -					      I915_WAIT_LOCKED |
>  					      I915_WAIT_PRIORITY,
>  					      1) != -ETIME) {
>  				pr_err("%s: Waiter depth:%d completed!\n",
> @@ -841,7 +840,7 @@ static int live_chain_preempt(void *arg)
>  			 __func__, engine->name, ring_size);
>  
>  		igt_spinner_end(&lo.spin);
> -		if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 2) < 0) {
> +		if (i915_request_wait(rq, 0, HZ / 2) < 0) {
>  			pr_err("Timed out waiting to flush %s\n", engine->name);
>  			goto err_wedged;
>  		}
> @@ -882,7 +881,7 @@ static int live_chain_preempt(void *arg)
>  			engine->schedule(rq, &attr);
>  
>  			igt_spinner_end(&hi.spin);
> -			if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
> +			if (i915_request_wait(rq, 0, HZ / 5) < 0) {
>  				struct drm_printer p =
>  					drm_info_printer(i915->drm.dev);
>  
> @@ -898,7 +897,7 @@ static int live_chain_preempt(void *arg)
>  			if (IS_ERR(rq))
>  				goto err_wedged;
>  			i915_request_add(rq);
> -			if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
> +			if (i915_request_wait(rq, 0, HZ / 5) < 0) {
>  				struct drm_printer p =
>  					drm_info_printer(i915->drm.dev);
>  
> @@ -1396,9 +1395,7 @@ static int nop_virtual_engine(struct drm_i915_private *i915,
>  		}
>  
>  		for (nc = 0; nc < nctx; nc++) {
> -			if (i915_request_wait(request[nc],
> -					      I915_WAIT_LOCKED,
> -					      HZ / 10) < 0) {
> +			if (i915_request_wait(request[nc], 0, HZ / 10) < 0) {
>  				pr_err("%s(%s): wait for %llx:%lld timed out\n",
>  				       __func__, ve[0]->engine->name,
>  				       request[nc]->fence.context,
> @@ -1545,7 +1542,7 @@ static int mask_virtual_engine(struct drm_i915_private *i915,
>  	}
>  
>  	for (n = 0; n < nsibling; n++) {
> -		if (i915_request_wait(request[n], I915_WAIT_LOCKED, HZ / 10) < 0) {
> +		if (i915_request_wait(request[n], 0, HZ / 10) < 0) {
>  			pr_err("%s(%s): wait for %llx:%lld timed out\n",
>  			       __func__, ve->engine->name,
>  			       request[n]->fence.context,
> @@ -1720,9 +1717,7 @@ static int bond_virtual_engine(struct drm_i915_private *i915,
>  		}
>  		onstack_fence_fini(&fence);
>  
> -		if (i915_request_wait(rq[0],
> -				      I915_WAIT_LOCKED,
> -				      HZ / 10) < 0) {
> +		if (i915_request_wait(rq[0], 0, HZ / 10) < 0) {
>  			pr_err("Master request did not execute (on %s)!\n",
>  			       rq[0]->engine->name);
>  			err = -EIO;
> @@ -1730,8 +1725,7 @@ static int bond_virtual_engine(struct drm_i915_private *i915,
>  		}
>  
>  		for (n = 0; n < nsibling; n++) {
> -			if (i915_request_wait(rq[n + 1],
> -					      I915_WAIT_LOCKED,
> +			if (i915_request_wait(rq[n + 1], 0,
>  					      MAX_SCHEDULE_TIMEOUT) < 0) {
>  				err = -EIO;
>  				goto out;
> diff --git a/drivers/gpu/drm/i915/gt/selftest_workarounds.c b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
> index 91449d5157f6..87426f9183cd 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_workarounds.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_workarounds.c
> @@ -541,7 +541,7 @@ static int check_dirty_whitelist(struct i915_gem_context *ctx,
>  		if (err)
>  			goto out_batch;
>  
> -		if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
> +		if (i915_request_wait(rq, 0, HZ / 5) < 0) {
>  			pr_err("%s: Futzing %x timedout; cancelling test\n",
>  			       engine->name, reg);
>  			i915_gem_set_wedged(ctx->i915);
> @@ -739,7 +739,7 @@ static int read_whitelisted_registers(struct i915_gem_context *ctx,
>  err_req:
>  	i915_request_add(rq);
>  
> -	if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0)
> +	if (i915_request_wait(rq, 0, HZ / 5) < 0)
>  		err = -EIO;
>  
>  	return err;
> @@ -790,7 +790,7 @@ static int scrub_whitelisted_registers(struct i915_gem_context *ctx,
>  
>  err_request:
>  	i915_request_add(rq);
> -	if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0)
> +	if (i915_request_wait(rq, 0, HZ / 5) < 0)
>  		err = -EIO;
>  
>  err_unpin:
> diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
> index d55d37673944..c14eebf6d074 100644
> --- a/drivers/gpu/drm/i915/i915_active.h
> +++ b/drivers/gpu/drm/i915/i915_active.h
> @@ -330,7 +330,7 @@ i915_active_request_retire(struct i915_active_request *active,
>  		return 0;
>  
>  	ret = i915_request_wait(request,
> -				I915_WAIT_INTERRUPTIBLE | I915_WAIT_LOCKED,
> +				I915_WAIT_INTERRUPTIBLE,
>  				MAX_SCHEDULE_TIMEOUT);
>  	if (ret < 0)
>  		return ret;
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 9819483d1b5d..d7fd77e8a789 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -1360,10 +1360,6 @@ static void request_wait_wake(struct dma_fence *fence, struct dma_fence_cb *cb)
>   * maximum of @timeout jiffies (with MAX_SCHEDULE_TIMEOUT implying an
>   * unbounded wait).
>   *
> - * If the caller holds the struct_mutex, the caller must pass I915_WAIT_LOCKED
> - * in via the flags, and vice versa if the struct_mutex is not held, the caller
> - * must not specify that the wait is locked.
> - *
>   * Returns the remaining time (in jiffies) if the request completed, which may
>   * be zero or -ETIME if the request is unfinished after the timeout expires.
>   * May return -EINTR is called with I915_WAIT_INTERRUPTIBLE and a signal is
> diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
> index 5c8cfaa70d72..f4ce643b3bc3 100644
> --- a/drivers/gpu/drm/i915/i915_trace.h
> +++ b/drivers/gpu/drm/i915/i915_trace.h
> @@ -863,10 +863,9 @@ TRACE_EVENT(i915_request_wait_begin,
>  			   __entry->flags = flags;
>  			   ),
>  
> -	    TP_printk("dev=%u, engine=%u:%u, hw_id=%u, ctx=%llu, seqno=%u, blocking=%u, flags=0x%x",
> +	    TP_printk("dev=%u, engine=%u:%u, hw_id=%u, ctx=%llu, seqno=%u, flags=0x%x",
>  		      __entry->dev, __entry->class, __entry->instance,
>  		      __entry->hw_id, __entry->ctx, __entry->seqno,
> -		      !!(__entry->flags & I915_WAIT_LOCKED),
>  		      __entry->flags)
>  );
>  
> diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
> index 3de24f3d4ed5..298bb7116c51 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_request.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_request.c
> @@ -74,12 +74,12 @@ static int igt_wait_request(void *arg)
>  		goto out_unlock;
>  	}
>  
> -	if (i915_request_wait(request, I915_WAIT_LOCKED, 0) != -ETIME) {
> +	if (i915_request_wait(request, 0, 0) != -ETIME) {
>  		pr_err("request wait (busy query) succeeded (expected timeout before submit!)\n");
>  		goto out_unlock;
>  	}
>  
> -	if (i915_request_wait(request, I915_WAIT_LOCKED, T) != -ETIME) {
> +	if (i915_request_wait(request, 0, T) != -ETIME) {
>  		pr_err("request wait succeeded (expected timeout before submit!)\n");
>  		goto out_unlock;
>  	}
> @@ -91,7 +91,7 @@ static int igt_wait_request(void *arg)
>  
>  	i915_request_add(request);
>  
> -	if (i915_request_wait(request, I915_WAIT_LOCKED, 0) != -ETIME) {
> +	if (i915_request_wait(request, 0, 0) != -ETIME) {
>  		pr_err("request wait (busy query) succeeded (expected timeout after submit!)\n");
>  		goto out_unlock;
>  	}
> @@ -101,12 +101,12 @@ static int igt_wait_request(void *arg)
>  		goto out_unlock;
>  	}
>  
> -	if (i915_request_wait(request, I915_WAIT_LOCKED, T / 2) != -ETIME) {
> +	if (i915_request_wait(request, 0, T / 2) != -ETIME) {
>  		pr_err("request wait succeeded (expected timeout!)\n");
>  		goto out_unlock;
>  	}
>  
> -	if (i915_request_wait(request, I915_WAIT_LOCKED, T) == -ETIME) {
> +	if (i915_request_wait(request, 0, T) == -ETIME) {
>  		pr_err("request wait timed out!\n");
>  		goto out_unlock;
>  	}
> @@ -116,7 +116,7 @@ static int igt_wait_request(void *arg)
>  		goto out_unlock;
>  	}
>  
> -	if (i915_request_wait(request, I915_WAIT_LOCKED, T) == -ETIME) {
> +	if (i915_request_wait(request, 0, T) == -ETIME) {
>  		pr_err("request wait timed out when already complete!\n");
>  		goto out_unlock;
>  	}
> @@ -574,9 +574,7 @@ static int live_nop_request(void *arg)
>  
>  				i915_request_add(request);
>  			}
> -			i915_request_wait(request,
> -					  I915_WAIT_LOCKED,
> -					  MAX_SCHEDULE_TIMEOUT);
> +			i915_request_wait(request, 0, MAX_SCHEDULE_TIMEOUT);
>  
>  			times[1] = ktime_sub(ktime_get_raw(), times[1]);
>  			if (prime == 1)
> @@ -706,9 +704,7 @@ static int live_empty_request(void *arg)
>  			err = PTR_ERR(request);
>  			goto out_batch;
>  		}
> -		i915_request_wait(request,
> -				  I915_WAIT_LOCKED,
> -				  MAX_SCHEDULE_TIMEOUT);
> +		i915_request_wait(request, 0, MAX_SCHEDULE_TIMEOUT);
>  
>  		for_each_prime_number_from(prime, 1, 8192) {
>  			times[1] = ktime_get_raw();
> @@ -720,9 +716,7 @@ static int live_empty_request(void *arg)
>  					goto out_batch;
>  				}
>  			}
> -			i915_request_wait(request,
> -					  I915_WAIT_LOCKED,
> -					  MAX_SCHEDULE_TIMEOUT);
> +			i915_request_wait(request, 0, MAX_SCHEDULE_TIMEOUT);
>  
>  			times[1] = ktime_sub(ktime_get_raw(), times[1]);
>  			if (prime == 1)
> @@ -895,8 +889,7 @@ static int live_all_engines(void *arg)
>  	for_each_engine(engine, i915, id) {
>  		long timeout;
>  
> -		timeout = i915_request_wait(request[id],
> -					    I915_WAIT_LOCKED,
> +		timeout = i915_request_wait(request[id], 0,
>  					    MAX_SCHEDULE_TIMEOUT);
>  		if (timeout < 0) {
>  			err = timeout;
> @@ -1013,8 +1006,7 @@ static int live_sequential_engines(void *arg)
>  			goto out_request;
>  		}
>  
> -		timeout = i915_request_wait(request[id],
> -					    I915_WAIT_LOCKED,
> +		timeout = i915_request_wait(request[id], 0,
>  					    MAX_SCHEDULE_TIMEOUT);
>  		if (timeout < 0) {
>  			err = timeout;
> diff --git a/drivers/gpu/drm/i915/selftests/i915_timeline.c b/drivers/gpu/drm/i915/selftests/i915_timeline.c
> index 724bf3650b3e..76d3977f1d4b 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_timeline.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_timeline.c
> @@ -724,7 +724,7 @@ static int live_hwsp_wrap(void *arg)
>  
>  		i915_request_add(rq);
>  
> -		if (i915_request_wait(rq, I915_WAIT_LOCKED, HZ / 5) < 0) {
> +		if (i915_request_wait(rq, 0, HZ / 5) < 0) {
>  			pr_err("Wait for timeline writes timed out!\n");
>  			err = -EIO;
>  			goto out;
> @@ -797,9 +797,7 @@ static int live_hwsp_recycle(void *arg)
>  				goto out;
>  			}
>  
> -			if (i915_request_wait(rq,
> -					      I915_WAIT_LOCKED,
> -					      HZ / 5) < 0) {
> +			if (i915_request_wait(rq, 0, HZ / 5) < 0) {
>  				pr_err("Wait for timeline writes timed out!\n");
>  				i915_timeline_put(tl);
>  				err = -EIO;
> -- 
> 2.20.1
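
For reference, with the flag gone every caller reduces to the same
shape; a minimal sketch (timeout and error handling trimmed to the
essentials, wrapper name invented):

static int wait_for_rq(struct i915_request *rq)
{
	long remaining;

	remaining = i915_request_wait(rq, I915_WAIT_INTERRUPTIBLE, HZ / 5);
	if (remaining < 0)
		return remaining; /* -ETIME on timeout, -EINTR on signal */

	return 0; /* completed; 'remaining' held the unused jiffies */
}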
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 04/26] drm/i915: Flush the execution-callbacks on retiring
  2019-06-18  7:41 ` [PATCH 04/26] drm/i915: Flush the execution-callbacks on retiring Chris Wilson
@ 2019-06-19 13:12   ` Mika Kuoppala
  2019-06-19 13:18     ` Chris Wilson
  0 siblings, 1 reply; 42+ messages in thread
From: Mika Kuoppala @ 2019-06-19 13:12 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> In the unlikely case the request completes while we regard it as not even
> executing on the GPU (see the next patch!), we have to flush any pending
> execution callbacks at retirement and ensure that we do not add any
> more.
>

I did see the next patch. Looked like a mountain.

Well we don't lose warnings and we should never see
a precompleted request with current codebase so,

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_request.c | 93 +++++++++++++++--------------
>  1 file changed, 49 insertions(+), 44 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index d7fd77e8a789..51b068a57193 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -119,6 +119,50 @@ const struct dma_fence_ops i915_fence_ops = {
>  	.release = i915_fence_release,
>  };
>  
> +static void irq_execute_cb(struct irq_work *wrk)
> +{
> +	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
> +
> +	i915_sw_fence_complete(cb->fence);
> +	kmem_cache_free(global.slab_execute_cbs, cb);
> +}
> +
> +static void irq_execute_cb_hook(struct irq_work *wrk)
> +{
> +	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
> +
> +	cb->hook(container_of(cb->fence, struct i915_request, submit),
> +		 &cb->signal->fence);
> +	i915_request_put(cb->signal);
> +
> +	irq_execute_cb(wrk);
> +}
> +
> +static void __notify_execute_cb(struct i915_request *rq)
> +{
> +	struct execute_cb *cb;
> +
> +	lockdep_assert_held(&rq->lock);
> +
> +	if (list_empty(&rq->execute_cb))
> +		return;
> +
> +	list_for_each_entry(cb, &rq->execute_cb, link)
> +		irq_work_queue(&cb->work);
> +
> +	/*
> +	 * XXX Rollback on __i915_request_unsubmit()
> +	 *
> +	 * In the future, perhaps when we have an active time-slicing scheduler,
> +	 * it will be interesting to unsubmit parallel execution and remove
> +	 * busywaits from the GPU until their master is restarted. This is
> +	 * quite hairy, we have to carefully rollback the fence and do a
> +	 * preempt-to-idle cycle on the target engine, all the while the
> +	 * master execute_cb may refire.
> +	 */
> +	INIT_LIST_HEAD(&rq->execute_cb);
> +}
> +
>  static inline void
>  i915_request_remove_from_client(struct i915_request *request)
>  {
> @@ -246,6 +290,11 @@ static bool i915_request_retire(struct i915_request *rq)
>  		GEM_BUG_ON(!atomic_read(&rq->i915->gt_pm.rps.num_waiters));
>  		atomic_dec(&rq->i915->gt_pm.rps.num_waiters);
>  	}
> +	if (!test_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags)) {
> +		set_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags);
> +		__notify_execute_cb(rq);
> +	}
> +	GEM_BUG_ON(!list_empty(&rq->execute_cb));
>  	spin_unlock(&rq->lock);
>  
>  	local_irq_enable();
> @@ -285,50 +334,6 @@ void i915_request_retire_upto(struct i915_request *rq)
>  	} while (i915_request_retire(tmp) && tmp != rq);
>  }
>  
> -static void irq_execute_cb(struct irq_work *wrk)
> -{
> -	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
> -
> -	i915_sw_fence_complete(cb->fence);
> -	kmem_cache_free(global.slab_execute_cbs, cb);
> -}
> -
> -static void irq_execute_cb_hook(struct irq_work *wrk)
> -{
> -	struct execute_cb *cb = container_of(wrk, typeof(*cb), work);
> -
> -	cb->hook(container_of(cb->fence, struct i915_request, submit),
> -		 &cb->signal->fence);
> -	i915_request_put(cb->signal);
> -
> -	irq_execute_cb(wrk);
> -}
> -
> -static void __notify_execute_cb(struct i915_request *rq)
> -{
> -	struct execute_cb *cb;
> -
> -	lockdep_assert_held(&rq->lock);
> -
> -	if (list_empty(&rq->execute_cb))
> -		return;
> -
> -	list_for_each_entry(cb, &rq->execute_cb, link)
> -		irq_work_queue(&cb->work);
> -
> -	/*
> -	 * XXX Rollback on __i915_request_unsubmit()
> -	 *
> -	 * In the future, perhaps when we have an active time-slicing scheduler,
> -	 * it will be interesting to unsubmit parallel execution and remove
> -	 * busywaits from the GPU until their master is restarted. This is
> -	 * quite hairy, we have to carefully rollback the fence and do a
> -	 * preempt-to-idle cycle on the target engine, all the while the
> -	 * master execute_cb may refire.
> -	 */
> -	INIT_LIST_HEAD(&rq->execute_cb);
> -}
> -
>  static int
>  __i915_request_await_execution(struct i915_request *rq,
>  			       struct i915_request *signal,
> -- 
> 2.20.1
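
Worth noting for readers unfamiliar with the facility: __notify_execute_cb()
only queues irq_work while holding rq->lock; the callbacks themselves run
later from the irq_work interrupt, outside that lock. A minimal, generic
sketch of the deferral pattern (nothing i915-specific, helper name assumed):

#include <linux/irq_work.h>

struct deferred_cb {
	struct irq_work work;
	/* ... whatever the callback needs ... */
};

static void deferred_cb_fn(struct irq_work *wrk)
{
	struct deferred_cb *cb = container_of(wrk, struct deferred_cb, work);

	/* runs later, in hard-irq context, outside the caller's locks */
	complete_the_callback(cb); /* assumed helper */
}

static void queue_deferred_cb(struct deferred_cb *cb)
{
	init_irq_work(&cb->work, deferred_cb_fn);
	irq_work_queue(&cb->work);
}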
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 04/26] drm/i915: Flush the execution-callbacks on retiring
  2019-06-19 13:12   ` Mika Kuoppala
@ 2019-06-19 13:18     ` Chris Wilson
  0 siblings, 0 replies; 42+ messages in thread
From: Chris Wilson @ 2019-06-19 13:18 UTC (permalink / raw)
  To: Mika Kuoppala, intel-gfx

Quoting Mika Kuoppala (2019-06-19 14:12:46)
> Chris Wilson <chris@chris-wilson.co.uk> writes:
> 
> > In the unlikely case the request completes while we regard it as not even
> > executing on the GPU (see the next patch!), we have to flush any pending
> > execution callbacks at retirement and ensure that we do not add any
> > more.
> >
> 
> I did see the next patch. Looked like a mountain.
> 
> Well we don't lose warnings and we should never see
> a precompleted request with current codebase so,

That is very true. I had a little freak out when I found the race, and
breathed a sigh of relief on realising it only affected the next patch.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2019-06-19 13:18 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-18  7:41 [PATCH 01/26] drm/i915: Keep engine alive as we retire the context Chris Wilson
2019-06-18  7:41 ` [PATCH 02/26] drm/i915: Skip shrinking already freed pages Chris Wilson
2019-06-18 11:59   ` Chris Wilson
2019-06-18 16:06   ` Mika Kuoppala
2019-06-18 16:22     ` Chris Wilson
2019-06-18  7:41 ` [PATCH 03/26] drm/i915: Stop passing I915_WAIT_LOCKED to i915_request_wait() Chris Wilson
2019-06-19 11:44   ` Mika Kuoppala
2019-06-18  7:41 ` [PATCH 04/26] drm/i915: Flush the execution-callbacks on retiring Chris Wilson
2019-06-19 13:12   ` Mika Kuoppala
2019-06-19 13:18     ` Chris Wilson
2019-06-18  7:41 ` [PATCH 05/26] drm/i915/execlists: Preempt-to-busy Chris Wilson
2019-06-18  7:41 ` [PATCH 06/26] drm/i915/execlists: Minimalistic timeslicing Chris Wilson
2019-06-18  7:41 ` [PATCH 07/26] drm/i915/execlists: Force preemption Chris Wilson
2019-06-18  7:41 ` [PATCH 08/26] drm/i915: Make the semaphore saturation mask global Chris Wilson
2019-06-19 10:45   ` Tvrtko Ursulin
2019-06-18  7:41 ` [PATCH 09/26] dma-fence: Propagate errors to dma-fence-array container Chris Wilson
2019-06-18  7:41 ` [PATCH 10/26] dma-fence: Report the composite sync_file status Chris Wilson
2019-06-18  7:41 ` [PATCH 11/26] dma-fence: Refactor signaling for manual invocation Chris Wilson
2019-06-18  7:41 ` [PATCH 12/26] dma-fence: Always execute signal callbacks Chris Wilson
2019-06-18  7:41 ` [PATCH 13/26] drm/i915: Track i915_active using debugobjects Chris Wilson
2019-06-18  7:41 ` [PATCH 14/26] drm/i915: Signal fence completion from i915_request_wait Chris Wilson
2019-06-18  7:41 ` [PATCH 15/26] drm/i915: Remove waiting & retiring from shrinker paths Chris Wilson
2019-06-18  7:41 ` [PATCH 16/26] drm/i915: Throw away the active object retirement complexity Chris Wilson
2019-06-18  7:41 ` [PATCH 17/26] drm/i915: Provide an i915_active.acquire callback Chris Wilson
2019-06-18  7:41 ` [PATCH 18/26] drm/i915: Push the i915_active.retire into a worker Chris Wilson
2019-06-18  7:41 ` [PATCH 19/26] drm/i915/overlay: Switch to using i915_active tracking Chris Wilson
2019-06-18  7:41 ` [PATCH 20/26] drm/i915: Forgo last_fence active request tracking Chris Wilson
2019-06-18  7:41 ` [PATCH 21/26] drm/i915: Extract intel_frontbuffer active tracking Chris Wilson
2019-06-18  7:41 ` [PATCH 22/26] drm/i915: Coordinate i915_active with its own mutex Chris Wilson
2019-06-18  7:41 ` [PATCH 23/26] drm/i915: Rename intel_wakeref_[is]_active Chris Wilson
2019-06-18  8:14   ` Chris Wilson
2019-06-18  7:41 ` [PATCH 24/26] drm/i915: Teach execbuffer to take the engine wakeref not GT Chris Wilson
2019-06-18  7:41 ` [PATCH 25/26] drm/i915: Replace struct_mutex for batch pool serialisation Chris Wilson
2019-06-18  7:41 ` [PATCH 26/26] drm/i915: Move idle barrier cleanup into engine-pm Chris Wilson
2019-06-18  8:57 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/26] drm/i915: Keep engine alive as we retire the context Patchwork
2019-06-18  9:09 ` ✗ Fi.CI.SPARSE: " Patchwork
2019-06-18  9:18 ` ✓ Fi.CI.BAT: success " Patchwork
2019-06-18 13:45 ` [PATCH 01/26] " Mika Kuoppala
2019-06-18 13:59   ` Chris Wilson
2019-06-18 14:03     ` Chris Wilson
2019-06-18 14:08     ` Mika Kuoppala
2019-06-18 19:15 ` ✗ Fi.CI.IGT: failure for series starting with [01/26] " Patchwork
