intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names
@ 2020-03-19  9:19 Chris Wilson
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 2/6] drm/i915/gem: Avoid gem_context->mutex for simple vma lookup Chris Wilson
                   ` (7 more replies)
  0 siblings, 8 replies; 21+ messages in thread
From: Chris Wilson @ 2020-03-19  9:19 UTC (permalink / raw)
  To: intel-gfx

%pS includes the symbol offset, which is useful for return addresses but is
noise when we are pretty-printing a known (and expected) function entry point.
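
For illustration, the difference between the two specifiers can be modelled in
plain userspace C (a hypothetical sketch; in the kernel the resolution is done
by vsprintf/kallsyms, not by a helper like this):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical model: %pS renders "name+0xoff/0xsize", while %ps renders
 * just "name". */
static void format_sym(char *buf, size_t n, const char *name,
		       unsigned long off, unsigned long size, int with_offset)
{
	if (with_offset)
		snprintf(buf, n, "%s+0x%lx/0x%lx", name, off, size); /* like %pS */
	else
		snprintf(buf, n, "%s", name);                        /* like %ps */
}
```

So a timeout hint printed with %pS might read
"timer_i915_sw_fence_wake+0x0/0x180", while %ps gives just
"timer_i915_sw_fence_wake" -- the offset is redundant for an entry point.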

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_sw_fence.c          | 2 +-
 drivers/gpu/drm/i915/selftests/i915_active.c  | 2 +-
 drivers/gpu/drm/i915/selftests/i915_request.c | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
index a3d38e089b6e..7daf81f55c90 100644
--- a/drivers/gpu/drm/i915/i915_sw_fence.c
+++ b/drivers/gpu/drm/i915/i915_sw_fence.c
@@ -421,7 +421,7 @@ static void timer_i915_sw_fence_wake(struct timer_list *t)
 	if (!fence)
 		return;
 
-	pr_notice("Asynchronous wait on fence %s:%s:%llx timed out (hint:%pS)\n",
+	pr_notice("Asynchronous wait on fence %s:%s:%llx timed out (hint:%ps)\n",
 		  cb->dma->ops->get_driver_name(cb->dma),
 		  cb->dma->ops->get_timeline_name(cb->dma),
 		  cb->dma->seqno,
diff --git a/drivers/gpu/drm/i915/selftests/i915_active.c b/drivers/gpu/drm/i915/selftests/i915_active.c
index 68bbb1580162..54080fb4af4b 100644
--- a/drivers/gpu/drm/i915/selftests/i915_active.c
+++ b/drivers/gpu/drm/i915/selftests/i915_active.c
@@ -277,7 +277,7 @@ static struct intel_engine_cs *node_to_barrier(struct active_node *it)
 
 void i915_active_print(struct i915_active *ref, struct drm_printer *m)
 {
-	drm_printf(m, "active %pS:%pS\n", ref->active, ref->retire);
+	drm_printf(m, "active %ps:%ps\n", ref->active, ref->retire);
 	drm_printf(m, "\tcount: %d\n", atomic_read(&ref->count));
 	drm_printf(m, "\tpreallocated barriers? %s\n",
 		   yesno(!llist_empty(&ref->preallocated_barriers)));
diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
index f89d9c42f1fa..7ac9616de9d8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -1233,7 +1233,7 @@ static int live_parallel_engines(void *arg)
 		struct igt_live_test t;
 		unsigned int idx;
 
-		snprintf(name, sizeof(name), "%pS", fn);
+		snprintf(name, sizeof(name), "%ps", fn);
 		err = igt_live_test_begin(&t, i915, __func__, name);
 		if (err)
 			break;
-- 
2.20.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Intel-gfx] [PATCH 2/6] drm/i915/gem: Avoid gem_context->mutex for simple vma lookup
  2020-03-19  9:19 [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names Chris Wilson
@ 2020-03-19  9:19 ` Chris Wilson
  2020-03-19 14:20   ` Tvrtko Ursulin
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels Chris Wilson
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2020-03-19  9:19 UTC (permalink / raw)
  To: intel-gfx

As we store the handle lookup inside a radix tree, we do not need the
gem_context->mutex until we need to insert our lookup into the common
radix tree. This takes a small bit of rearranging to ensure that the lut
we insert into the tree is ready prior to actually inserting it (as soon
as it is exposed via the radix tree, it is visible to any other
submission).

v2: For brownie points, remove the goto spaghetti.
v3: Tighten up the closed-handle checks.
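
The shape of the resulting lookup can be sketched in plain userspace C (a
simplified model with made-up names; the real code does an RCU-protected
radix-tree lookup on the fast path, and the occupancy check here stands in
for the work done under ctx->mutex):

```c
#include <assert.h>
#include <stddef.h>

#define NSLOTS	64
#define ENOENT	2
#define EEXIST	17

struct cache { void *slot[NSLOTS]; };	/* stands in for ctx->handles_vma */

/* Fast path: lockless lookup (the RCU read side in the real code). */
static void *cache_lookup(struct cache *c, unsigned int handle)
{
	return handle < NSLOTS ? c->slot[handle] : NULL;
}

/* Slow path: insert under the lock; fail with -EEXIST if we raced. */
static int cache_insert(struct cache *c, unsigned int handle, void *vma)
{
	if (handle >= NSLOTS)
		return -ENOENT;
	if (c->slot[handle])	/* someone beat us to it; caller retries */
		return -EEXIST;
	c->slot[handle] = vma;
	return 0;
}
```

On -EEXIST the caller simply loops back to the lockless lookup, which is
the role of the do { } while (1) in eb_lookup_vma() below.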

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 136 +++++++++++-------
 1 file changed, 87 insertions(+), 49 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index d3f4f28e9468..042a9ccf348f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -481,7 +481,7 @@ eb_add_vma(struct i915_execbuffer *eb,
 
 	GEM_BUG_ON(i915_vma_is_closed(vma));
 
-	ev->vma = i915_vma_get(vma);
+	ev->vma = vma;
 	ev->exec = entry;
 	ev->flags = entry->flags;
 
@@ -728,77 +728,117 @@ static int eb_select_context(struct i915_execbuffer *eb)
 	return 0;
 }
 
-static int eb_lookup_vmas(struct i915_execbuffer *eb)
+static int __eb_add_lut(struct i915_execbuffer *eb,
+			u32 handle, struct i915_vma *vma)
 {
-	struct radix_tree_root *handles_vma = &eb->gem_context->handles_vma;
-	struct drm_i915_gem_object *obj;
-	unsigned int i, batch;
+	struct i915_gem_context *ctx = eb->gem_context;
+	struct i915_lut_handle *lut;
 	int err;
 
-	if (unlikely(i915_gem_context_is_closed(eb->gem_context)))
-		return -ENOENT;
+	lut = i915_lut_handle_alloc();
+	if (unlikely(!lut))
+		return -ENOMEM;
 
-	INIT_LIST_HEAD(&eb->relocs);
-	INIT_LIST_HEAD(&eb->unbound);
+	i915_vma_get(vma);
+	if (!atomic_fetch_inc(&vma->open_count))
+		i915_vma_reopen(vma);
+	lut->handle = handle;
+	lut->ctx = ctx;
+
+	/* Check that the context hasn't been closed in the meantime */
+	err = -EINTR;
+	if (!mutex_lock_interruptible(&ctx->mutex)) {
+		err = -ENOENT;
+		if (likely(!i915_gem_context_is_closed(ctx)))
+			err = radix_tree_insert(&ctx->handles_vma, handle, vma);
+		if (err == 0) { /* And nor has this handle */
+			struct drm_i915_gem_object *obj = vma->obj;
+
+			i915_gem_object_lock(obj);
+			if (idr_find(&eb->file->object_idr, handle) == obj) {
+				list_add(&lut->obj_link, &obj->lut_list);
+			} else {
+				radix_tree_delete(&ctx->handles_vma, handle);
+				err = -ENOENT;
+			}
+			i915_gem_object_unlock(obj);
+		}
+		mutex_unlock(&ctx->mutex);
+	}
+	if (unlikely(err))
+		goto err;
 
-	batch = eb_batch_index(eb);
+	return 0;
 
-	for (i = 0; i < eb->buffer_count; i++) {
-		u32 handle = eb->exec[i].handle;
-		struct i915_lut_handle *lut;
+err:
+	atomic_dec(&vma->open_count);
+	i915_vma_put(vma);
+	i915_lut_handle_free(lut);
+	return err;
+}
+
+static struct i915_vma *eb_lookup_vma(struct i915_execbuffer *eb, u32 handle)
+{
+	do {
+		struct drm_i915_gem_object *obj;
 		struct i915_vma *vma;
+		int err;
 
-		vma = radix_tree_lookup(handles_vma, handle);
+		rcu_read_lock();
+		vma = radix_tree_lookup(&eb->gem_context->handles_vma, handle);
+		if (likely(vma))
+			vma = i915_vma_tryget(vma);
+		rcu_read_unlock();
 		if (likely(vma))
-			goto add_vma;
+			return vma;
 
 		obj = i915_gem_object_lookup(eb->file, handle);
-		if (unlikely(!obj)) {
-			err = -ENOENT;
-			goto err_vma;
-		}
+		if (unlikely(!obj))
+			return ERR_PTR(-ENOENT);
 
 		vma = i915_vma_instance(obj, eb->context->vm, NULL);
 		if (IS_ERR(vma)) {
-			err = PTR_ERR(vma);
-			goto err_obj;
+			i915_gem_object_put(obj);
+			return vma;
 		}
 
-		lut = i915_lut_handle_alloc();
-		if (unlikely(!lut)) {
-			err = -ENOMEM;
-			goto err_obj;
-		}
+		err = __eb_add_lut(eb, handle, vma);
+		if (likely(!err))
+			return vma;
 
-		err = radix_tree_insert(handles_vma, handle, vma);
-		if (unlikely(err)) {
-			i915_lut_handle_free(lut);
-			goto err_obj;
-		}
+		i915_gem_object_put(obj);
+		if (err != -EEXIST)
+			return ERR_PTR(err);
+	} while (1);
+}
 
-		/* transfer ref to lut */
-		if (!atomic_fetch_inc(&vma->open_count))
-			i915_vma_reopen(vma);
-		lut->handle = handle;
-		lut->ctx = eb->gem_context;
+static int eb_lookup_vmas(struct i915_execbuffer *eb)
+{
+	unsigned int batch = eb_batch_index(eb);
+	unsigned int i;
+	int err = 0;
 
-		i915_gem_object_lock(obj);
-		list_add(&lut->obj_link, &obj->lut_list);
-		i915_gem_object_unlock(obj);
+	INIT_LIST_HEAD(&eb->relocs);
+	INIT_LIST_HEAD(&eb->unbound);
+
+	for (i = 0; i < eb->buffer_count; i++) {
+		struct i915_vma *vma;
+
+		vma = eb_lookup_vma(eb, eb->exec[i].handle);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			break;
+		}
 
-add_vma:
 		err = eb_validate_vma(eb, &eb->exec[i], vma);
-		if (unlikely(err))
-			goto err_vma;
+		if (unlikely(err)) {
+			i915_vma_put(vma);
+			break;
+		}
 
 		eb_add_vma(eb, i, batch, vma);
 	}
 
-	return 0;
-
-err_obj:
-	i915_gem_object_put(obj);
-err_vma:
 	eb->vma[i].vma = NULL;
 	return err;
 }
@@ -1494,9 +1534,7 @@ static int eb_relocate(struct i915_execbuffer *eb)
 {
 	int err;
 
-	mutex_lock(&eb->gem_context->mutex);
 	err = eb_lookup_vmas(eb);
-	mutex_unlock(&eb->gem_context->mutex);
 	if (err)
 		return err;
 
-- 
2.20.1


* [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels
  2020-03-19  9:19 [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names Chris Wilson
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 2/6] drm/i915/gem: Avoid gem_context->mutex for simple vma lookup Chris Wilson
@ 2020-03-19  9:19 ` Chris Wilson
  2020-03-19 14:31   ` Tvrtko Ursulin
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 4/6] drm/i915/gem: Wait until the context is finally retired before releasing engines Chris Wilson
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2020-03-19  9:19 UTC (permalink / raw)
  To: intel-gfx

Currently, we only combine a sentinel request with a max-priority barrier,
such that a sentinel request is always in ELSP[0] with nothing following
it. However, we will want to create similar ELSP[] submissions providing
a full barrier in the submission queue, but without forcing maximum
priority. As such, I915_FENCE_FLAG_SENTINEL takes on the single-submission
property, and we can remove the GVT special casing.
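
The submission rule after this change reduces to a single flag test,
modelled here in userspace C (simplified; in the driver the flag lives in
rq->fence.flags and the check sits in execlists_dequeue()):

```c
#include <assert.h>
#include <stdbool.h>

#define FLAG_SENTINEL	(1u << 0)	/* stands in for I915_FENCE_FLAG_SENTINEL */

struct request { unsigned int flags; };

/* If either request carries the sentinel flag, the pair must not be
 * coalesced into the same ELSP submission: a sentinel is a full barrier. */
static bool has_sentinel(const struct request *prev, const struct request *next)
{
	return ((prev->flags | next->flags) & FLAG_SENTINEL) != 0;
}
```

Note the check is symmetric: a sentinel blocks merging whether it is the
request already in the port or the one being considered.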

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_context.h       | 24 +++++++-------
 drivers/gpu/drm/i915/gt/intel_context_types.h |  4 +--
 drivers/gpu/drm/i915/gt/intel_lrc.c           | 33 +++++--------------
 drivers/gpu/drm/i915/gvt/scheduler.c          |  7 ++--
 4 files changed, 26 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index 18efad255124..ee5d47165c12 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -198,18 +198,6 @@ static inline bool intel_context_set_banned(struct intel_context *ce)
 	return test_and_set_bit(CONTEXT_BANNED, &ce->flags);
 }
 
-static inline bool
-intel_context_force_single_submission(const struct intel_context *ce)
-{
-	return test_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ce->flags);
-}
-
-static inline void
-intel_context_set_single_submission(struct intel_context *ce)
-{
-	__set_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ce->flags);
-}
-
 static inline bool
 intel_context_nopreempt(const struct intel_context *ce)
 {
@@ -228,6 +216,18 @@ intel_context_clear_nopreempt(struct intel_context *ce)
 	clear_bit(CONTEXT_NOPREEMPT, &ce->flags);
 }
 
+static inline bool
+intel_context_is_gvt(const struct intel_context *ce)
+{
+	return test_bit(CONTEXT_GVT, &ce->flags);
+}
+
+static inline void
+intel_context_set_gvt(struct intel_context *ce)
+{
+	set_bit(CONTEXT_GVT, &ce->flags);
+}
+
 static inline u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
 {
 	const u32 period =
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index 0f3b68b95c56..fd2703efc10c 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -64,8 +64,8 @@ struct intel_context {
 #define CONTEXT_VALID_BIT		2
 #define CONTEXT_USE_SEMAPHORES		3
 #define CONTEXT_BANNED			4
-#define CONTEXT_FORCE_SINGLE_SUBMISSION	5
-#define CONTEXT_NOPREEMPT		6
+#define CONTEXT_NOPREEMPT		5
+#define CONTEXT_GVT			6
 
 	u32 *lrc_reg_state;
 	u64 lrc_desc;
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 112531b29f59..f0c4084c5b9a 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1579,22 +1579,10 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
 		writel(EL_CTRL_LOAD, execlists->ctrl_reg);
 }
 
-static bool ctx_single_port_submission(const struct intel_context *ce)
-{
-	return (IS_ENABLED(CONFIG_DRM_I915_GVT) &&
-		intel_context_force_single_submission(ce));
-}
-
 static bool can_merge_ctx(const struct intel_context *prev,
 			  const struct intel_context *next)
 {
-	if (prev != next)
-		return false;
-
-	if (ctx_single_port_submission(prev))
-		return false;
-
-	return true;
+	return prev == next;
 }
 
 static unsigned long i915_request_flags(const struct i915_request *rq)
@@ -1844,6 +1832,12 @@ static inline void clear_ports(struct i915_request **ports, int count)
 	memset_p((void **)ports, NULL, count);
 }
 
+static bool has_sentinel(struct i915_request *prev, struct i915_request *next)
+{
+	return (i915_request_flags(prev) | i915_request_flags(next)) &
+		BIT(I915_FENCE_FLAG_SENTINEL);
+}
+
 static void execlists_dequeue(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -2125,18 +2119,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 				if (last->context == rq->context)
 					goto done;
 
-				if (i915_request_has_sentinel(last))
-					goto done;
-
-				/*
-				 * If GVT overrides us we only ever submit
-				 * port[0], leaving port[1] empty. Note that we
-				 * also have to be careful that we don't queue
-				 * the same context (even though a different
-				 * request) to the second port.
-				 */
-				if (ctx_single_port_submission(last->context) ||
-				    ctx_single_port_submission(rq->context))
+				if (has_sentinel(last, rq))
 					goto done;
 
 				merge = false;
diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index 1c95bf8cbed0..4fccf4b194b0 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -204,9 +204,9 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload)
 	return 0;
 }
 
-static inline bool is_gvt_request(struct i915_request *rq)
+static inline bool is_gvt_request(const struct i915_request *rq)
 {
-	return intel_context_force_single_submission(rq->context);
+	return intel_context_is_gvt(rq->context);
 }
 
 static void save_ring_hw_state(struct intel_vgpu *vgpu,
@@ -401,6 +401,7 @@ intel_gvt_workload_req_alloc(struct intel_vgpu_workload *workload)
 		return PTR_ERR(rq);
 	}
 
+	__set_bit(I915_FENCE_FLAG_SENTINEL, &rq->fence.flags);
 	workload->req = i915_request_get(rq);
 	return 0;
 }
@@ -1226,7 +1227,7 @@ int intel_vgpu_setup_submission(struct intel_vgpu *vgpu)
 
 		i915_vm_put(ce->vm);
 		ce->vm = i915_vm_get(&ppgtt->vm);
-		intel_context_set_single_submission(ce);
+		intel_context_set_gvt(ce);
 
 		/* Max ring buffer size */
 		if (!intel_uc_wants_guc_submission(&engine->gt->uc)) {
-- 
2.20.1


* [Intel-gfx] [PATCH 4/6] drm/i915/gem: Wait until the context is finally retired before releasing engines
  2020-03-19  9:19 [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names Chris Wilson
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 2/6] drm/i915/gem: Avoid gem_context->mutex for simple vma lookup Chris Wilson
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels Chris Wilson
@ 2020-03-19  9:19 ` Chris Wilson
  2020-03-19 14:36   ` Tvrtko Ursulin
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 5/6] drm/i915: Use explicit flag to mark unreachable intel_context Chris Wilson
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2020-03-19  9:19 UTC (permalink / raw)
  To: intel-gfx

If we want to percolate information back from the HW, up through the GEM
context, we need to wait until the intel_context is scheduled out for
the last time. This is handled by the retirement of the intel_context's
barrier, i.e. by listening to the pulse after the notional unpin.

To accommodate this, we need to be able to flush the i915_active's
barriers before awaiting on them. However, this also requires us to
ensure the context is unpinned *before* the barrier request can be
signaled, so mark it as a sentinel.
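
The split of the old I915_ACTIVE_AWAIT_ALL flag into two can be sketched
as follows (a toy userspace model with hypothetical names; the real code
walks the i915_active rbtree and also always waits on the exclusive
fence, which this model omits):

```c
#include <assert.h>

#define AWAIT_ACTIVE	(1u << 0)	/* wait on the tracked fences */
#define AWAIT_BARRIER	(1u << 1)	/* flush and wait on barriers too */

/* Toy stand-in for an i915_active: counts of outstanding waiters. */
struct tracker { int fences; int barriers; };

/* Model of await_active() after the split: barriers are only folded into
 * the wait when the caller passes AWAIT_BARRIER, so callers of the old
 * AWAIT_ALL keep their behaviour by passing AWAIT_ACTIVE alone.
 * Returns how many fences would be awaited. */
static int await_active(const struct tracker *t, unsigned int flags)
{
	int awaited = 0;

	if (flags & AWAIT_ACTIVE) {
		if (flags & AWAIT_BARRIER)
			awaited += t->barriers; /* barriers flushed first */
		awaited += t->fences;
	}
	return awaited;
}
```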

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c | 17 ++++------
 drivers/gpu/drm/i915/i915_active.c          | 37 ++++++++++++++++-----
 drivers/gpu/drm/i915/i915_active.h          |  3 +-
 3 files changed, 37 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index c0e476fcd1fa..05fed8797d37 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -570,23 +570,20 @@ static void engines_idle_release(struct i915_gem_context *ctx,
 	engines->ctx = i915_gem_context_get(ctx);
 
 	for_each_gem_engine(ce, engines, it) {
-		struct dma_fence *fence;
-		int err = 0;
+		int err;
 
 		/* serialises with execbuf */
 		RCU_INIT_POINTER(ce->gem_context, NULL);
 		if (!intel_context_pin_if_active(ce))
 			continue;
 
-		fence = i915_active_fence_get(&ce->timeline->last_request);
-		if (fence) {
-			err = i915_sw_fence_await_dma_fence(&engines->fence,
-							    fence, 0,
-							    GFP_KERNEL);
-			dma_fence_put(fence);
-		}
+		/* Wait until context is finally scheduled out and retired */
+		err = i915_sw_fence_await_active(&engines->fence,
+						 &ce->active,
+						 I915_ACTIVE_AWAIT_ACTIVE |
+						 I915_ACTIVE_AWAIT_BARRIER);
 		intel_context_unpin(ce);
-		if (err < 0)
+		if (err)
 			goto kill;
 	}
 
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index c4048628188a..da7d35f66dd0 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -518,19 +518,18 @@ int i915_active_wait(struct i915_active *ref)
 	return 0;
 }
 
-static int __await_active(struct i915_active_fence *active,
-			  int (*fn)(void *arg, struct dma_fence *fence),
-			  void *arg)
+static int __await_fence(struct i915_active_fence *active,
+			 int (*fn)(void *arg, struct dma_fence *fence),
+			 void *arg)
 {
 	struct dma_fence *fence;
+	int err;
 
-	if (is_barrier(active)) /* XXX flush the barrier? */
+	if (is_barrier(active))
 		return 0;
 
 	fence = i915_active_fence_get(active);
 	if (fence) {
-		int err;
-
 		err = fn(arg, fence);
 		dma_fence_put(fence);
 		if (err < 0)
@@ -540,6 +539,22 @@ static int __await_active(struct i915_active_fence *active,
 	return 0;
 }
 
+static int __await_active(struct active_node *it,
+			  unsigned int flags,
+			  int (*fn)(void *arg, struct dma_fence *fence),
+			  void *arg)
+{
+	int err;
+
+	if (flags & I915_ACTIVE_AWAIT_BARRIER) {
+		err = flush_barrier(it);
+		if (err)
+			return err;
+	}
+
+	return __await_fence(&it->base, fn, arg);
+}
+
 static int await_active(struct i915_active *ref,
 			unsigned int flags,
 			int (*fn)(void *arg, struct dma_fence *fence),
@@ -549,16 +564,17 @@ static int await_active(struct i915_active *ref,
 
 	/* We must always wait for the exclusive fence! */
 	if (rcu_access_pointer(ref->excl.fence)) {
-		err = __await_active(&ref->excl, fn, arg);
+		err = __await_fence(&ref->excl, fn, arg);
 		if (err)
 			return err;
 	}
 
-	if (flags & I915_ACTIVE_AWAIT_ALL && i915_active_acquire_if_busy(ref)) {
+	if (flags & I915_ACTIVE_AWAIT_ACTIVE &&
+	    i915_active_acquire_if_busy(ref)) {
 		struct active_node *it, *n;
 
 		rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
-			err = __await_active(&it->base, fn, arg);
+			err = __await_active(it, flags, fn, arg);
 			if (err)
 				break;
 		}
@@ -852,6 +868,9 @@ void i915_request_add_active_barriers(struct i915_request *rq)
 		list_add_tail((struct list_head *)node, &rq->fence.cb_list);
 	}
 	spin_unlock_irqrestore(&rq->lock, flags);
+
+	/* Ensure that all who came before the barrier are flushed out */
+	__set_bit(I915_FENCE_FLAG_SENTINEL, &rq->fence.flags);
 }
 
 /*
diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
index b3282ae7913c..9697592235fa 100644
--- a/drivers/gpu/drm/i915/i915_active.h
+++ b/drivers/gpu/drm/i915/i915_active.h
@@ -189,7 +189,8 @@ int i915_sw_fence_await_active(struct i915_sw_fence *fence,
 int i915_request_await_active(struct i915_request *rq,
 			      struct i915_active *ref,
 			      unsigned int flags);
-#define I915_ACTIVE_AWAIT_ALL BIT(0)
+#define I915_ACTIVE_AWAIT_ACTIVE BIT(0)
+#define I915_ACTIVE_AWAIT_BARRIER BIT(1)
 
 int i915_active_acquire(struct i915_active *ref);
 bool i915_active_acquire_if_busy(struct i915_active *ref);
-- 
2.20.1


* [Intel-gfx] [PATCH 5/6] drm/i915: Use explicit flag to mark unreachable intel_context
  2020-03-19  9:19 [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names Chris Wilson
                   ` (2 preceding siblings ...)
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 4/6] drm/i915/gem: Wait until the context is finally retired before releasing engines Chris Wilson
@ 2020-03-19  9:19 ` Chris Wilson
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 6/6] drm/i915/gt: Cancel a hung context if already closed Chris Wilson
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 21+ messages in thread
From: Chris Wilson @ 2020-03-19  9:19 UTC (permalink / raw)
  To: intel-gfx

I need to keep the GEM context around a bit longer, so add an explicit
flag for syncing execbuf with closed/abandoned contexts.

v2:
 * Use already available context flags. (Chris)
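
The flag itself is just one more context bit; the set/test pair can be
modelled in a few lines (simplified userspace C; the driver uses the
atomic set_bit()/test_bit() helpers on ce->flags):

```c
#include <assert.h>
#include <stdbool.h>

#define CONTEXT_CLOSED_BIT	3

struct context_model { unsigned long flags; };

/* Marking the context closed: set_bit(CONTEXT_CLOSED_BIT, ...) in the driver. */
static void context_close(struct context_model *ce)
{
	ce->flags |= 1ul << CONTEXT_CLOSED_BIT;
}

/* The execbuf-side check: test_bit(CONTEXT_CLOSED_BIT, ...) in the driver. */
static bool context_is_closed(const struct context_model *ce)
{
	return (ce->flags & (1ul << CONTEXT_CLOSED_BIT)) != 0;
}
```

Unlike the old RCU_INIT_POINTER(ce->gem_context, NULL) trick, this leaves
the gem_context pointer intact for later use while still letting execbuf
detect the closed state.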

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gem/i915_gem_context.c    | 2 +-
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 2 +-
 drivers/gpu/drm/i915/gt/intel_context.h        | 5 +++++
 drivers/gpu/drm/i915/gt/intel_context_types.h  | 9 +++++----
 4 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 05fed8797d37..1280b627adcf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -573,7 +573,7 @@ static void engines_idle_release(struct i915_gem_context *ctx,
 		int err;
 
 		/* serialises with execbuf */
-		RCU_INIT_POINTER(ce->gem_context, NULL);
+		set_bit(CONTEXT_CLOSED_BIT, &ce->flags);
 		if (!intel_context_pin_if_active(ce))
 			continue;
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 042a9ccf348f..5c6bcf2b4488 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -2354,7 +2354,7 @@ static void eb_request_add(struct i915_execbuffer *eb)
 	prev = __i915_request_commit(rq);
 
 	/* Check that the context wasn't destroyed before submission */
-	if (likely(rcu_access_pointer(eb->context->gem_context))) {
+	if (likely(!intel_context_is_closed(eb->context))) {
 		attr = eb->gem_context->sched;
 
 		/*
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index ee5d47165c12..02df04f76547 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -173,6 +173,11 @@ static inline bool intel_context_is_barrier(const struct intel_context *ce)
 	return test_bit(CONTEXT_BARRIER_BIT, &ce->flags);
 }
 
+static inline bool intel_context_is_closed(const struct intel_context *ce)
+{
+	return test_bit(CONTEXT_CLOSED_BIT, &ce->flags);
+}
+
 static inline bool intel_context_use_semaphores(const struct intel_context *ce)
 {
 	return test_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
index fd2703efc10c..418516fd9b9e 100644
--- a/drivers/gpu/drm/i915/gt/intel_context_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
@@ -62,10 +62,11 @@ struct intel_context {
 #define CONTEXT_BARRIER_BIT		0
 #define CONTEXT_ALLOC_BIT		1
 #define CONTEXT_VALID_BIT		2
-#define CONTEXT_USE_SEMAPHORES		3
-#define CONTEXT_BANNED			4
-#define CONTEXT_NOPREEMPT		5
-#define CONTEXT_GVT			6
+#define CONTEXT_CLOSED_BIT		3
+#define CONTEXT_USE_SEMAPHORES		4
+#define CONTEXT_BANNED			5
+#define CONTEXT_NOPREEMPT		6
+#define CONTEXT_GVT			7
 
 	u32 *lrc_reg_state;
 	u64 lrc_desc;
-- 
2.20.1


* [Intel-gfx] [PATCH 6/6] drm/i915/gt: Cancel a hung context if already closed
  2020-03-19  9:19 [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names Chris Wilson
                   ` (3 preceding siblings ...)
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 5/6] drm/i915: Use explicit flag to mark unreachable intel_context Chris Wilson
@ 2020-03-19  9:19 ` Chris Wilson
  2020-03-19 14:40   ` Tvrtko Ursulin
  2020-03-19 10:20 ` [Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/6] drm/i915: Prefer '%ps' for printing function symbol names Patchwork
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2020-03-19  9:19 UTC (permalink / raw)
  To: intel-gfx

Use the restored ability to check if a context is closed to decide
whether or not to immediately ban the context from further execution
after a hang.

Fixes: be90e344836a ("drm/i915/gt: Cancel banned contexts after GT reset")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_reset.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index 9a15bdf31c7f..003f26b42998 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -88,6 +88,11 @@ static bool mark_guilty(struct i915_request *rq)
 	bool banned;
 	int i;
 
+	if (intel_context_is_closed(rq->context)) {
+		intel_context_set_banned(rq->context);
+		return true;
+	}
+
 	rcu_read_lock();
 	ctx = rcu_dereference(rq->context->gem_context);
 	if (ctx && !kref_get_unless_zero(&ctx->ref))
-- 
2.20.1


* [Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/6] drm/i915: Prefer '%ps' for printing function symbol names
  2020-03-19  9:19 [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names Chris Wilson
                   ` (4 preceding siblings ...)
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 6/6] drm/i915/gt: Cancel a hung context if already closed Chris Wilson
@ 2020-03-19 10:20 ` Patchwork
  2020-03-19 11:39 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
  2020-03-19 13:53 ` [Intel-gfx] [PATCH 1/6] " Tvrtko Ursulin
  7 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2020-03-19 10:20 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/6] drm/i915: Prefer '%ps' for printing function symbol names
URL   : https://patchwork.freedesktop.org/series/74865/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8157 -> Patchwork_17022
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/index.html

Known issues
------------

  Here are the changes found in Patchwork_17022 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@execlists:
    - fi-icl-y:           [PASS][1] -> [DMESG-FAIL][2] ([fdo#108569])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/fi-icl-y/igt@i915_selftest@live@execlists.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/fi-icl-y/igt@i915_selftest@live@execlists.html
    - fi-bxt-dsi:         [PASS][3] -> [INCOMPLETE][4] ([fdo#103927] / [i915#1430] / [i915#656])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/fi-bxt-dsi/igt@i915_selftest@live@execlists.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/fi-bxt-dsi/igt@i915_selftest@live@execlists.html

  
#### Possible fixes ####

  * igt@i915_pm_rpm@module-reload:
    - fi-icl-dsi:         [INCOMPLETE][5] ([i915#189]) -> [PASS][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/fi-icl-dsi/igt@i915_pm_rpm@module-reload.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/fi-icl-dsi/igt@i915_pm_rpm@module-reload.html

  * igt@kms_cursor_legacy@basic-flip-before-cursor-varying-size:
    - fi-kbl-soraka:      [FAIL][7] ([IGT#5]) -> [PASS][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/fi-kbl-soraka/igt@kms_cursor_legacy@basic-flip-before-cursor-varying-size.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/fi-kbl-soraka/igt@kms_cursor_legacy@basic-flip-before-cursor-varying-size.html

  
#### Warnings ####

  * igt@kms_chamelium@hdmi-hpd-fast:
    - fi-kbl-7500u:       [FAIL][9] ([i915#323]) -> [FAIL][10] ([fdo#111407])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html

  
  [IGT#5]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/5
  [fdo#103927]: https://bugs.freedesktop.org/show_bug.cgi?id=103927
  [fdo#108569]: https://bugs.freedesktop.org/show_bug.cgi?id=108569
  [fdo#111407]: https://bugs.freedesktop.org/show_bug.cgi?id=111407
  [i915#1430]: https://gitlab.freedesktop.org/drm/intel/issues/1430
  [i915#189]: https://gitlab.freedesktop.org/drm/intel/issues/189
  [i915#323]: https://gitlab.freedesktop.org/drm/intel/issues/323
  [i915#656]: https://gitlab.freedesktop.org/drm/intel/issues/656


Participating hosts (41 -> 36)
------------------------------

  Additional (2): fi-skl-lmem fi-bwr-2160 
  Missing    (7): fi-byt-j1900 fi-hsw-peppy fi-byt-squawks fi-bsw-cyan fi-ilk-650 fi-gdg-551 fi-snb-2600 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8157 -> Patchwork_17022

  CI-20190529: 20190529
  CI_DRM_8157: 4f297a639d15ec6c293b74ff0904de6146b18e4f @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5522: bd2b01af69c9720d54e68a8702a23e4ff3637746 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17022: 5ac96f1320c04c2e34bbc577ba9c130c2f5aabd3 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

5ac96f1320c0 drm/i915/gt: Cancel a hung context if already closed
24264f0110bf drm/i915: Use explicit flag to mark unreachable intel_context
0d627b859e10 drm/i915/gem: Wait until the context is finally retired before releasing engines
41e5d3f53cb8 drm/i915/execlists: Force single submission for sentinels
bfbf43d0da6a drm/i915/gem: Avoid gem_context->mutex for simple vma lookup
a8c2d338a0df drm/i915: Prefer '%ps' for printing function symbol names

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/index.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [Intel-gfx] ✗ Fi.CI.IGT: failure for series starting with [1/6] drm/i915: Prefer '%ps' for printing function symbol names
  2020-03-19  9:19 [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names Chris Wilson
                   ` (5 preceding siblings ...)
  2020-03-19 10:20 ` [Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/6] drm/i915: Prefer '%ps' for printing function symbol names Patchwork
@ 2020-03-19 11:39 ` Patchwork
  2020-03-19 13:53 ` [Intel-gfx] [PATCH 1/6] " Tvrtko Ursulin
  7 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2020-03-19 11:39 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/6] drm/i915: Prefer '%ps' for printing function symbol names
URL   : https://patchwork.freedesktop.org/series/74865/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_8157_full -> Patchwork_17022_full
==================================================================

Summary
-------

  **FAILURE**

  Serious unknown changes introduced by Patchwork_17022_full must be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_17022_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_17022_full:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_ctx_persistence@engines-cleanup@vecs0:
    - shard-tglb:         [PASS][1] -> [FAIL][2] +32 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-tglb2/igt@gem_ctx_persistence@engines-cleanup@vecs0.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-tglb1/igt@gem_ctx_persistence@engines-cleanup@vecs0.html

  * igt@gem_ctx_persistence@engines-hostile@bcs0:
    - shard-kbl:          [PASS][3] -> [FAIL][4] +28 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl6/igt@gem_ctx_persistence@engines-hostile@bcs0.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl3/igt@gem_ctx_persistence@engines-hostile@bcs0.html

  * igt@gem_ctx_persistence@engines-queued@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][5] +1 similar issue
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb4/igt@gem_ctx_persistence@engines-queued@vcs1.html

  * igt@gem_ctx_persistence@legacy-engines-hostile@bsd:
    - shard-iclb:         [PASS][6] -> [FAIL][7] +25 similar issues
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb6/igt@gem_ctx_persistence@legacy-engines-hostile@bsd.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb6/igt@gem_ctx_persistence@legacy-engines-hostile@bsd.html

  * igt@gem_ctx_persistence@legacy-engines-queued@vebox:
    - shard-skl:          [PASS][8] -> [FAIL][9] +20 similar issues
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl3/igt@gem_ctx_persistence@legacy-engines-queued@vebox.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl9/igt@gem_ctx_persistence@legacy-engines-queued@vebox.html

  * igt@gem_ctx_persistence@replace@vcs0:
    - shard-apl:          [PASS][10] -> [FAIL][11] +28 similar issues
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl4/igt@gem_ctx_persistence@replace@vcs0.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl8/igt@gem_ctx_persistence@replace@vcs0.html

  * igt@gem_ctx_persistence@saturated-hostile@vecs0:
    - shard-glk:          [PASS][12] -> [FAIL][13] +20 similar issues
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk8/igt@gem_ctx_persistence@saturated-hostile@vecs0.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk2/igt@gem_ctx_persistence@saturated-hostile@vecs0.html

  * igt@gem_exec_schedule@preempt-hang-vebox:
    - shard-tglb:         [PASS][14] -> [INCOMPLETE][15] +8 similar issues
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-tglb1/igt@gem_exec_schedule@preempt-hang-vebox.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-tglb8/igt@gem_exec_schedule@preempt-hang-vebox.html
    - shard-skl:          [PASS][16] -> [INCOMPLETE][17] +8 similar issues
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl3/igt@gem_exec_schedule@preempt-hang-vebox.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl9/igt@gem_exec_schedule@preempt-hang-vebox.html

  * igt@gem_exec_schedule@preemptive-hang-render:
    - shard-iclb:         NOTRUN -> [INCOMPLETE][18] +1 similar issue
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb7/igt@gem_exec_schedule@preemptive-hang-render.html
    - shard-kbl:          [PASS][19] -> [INCOMPLETE][20] +10 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl4/igt@gem_exec_schedule@preemptive-hang-render.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl1/igt@gem_exec_schedule@preemptive-hang-render.html

  * igt@gem_exec_schedule@preemptive-hang-vebox:
    - shard-iclb:         [PASS][21] -> [INCOMPLETE][22] +3 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb3/igt@gem_exec_schedule@preemptive-hang-vebox.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb2/igt@gem_exec_schedule@preemptive-hang-vebox.html

  
#### Warnings ####

  * igt@gem_exec_schedule@preempt-hang-bsd1:
    - shard-iclb:         [SKIP][23] ([fdo#109276]) -> [INCOMPLETE][24] +1 similar issue
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb8/igt@gem_exec_schedule@preempt-hang-bsd1.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb4/igt@gem_exec_schedule@preempt-hang-bsd1.html

  * igt@gem_exec_schedule@preemptive-hang-bsd:
    - shard-iclb:         [SKIP][25] ([fdo#112146]) -> [INCOMPLETE][26]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb4/igt@gem_exec_schedule@preemptive-hang-bsd.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb8/igt@gem_exec_schedule@preemptive-hang-bsd.html

  
Known issues
------------

  Here are the changes found in Patchwork_17022_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_exec@basic-nohangcheck:
    - shard-glk:          [PASS][27] -> [FAIL][28] ([i915#1166])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk1/igt@gem_ctx_exec@basic-nohangcheck.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk3/igt@gem_ctx_exec@basic-nohangcheck.html
    - shard-snb:          [PASS][29] -> [FAIL][30] ([i915#1379])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-snb4/igt@gem_ctx_exec@basic-nohangcheck.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-snb4/igt@gem_ctx_exec@basic-nohangcheck.html
    - shard-hsw:          [PASS][31] -> [FAIL][32] ([i915#1379])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-hsw5/igt@gem_ctx_exec@basic-nohangcheck.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-hsw2/igt@gem_ctx_exec@basic-nohangcheck.html
    - shard-kbl:          [PASS][33] -> [FAIL][34] ([i915#1166])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl2/igt@gem_ctx_exec@basic-nohangcheck.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl2/igt@gem_ctx_exec@basic-nohangcheck.html
    - shard-apl:          [PASS][35] -> [FAIL][36] ([i915#1166])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl1/igt@gem_ctx_exec@basic-nohangcheck.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl3/igt@gem_ctx_exec@basic-nohangcheck.html
    - shard-skl:          [PASS][37] -> [FAIL][38] ([i915#1166])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl3/igt@gem_ctx_exec@basic-nohangcheck.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl2/igt@gem_ctx_exec@basic-nohangcheck.html

  * igt@gem_ctx_persistence@close-replace-race:
    - shard-tglb:         [PASS][39] -> [TIMEOUT][40] ([i915#1340])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-tglb6/igt@gem_ctx_persistence@close-replace-race.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-tglb7/igt@gem_ctx_persistence@close-replace-race.html
    - shard-iclb:         [PASS][41] -> [TIMEOUT][42] ([i915#1340])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb2/igt@gem_ctx_persistence@close-replace-race.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb5/igt@gem_ctx_persistence@close-replace-race.html
    - shard-apl:          [PASS][43] -> [TIMEOUT][44] ([i915#1340])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl6/igt@gem_ctx_persistence@close-replace-race.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl1/igt@gem_ctx_persistence@close-replace-race.html
    - shard-skl:          [PASS][45] -> [TIMEOUT][46] ([i915#1340])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl9/igt@gem_ctx_persistence@close-replace-race.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl7/igt@gem_ctx_persistence@close-replace-race.html

  * igt@gem_ctx_persistence@engines-mixed-process@bcs0:
    - shard-kbl:          [PASS][47] -> [INCOMPLETE][48] ([i915#1197] / [i915#1239])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl2/igt@gem_ctx_persistence@engines-mixed-process@bcs0.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl2/igt@gem_ctx_persistence@engines-mixed-process@bcs0.html
    - shard-apl:          [PASS][49] -> [INCOMPLETE][50] ([fdo#103927] / [i915#1197] / [i915#1239])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl1/igt@gem_ctx_persistence@engines-mixed-process@bcs0.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl3/igt@gem_ctx_persistence@engines-mixed-process@bcs0.html

  * igt@gem_ctx_persistence@engines-mixed-process@rcs0:
    - shard-iclb:         [PASS][51] -> [FAIL][52] ([i915#679]) +2 similar issues
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb8/igt@gem_ctx_persistence@engines-mixed-process@rcs0.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb3/igt@gem_ctx_persistence@engines-mixed-process@rcs0.html

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd:
    - shard-iclb:         [PASS][53] -> [INCOMPLETE][54] ([i915#1239]) +1 similar issue
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb7/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb8/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd.html
    - shard-glk:          [PASS][55] -> [INCOMPLETE][56] ([i915#1197] / [i915#1239] / [i915#58] / [k.org#198133]) +1 similar issue
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk5/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk7/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd.html
    - shard-skl:          [PASS][57] -> [INCOMPLETE][58] ([i915#1197] / [i915#1239]) +1 similar issue
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl1/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl3/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd.html

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd1:
    - shard-tglb:         [PASS][59] -> [INCOMPLETE][60] ([i915#1239]) +1 similar issue
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-tglb3/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd1.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-tglb6/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd1.html
    - shard-kbl:          [PASS][61] -> [INCOMPLETE][62] ([i915#1239])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl7/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd1.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl6/igt@gem_ctx_persistence@legacy-engines-mixed-process@bsd1.html

  * igt@gem_ctx_persistence@legacy-engines-mixed-process@render:
    - shard-tglb:         [PASS][63] -> [FAIL][64] ([i915#679]) +1 similar issue
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-tglb3/igt@gem_ctx_persistence@legacy-engines-mixed-process@render.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-tglb6/igt@gem_ctx_persistence@legacy-engines-mixed-process@render.html
    - shard-kbl:          [PASS][65] -> [FAIL][66] ([i915#679]) +2 similar issues
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl7/igt@gem_ctx_persistence@legacy-engines-mixed-process@render.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl6/igt@gem_ctx_persistence@legacy-engines-mixed-process@render.html

  * igt@gem_ctx_persistence@legacy-engines-mixed@blt:
    - shard-apl:          [PASS][67] -> [FAIL][68] ([i915#679]) +5 similar issues
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl4/igt@gem_ctx_persistence@legacy-engines-mixed@blt.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl8/igt@gem_ctx_persistence@legacy-engines-mixed@blt.html
    - shard-glk:          [PASS][69] -> [FAIL][70] ([i915#679]) +6 similar issues
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk4/igt@gem_ctx_persistence@legacy-engines-mixed@blt.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk1/igt@gem_ctx_persistence@legacy-engines-mixed@blt.html
    - shard-skl:          [PASS][71] -> [FAIL][72] ([i915#679]) +6 similar issues
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl7/igt@gem_ctx_persistence@legacy-engines-mixed@blt.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl10/igt@gem_ctx_persistence@legacy-engines-mixed@blt.html

  * igt@gem_ctx_persistence@processes:
    - shard-apl:          [PASS][73] -> [FAIL][74] ([i915#570] / [i915#679])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl7/igt@gem_ctx_persistence@processes.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl2/igt@gem_ctx_persistence@processes.html
    - shard-iclb:         [PASS][75] -> [FAIL][76] ([i915#570])
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb4/igt@gem_ctx_persistence@processes.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb4/igt@gem_ctx_persistence@processes.html
    - shard-glk:          [PASS][77] -> [FAIL][78] ([i915#570] / [i915#679])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk2/igt@gem_ctx_persistence@processes.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk3/igt@gem_ctx_persistence@processes.html
    - shard-kbl:          [PASS][79] -> [FAIL][80] ([i915#570] / [i915#679])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl2/igt@gem_ctx_persistence@processes.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl2/igt@gem_ctx_persistence@processes.html
    - shard-tglb:         [PASS][81] -> [FAIL][82] ([i915#570])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-tglb5/igt@gem_ctx_persistence@processes.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-tglb7/igt@gem_ctx_persistence@processes.html
    - shard-skl:          [PASS][83] -> [FAIL][84] ([i915#570] / [i915#679])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl7/igt@gem_ctx_persistence@processes.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl6/igt@gem_ctx_persistence@processes.html

  * igt@gem_ctx_persistence@replace-hostile@bcs0:
    - shard-skl:          [PASS][85] -> [FAIL][86] ([i915#1154]) +3 similar issues
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl5/igt@gem_ctx_persistence@replace-hostile@bcs0.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl4/igt@gem_ctx_persistence@replace-hostile@bcs0.html
    - shard-apl:          [PASS][87] -> [FAIL][88] ([i915#1154]) +3 similar issues
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl8/igt@gem_ctx_persistence@replace-hostile@bcs0.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl7/igt@gem_ctx_persistence@replace-hostile@bcs0.html
    - shard-glk:          [PASS][89] -> [FAIL][90] ([i915#1154]) +3 similar issues
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk2/igt@gem_ctx_persistence@replace-hostile@bcs0.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk9/igt@gem_ctx_persistence@replace-hostile@bcs0.html

  * igt@gem_ctx_persistence@replace-hostile@vecs0:
    - shard-iclb:         [PASS][91] -> [FAIL][92] ([i915#1154]) +3 similar issues
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb6/igt@gem_ctx_persistence@replace-hostile@vecs0.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb6/igt@gem_ctx_persistence@replace-hostile@vecs0.html

  * igt@gem_ctx_persistence@saturated-hostile@bcs0:
    - shard-kbl:          [PASS][93] -> [FAIL][94] ([i915#1368]) +1 similar issue
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl3/igt@gem_ctx_persistence@saturated-hostile@bcs0.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl4/igt@gem_ctx_persistence@saturated-hostile@bcs0.html

  * igt@gem_exec_balancer@hang:
    - shard-kbl:          [PASS][95] -> [INCOMPLETE][96] ([i915#1212])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl4/igt@gem_exec_balancer@hang.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl6/igt@gem_exec_balancer@hang.html

  * igt@gem_exec_schedule@implicit-write-read-bsd:
    - shard-iclb:         [PASS][97] -> [SKIP][98] ([i915#677])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb3/igt@gem_exec_schedule@implicit-write-read-bsd.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb2/igt@gem_exec_schedule@implicit-write-read-bsd.html

  * igt@gem_exec_schedule@preempt-hang-blt:
    - shard-tglb:         [PASS][99] -> [INCOMPLETE][100] ([fdo#111606]) +1 similar issue
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-tglb1/igt@gem_exec_schedule@preempt-hang-blt.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-tglb8/igt@gem_exec_schedule@preempt-hang-blt.html

  * igt@gem_exec_schedule@preempt-hang-bsd:
    - shard-glk:          [PASS][101] -> [INCOMPLETE][102] ([i915#58] / [k.org#198133]) +9 similar issues
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk3/igt@gem_exec_schedule@preempt-hang-bsd.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk9/igt@gem_exec_schedule@preempt-hang-bsd.html

  * igt@gem_exec_schedule@preempt-hang-vebox:
    - shard-apl:          [PASS][103] -> [INCOMPLETE][104] ([fdo#103927]) +8 similar issues
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl7/igt@gem_exec_schedule@preempt-hang-vebox.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl6/igt@gem_exec_schedule@preempt-hang-vebox.html

  * igt@gem_exec_schedule@preempt-other-chain-bsd:
    - shard-iclb:         [PASS][105] -> [SKIP][106] ([fdo#112146]) +7 similar issues
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb5/igt@gem_exec_schedule@preempt-other-chain-bsd.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb4/igt@gem_exec_schedule@preempt-other-chain-bsd.html

  * igt@gem_exec_schedule@preemptive-hang-blt:
    - shard-iclb:         [PASS][107] -> [INCOMPLETE][108] ([fdo#109100]) +1 similar issue
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb1/igt@gem_exec_schedule@preemptive-hang-blt.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb7/igt@gem_exec_schedule@preemptive-hang-blt.html

  * igt@i915_pm_rpm@cursor:
    - shard-iclb:         [PASS][109] -> [INCOMPLETE][110] ([i915#189])
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb4/igt@i915_pm_rpm@cursor.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb4/igt@i915_pm_rpm@cursor.html

  * igt@i915_pm_rps@waitboost:
    - shard-iclb:         [PASS][111] -> [FAIL][112] ([i915#413])
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb7/igt@i915_pm_rps@waitboost.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb8/igt@i915_pm_rps@waitboost.html

  * igt@i915_selftest@live@execlists:
    - shard-apl:          [PASS][113] -> [INCOMPLETE][114] ([fdo#103927] / [i915#656])
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl6/igt@i915_selftest@live@execlists.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl7/igt@i915_selftest@live@execlists.html

  * igt@kms_flip@flip-vs-expired-vblank:
    - shard-glk:          [PASS][115] -> [FAIL][116] ([i915#79])
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk4/igt@kms_flip@flip-vs-expired-vblank.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk1/igt@kms_flip@flip-vs-expired-vblank.html

  * igt@kms_flip@plain-flip-ts-check-interruptible:
    - shard-skl:          [PASS][117] -> [FAIL][118] ([i915#34])
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl1/igt@kms_flip@plain-flip-ts-check-interruptible.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl3/igt@kms_flip@plain-flip-ts-check-interruptible.html

  * igt@kms_frontbuffer_tracking@fbc-suspend:
    - shard-apl:          [PASS][119] -> [DMESG-WARN][120] ([i915#180])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl8/igt@kms_frontbuffer_tracking@fbc-suspend.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl4/igt@kms_frontbuffer_tracking@fbc-suspend.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [PASS][121] -> [FAIL][122] ([i915#1188])
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl5/igt@kms_hdr@bpc-switch-suspend.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl10/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes:
    - shard-kbl:          [PASS][123] -> [DMESG-WARN][124] ([i915#180]) +1 similar issue
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl7/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl4/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html

  * igt@kms_psr2_su@frontbuffer:
    - shard-iclb:         [PASS][125] -> [SKIP][126] ([fdo#109642] / [fdo#111068])
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb2/igt@kms_psr2_su@frontbuffer.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb1/igt@kms_psr2_su@frontbuffer.html

  * igt@kms_psr@psr2_primary_page_flip:
    - shard-iclb:         [PASS][127] -> [SKIP][128] ([fdo#109441]) +1 similar issue
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb2/igt@kms_psr@psr2_primary_page_flip.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb5/igt@kms_psr@psr2_primary_page_flip.html

  * igt@perf_pmu@busy-no-semaphores-vcs1:
    - shard-iclb:         [PASS][129] -> [SKIP][130] ([fdo#112080]) +10 similar issues
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb2/igt@perf_pmu@busy-no-semaphores-vcs1.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb5/igt@perf_pmu@busy-no-semaphores-vcs1.html

  * igt@prime_vgem@fence-wait-bsd2:
    - shard-iclb:         [PASS][131] -> [SKIP][132] ([fdo#109276]) +17 similar issues
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb1/igt@prime_vgem@fence-wait-bsd2.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb6/igt@prime_vgem@fence-wait-bsd2.html

  
#### Possible fixes ####

  * igt@gem_ctx_isolation@vcs1-none:
    - shard-iclb:         [SKIP][133] ([fdo#112080]) -> [PASS][134] +8 similar issues
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb5/igt@gem_ctx_isolation@vcs1-none.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb1/igt@gem_ctx_isolation@vcs1-none.html

  * igt@gem_exec_schedule@implicit-both-bsd1:
    - shard-iclb:         [SKIP][135] ([fdo#109276] / [i915#677]) -> [PASS][136] +1 similar issue
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb5/igt@gem_exec_schedule@implicit-both-bsd1.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb1/igt@gem_exec_schedule@implicit-both-bsd1.html

  * igt@gem_exec_schedule@pi-common-bsd:
    - shard-iclb:         [SKIP][137] ([i915#677]) -> [PASS][138]
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb1/igt@gem_exec_schedule@pi-common-bsd.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb6/igt@gem_exec_schedule@pi-common-bsd.html

  * igt@gem_exec_schedule@preempt-bsd:
    - shard-iclb:         [SKIP][139] ([fdo#112146]) -> [PASS][140] +4 similar issues
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb1/igt@gem_exec_schedule@preempt-bsd.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb6/igt@gem_exec_schedule@preempt-bsd.html

  * igt@gem_exec_schedule@preempt-contexts-bsd2:
    - shard-iclb:         [SKIP][141] ([fdo#109276]) -> [PASS][142] +11 similar issues
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb3/igt@gem_exec_schedule@preempt-contexts-bsd2.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb2/igt@gem_exec_schedule@preempt-contexts-bsd2.html

  * igt@i915_suspend@fence-restore-untiled:
    - shard-skl:          [INCOMPLETE][143] ([i915#69]) -> [PASS][144]
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-skl3/igt@i915_suspend@fence-restore-untiled.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-skl9/igt@i915_suspend@fence-restore-untiled.html

  * igt@kms_flip@flip-vs-suspend:
    - shard-apl:          [DMESG-WARN][145] ([i915#180]) -> [PASS][146]
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-apl4/igt@kms_flip@flip-vs-suspend.html
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-apl8/igt@kms_flip@flip-vs-suspend.html
    - shard-iclb:         [INCOMPLETE][147] ([i915#1185] / [i915#221]) -> [PASS][148]
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb3/igt@kms_flip@flip-vs-suspend.html
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb2/igt@kms_flip@flip-vs-suspend.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-kbl:          [DMESG-WARN][149] ([i915#180]) -> [PASS][150] +7 similar issues
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl4/igt@kms_hdr@bpc-switch-suspend.html
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl6/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_plane_lowres@pipe-a-tiling-y:
    - shard-glk:          [FAIL][151] ([i915#899]) -> [PASS][152]
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk8/igt@kms_plane_lowres@pipe-a-tiling-y.html
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk5/igt@kms_plane_lowres@pipe-a-tiling-y.html

  * igt@kms_psr@psr2_cursor_mmap_cpu:
    - shard-iclb:         [SKIP][153] ([fdo#109441]) -> [PASS][154] +2 similar issues
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-iclb3/igt@kms_psr@psr2_cursor_mmap_cpu.html
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-iclb2/igt@kms_psr@psr2_cursor_mmap_cpu.html

  
#### Warnings ####

  * igt@gem_ctx_persistence@close-replace-race:
    - shard-kbl:          [INCOMPLETE][155] ([i915#1402]) -> [TIMEOUT][156] ([i915#1340])
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-kbl4/igt@gem_ctx_persistence@close-replace-race.html
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-kbl7/igt@gem_ctx_persistence@close-replace-race.html
    - shard-glk:          [INCOMPLETE][157] ([i915#1402] / [i915#58] / [k.org#198133]) -> [TIMEOUT][158] ([i915#1340])
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-glk6/igt@gem_ctx_persistence@close-replace-race.html
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/shard-glk4/igt@gem_ctx_persistence@close-replace-race.html

  * igt@runner@aborted:
    - shard-hsw:          ([FAIL][159], [FAIL][160], [FAIL][161], [FAIL][162], [FAIL][163], [FAIL][164], [FAIL][165], [FAIL][166]) ([fdo#111870]) -> ([FAIL][167], [FAIL][168], [FAIL][169], [FAIL][170], [FAIL][171], [FAIL][172], [FAIL][173], [FAIL][174]) ([fdo#111870] / [i915#1485])
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-hsw6/igt@runner@aborted.html
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-hsw2/igt@runner@aborted.html
   [161]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-hsw6/igt@runner@aborted.html
   [162]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-hsw6/igt@runner@aborted.html
   [163]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-hsw4/igt@runner@aborted.html
   [164]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8157/shard-hsw4/igt@runner@aborted.html
   [165]: https://intel-gfx-ci.

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17022/index.html

* Re: [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names
  2020-03-19  9:19 [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names Chris Wilson
                   ` (6 preceding siblings ...)
  2020-03-19 11:39 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
@ 2020-03-19 13:53 ` Tvrtko Ursulin
  7 siblings, 0 replies; 21+ messages in thread
From: Tvrtko Ursulin @ 2020-03-19 13:53 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/03/2020 09:19, Chris Wilson wrote:
> %pS includes the offset, which is useful for return addresses but noise
> when we are pretty printing a known (and expected) function entry point.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/i915_sw_fence.c          | 2 +-
>   drivers/gpu/drm/i915/selftests/i915_active.c  | 2 +-
>   drivers/gpu/drm/i915/selftests/i915_request.c | 2 +-
>   3 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
> index a3d38e089b6e..7daf81f55c90 100644
> --- a/drivers/gpu/drm/i915/i915_sw_fence.c
> +++ b/drivers/gpu/drm/i915/i915_sw_fence.c
> @@ -421,7 +421,7 @@ static void timer_i915_sw_fence_wake(struct timer_list *t)
>   	if (!fence)
>   		return;
>   
> -	pr_notice("Asynchronous wait on fence %s:%s:%llx timed out (hint:%pS)\n",
> +	pr_notice("Asynchronous wait on fence %s:%s:%llx timed out (hint:%ps)\n",
>   		  cb->dma->ops->get_driver_name(cb->dma),
>   		  cb->dma->ops->get_timeline_name(cb->dma),
>   		  cb->dma->seqno,
> diff --git a/drivers/gpu/drm/i915/selftests/i915_active.c b/drivers/gpu/drm/i915/selftests/i915_active.c
> index 68bbb1580162..54080fb4af4b 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_active.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_active.c
> @@ -277,7 +277,7 @@ static struct intel_engine_cs *node_to_barrier(struct active_node *it)
>   
>   void i915_active_print(struct i915_active *ref, struct drm_printer *m)
>   {
> -	drm_printf(m, "active %pS:%pS\n", ref->active, ref->retire);
> +	drm_printf(m, "active %ps:%ps\n", ref->active, ref->retire);
>   	drm_printf(m, "\tcount: %d\n", atomic_read(&ref->count));
>   	drm_printf(m, "\tpreallocated barriers? %s\n",
>   		   yesno(!llist_empty(&ref->preallocated_barriers)));
> diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
> index f89d9c42f1fa..7ac9616de9d8 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_request.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_request.c
> @@ -1233,7 +1233,7 @@ static int live_parallel_engines(void *arg)
>   		struct igt_live_test t;
>   		unsigned int idx;
>   
> -		snprintf(name, sizeof(name), "%pS", fn);
> +		snprintf(name, sizeof(name), "%ps", fn);
>   		err = igt_live_test_begin(&t, i915, __func__, name);
>   		if (err)
>   			break;
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [Intel-gfx] [PATCH 2/6] drm/i915/gem: Avoid gem_context->mutex for simple vma lookup
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 2/6] drm/i915/gem: Avoid gem_context->mutex for simple vma lookup Chris Wilson
@ 2020-03-19 14:20   ` Tvrtko Ursulin
  2020-03-19 14:52     ` Chris Wilson
  0 siblings, 1 reply; 21+ messages in thread
From: Tvrtko Ursulin @ 2020-03-19 14:20 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/03/2020 09:19, Chris Wilson wrote:
> As we store the handle lookup inside a radix tree, we do not need the
> gem_context->mutex except until we need to insert our lookup into the
> common radix tree. This takes a small bit of rearranging to ensure that
> the lut we insert into the tree is ready prior to actually inserting it
> (as soon as it is exposed via the radixtree, it is visible to any other
> submission).
> 
> v2: For brownie points, remove the goto spaghetti.
> v3: Tighten up the closed-handle checks.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 136 +++++++++++-------
>   1 file changed, 87 insertions(+), 49 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index d3f4f28e9468..042a9ccf348f 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -481,7 +481,7 @@ eb_add_vma(struct i915_execbuffer *eb,
>   
>   	GEM_BUG_ON(i915_vma_is_closed(vma));
>   
> -	ev->vma = i915_vma_get(vma);
> +	ev->vma = vma;
>   	ev->exec = entry;
>   	ev->flags = entry->flags;
>   
> @@ -728,77 +728,117 @@ static int eb_select_context(struct i915_execbuffer *eb)
>   	return 0;
>   }
>   
> -static int eb_lookup_vmas(struct i915_execbuffer *eb)
> +static int __eb_add_lut(struct i915_execbuffer *eb,
> +			u32 handle, struct i915_vma *vma)
>   {
> -	struct radix_tree_root *handles_vma = &eb->gem_context->handles_vma;
> -	struct drm_i915_gem_object *obj;
> -	unsigned int i, batch;
> +	struct i915_gem_context *ctx = eb->gem_context;
> +	struct i915_lut_handle *lut;
>   	int err;
>   
> -	if (unlikely(i915_gem_context_is_closed(eb->gem_context)))
> -		return -ENOENT;
> +	lut = i915_lut_handle_alloc();
> +	if (unlikely(!lut))
> +		return -ENOMEM;
>   
> -	INIT_LIST_HEAD(&eb->relocs);
> -	INIT_LIST_HEAD(&eb->unbound);
> +	i915_vma_get(vma);
> +	if (!atomic_fetch_inc(&vma->open_count))
> +		i915_vma_reopen(vma);
> +	lut->handle = handle;
> +	lut->ctx = ctx;
> +
> +	/* Check that the context hasn't been closed in the meantime */
> +	err = -EINTR;
> +	if (!mutex_lock_interruptible(&ctx->mutex)) {
> +		err = -ENOENT;
> +		if (likely(!i915_gem_context_is_closed(ctx)))
> +			err = radix_tree_insert(&ctx->handles_vma, handle, vma);
> +		if (err == 0) { /* And nor has this handle */
> +			struct drm_i915_gem_object *obj = vma->obj;
> +
> +			i915_gem_object_lock(obj);
> +			if (idr_find(&eb->file->object_idr, handle) == obj) {
> +				list_add(&lut->obj_link, &obj->lut_list);
> +			} else {
> +				radix_tree_delete(&ctx->handles_vma, handle);
> +				err = -ENOENT;
> +			}
> +			i915_gem_object_unlock(obj);
> +		}
> +		mutex_unlock(&ctx->mutex);
> +	}
> +	if (unlikely(err))
> +		goto err;
>   
> -	batch = eb_batch_index(eb);
> +	return 0;
>   
> -	for (i = 0; i < eb->buffer_count; i++) {
> -		u32 handle = eb->exec[i].handle;
> -		struct i915_lut_handle *lut;
> +err:
> +	atomic_dec(&vma->open_count);
> +	i915_vma_put(vma);
> +	i915_lut_handle_free(lut);
> +	return err;
> +}
> +
> +static struct i915_vma *eb_lookup_vma(struct i915_execbuffer *eb, u32 handle)
> +{
> +	do {
> +		struct drm_i915_gem_object *obj;
>   		struct i915_vma *vma;
> +		int err;
>   
> -		vma = radix_tree_lookup(handles_vma, handle);
> +		rcu_read_lock();
> +		vma = radix_tree_lookup(&eb->gem_context->handles_vma, handle);

radix_tree_lookup is documented to be RCU safe, okay. How about freeing 
VMAs - is that done after an RCU grace period?

Regards,

Tvrtko

> +		if (likely(vma))
> +			vma = i915_vma_tryget(vma);
> +		rcu_read_unlock();
>   		if (likely(vma))
> -			goto add_vma;
> +			return vma;
>   
>   		obj = i915_gem_object_lookup(eb->file, handle);
> -		if (unlikely(!obj)) {
> -			err = -ENOENT;
> -			goto err_vma;
> -		}
> +		if (unlikely(!obj))
> +			return ERR_PTR(-ENOENT);
>   
>   		vma = i915_vma_instance(obj, eb->context->vm, NULL);
>   		if (IS_ERR(vma)) {
> -			err = PTR_ERR(vma);
> -			goto err_obj;
> +			i915_gem_object_put(obj);
> +			return vma;
>   		}
>   
> -		lut = i915_lut_handle_alloc();
> -		if (unlikely(!lut)) {
> -			err = -ENOMEM;
> -			goto err_obj;
> -		}
> +		err = __eb_add_lut(eb, handle, vma);
> +		if (likely(!err))
> +			return vma;
>   
> -		err = radix_tree_insert(handles_vma, handle, vma);
> -		if (unlikely(err)) {
> -			i915_lut_handle_free(lut);
> -			goto err_obj;
> -		}
> +		i915_gem_object_put(obj);
> +		if (err != -EEXIST)
> +			return ERR_PTR(err);
> +	} while (1);
> +}
>   
> -		/* transfer ref to lut */
> -		if (!atomic_fetch_inc(&vma->open_count))
> -			i915_vma_reopen(vma);
> -		lut->handle = handle;
> -		lut->ctx = eb->gem_context;
> +static int eb_lookup_vmas(struct i915_execbuffer *eb)
> +{
> +	unsigned int batch = eb_batch_index(eb);
> +	unsigned int i;
> +	int err = 0;
>   
> -		i915_gem_object_lock(obj);
> -		list_add(&lut->obj_link, &obj->lut_list);
> -		i915_gem_object_unlock(obj);
> +	INIT_LIST_HEAD(&eb->relocs);
> +	INIT_LIST_HEAD(&eb->unbound);
> +
> +	for (i = 0; i < eb->buffer_count; i++) {
> +		struct i915_vma *vma;
> +
> +		vma = eb_lookup_vma(eb, eb->exec[i].handle);
> +		if (IS_ERR(vma)) {
> +			err = PTR_ERR(vma);
> +			break;
> +		}
>   
> -add_vma:
>   		err = eb_validate_vma(eb, &eb->exec[i], vma);
> -		if (unlikely(err))
> -			goto err_vma;
> +		if (unlikely(err)) {
> +			i915_vma_put(vma);
> +			break;
> +		}
>   
>   		eb_add_vma(eb, i, batch, vma);
>   	}
>   
> -	return 0;
> -
> -err_obj:
> -	i915_gem_object_put(obj);
> -err_vma:
>   	eb->vma[i].vma = NULL;
>   	return err;
>   }
> @@ -1494,9 +1534,7 @@ static int eb_relocate(struct i915_execbuffer *eb)
>   {
>   	int err;
>   
> -	mutex_lock(&eb->gem_context->mutex);
>   	err = eb_lookup_vmas(eb);
> -	mutex_unlock(&eb->gem_context->mutex);
>   	if (err)
>   		return err;
>   
> 

* Re: [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels Chris Wilson
@ 2020-03-19 14:31   ` Tvrtko Ursulin
  2020-03-19 15:02     ` Chris Wilson
  0 siblings, 1 reply; 21+ messages in thread
From: Tvrtko Ursulin @ 2020-03-19 14:31 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/03/2020 09:19, Chris Wilson wrote:
> Currently, we only combine a sentinel request with a max-priority
> barrier such that a sentinel request is always in ELSP[0] with nothing
> following it. However, we will want to create similar ELSP[] submissions
> providing a full-barrier in the submission queue, but without forcing
> maximum priority. As such I915_FENCE_FLAG_SENTINEL takes on the
> single-submission property and so we can remove the gvt special casing.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/gt/intel_context.h       | 24 +++++++-------
>   drivers/gpu/drm/i915/gt/intel_context_types.h |  4 +--
>   drivers/gpu/drm/i915/gt/intel_lrc.c           | 33 +++++--------------
>   drivers/gpu/drm/i915/gvt/scheduler.c          |  7 ++--
>   4 files changed, 26 insertions(+), 42 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
> index 18efad255124..ee5d47165c12 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context.h
> +++ b/drivers/gpu/drm/i915/gt/intel_context.h
> @@ -198,18 +198,6 @@ static inline bool intel_context_set_banned(struct intel_context *ce)
>   	return test_and_set_bit(CONTEXT_BANNED, &ce->flags);
>   }
>   
> -static inline bool
> -intel_context_force_single_submission(const struct intel_context *ce)
> -{
> -	return test_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ce->flags);
> -}
> -
> -static inline void
> -intel_context_set_single_submission(struct intel_context *ce)
> -{
> -	__set_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ce->flags);
> -}
> -
>   static inline bool
>   intel_context_nopreempt(const struct intel_context *ce)
>   {
> @@ -228,6 +216,18 @@ intel_context_clear_nopreempt(struct intel_context *ce)
>   	clear_bit(CONTEXT_NOPREEMPT, &ce->flags);
>   }
>   
> +static inline bool
> +intel_context_is_gvt(const struct intel_context *ce)
> +{
> +	return test_bit(CONTEXT_GVT, &ce->flags);
> +}
> +
> +static inline void
> +intel_context_set_gvt(struct intel_context *ce)
> +{
> +	set_bit(CONTEXT_GVT, &ce->flags);
> +}
> +
>   static inline u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
>   {
>   	const u32 period =
> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
> index 0f3b68b95c56..fd2703efc10c 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
> @@ -64,8 +64,8 @@ struct intel_context {
>   #define CONTEXT_VALID_BIT		2
>   #define CONTEXT_USE_SEMAPHORES		3
>   #define CONTEXT_BANNED			4
> -#define CONTEXT_FORCE_SINGLE_SUBMISSION	5
> -#define CONTEXT_NOPREEMPT		6
> +#define CONTEXT_NOPREEMPT		5
> +#define CONTEXT_GVT			6
>   
>   	u32 *lrc_reg_state;
>   	u64 lrc_desc;
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index 112531b29f59..f0c4084c5b9a 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -1579,22 +1579,10 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
>   		writel(EL_CTRL_LOAD, execlists->ctrl_reg);
>   }
>   
> -static bool ctx_single_port_submission(const struct intel_context *ce)
> -{
> -	return (IS_ENABLED(CONFIG_DRM_I915_GVT) &&
> -		intel_context_force_single_submission(ce));
> -}
> -
>   static bool can_merge_ctx(const struct intel_context *prev,
>   			  const struct intel_context *next)
>   {
> -	if (prev != next)
> -		return false;
> -
> -	if (ctx_single_port_submission(prev))
> -		return false;
> -
> -	return true;
> +	return prev == next;
>   }
>   
>   static unsigned long i915_request_flags(const struct i915_request *rq)
> @@ -1844,6 +1832,12 @@ static inline void clear_ports(struct i915_request **ports, int count)
>   	memset_p((void **)ports, NULL, count);
>   }
>   
> +static bool has_sentinel(struct i915_request *prev, struct i915_request *next)
> +{
> +	return (i915_request_flags(prev) | i915_request_flags(next)) &
> +		BIT(I915_FENCE_FLAG_SENTINEL);
> +}
> +
>   static void execlists_dequeue(struct intel_engine_cs *engine)
>   {
>   	struct intel_engine_execlists * const execlists = &engine->execlists;
> @@ -2125,18 +2119,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>   				if (last->context == rq->context)
>   					goto done;
>   
> -				if (i915_request_has_sentinel(last))
> -					goto done;
> -
> -				/*
> -				 * If GVT overrides us we only ever submit
> -				 * port[0], leaving port[1] empty. Note that we
> -				 * also have to be careful that we don't queue
> -				 * the same context (even though a different
> -				 * request) to the second port.
> -				 */
> -				if (ctx_single_port_submission(last->context) ||
> -				    ctx_single_port_submission(rq->context))
> +				if (has_sentinel(last, rq))
>   					goto done;

I am only confused by can_merge_rq saying two sentinel requests can be
merged together:

	if (unlikely((i915_request_flags(prev) ^ i915_request_flags(next)) &
		     (BIT(I915_FENCE_FLAG_NOPREEMPT) |
		      BIT(I915_FENCE_FLAG_SENTINEL))))
		return false;

What am I missing?

Regards,

Tvrtko

>   
>   				merge = false;
> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> index 1c95bf8cbed0..4fccf4b194b0 100644
> --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> @@ -204,9 +204,9 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload)
>   	return 0;
>   }
>   
> -static inline bool is_gvt_request(struct i915_request *rq)
> +static inline bool is_gvt_request(const struct i915_request *rq)
>   {
> -	return intel_context_force_single_submission(rq->context);
> +	return intel_context_is_gvt(rq->context);
>   }
>   
>   static void save_ring_hw_state(struct intel_vgpu *vgpu,
> @@ -401,6 +401,7 @@ intel_gvt_workload_req_alloc(struct intel_vgpu_workload *workload)
>   		return PTR_ERR(rq);
>   	}
>   
> +	__set_bit(I915_FENCE_FLAG_SENTINEL, &rq->fence.flags);
>   	workload->req = i915_request_get(rq);
>   	return 0;
>   }
> @@ -1226,7 +1227,7 @@ int intel_vgpu_setup_submission(struct intel_vgpu *vgpu)
>   
>   		i915_vm_put(ce->vm);
>   		ce->vm = i915_vm_get(&ppgtt->vm);
> -		intel_context_set_single_submission(ce);
> +		intel_context_set_gvt(ce);
>   
>   		/* Max ring buffer size */
>   		if (!intel_uc_wants_guc_submission(&engine->gt->uc)) {
> 

* Re: [Intel-gfx] [PATCH 4/6] drm/i915/gem: Wait until the context is finally retired before releasing engines
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 4/6] drm/i915/gem: Wait until the context is finally retired before releasing engines Chris Wilson
@ 2020-03-19 14:36   ` Tvrtko Ursulin
  0 siblings, 0 replies; 21+ messages in thread
From: Tvrtko Ursulin @ 2020-03-19 14:36 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/03/2020 09:19, Chris Wilson wrote:
> If we want to percolate information back from the HW, up through the GEM
> context, we need to wait until the intel_context is scheduled out for
> the last time. This is handled by the retirement of the intel_context's
> barrier, i.e. by listening to the pulse after the notional unpin.
> 
> To accommodate this, we need to be able to flush the i915_active's
> barriers before awaiting on them. However, this also requires us to
> ensure the context is unpinned *before* the barrier request can be
> signaled, so mark it as a sentinel.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/gem/i915_gem_context.c | 17 ++++------
>   drivers/gpu/drm/i915/i915_active.c          | 37 ++++++++++++++++-----
>   drivers/gpu/drm/i915/i915_active.h          |  3 +-
>   3 files changed, 37 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index c0e476fcd1fa..05fed8797d37 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -570,23 +570,20 @@ static void engines_idle_release(struct i915_gem_context *ctx,
>   	engines->ctx = i915_gem_context_get(ctx);
>   
>   	for_each_gem_engine(ce, engines, it) {
> -		struct dma_fence *fence;
> -		int err = 0;
> +		int err;
>   
>   		/* serialises with execbuf */
>   		RCU_INIT_POINTER(ce->gem_context, NULL);
>   		if (!intel_context_pin_if_active(ce))
>   			continue;
>   
> -		fence = i915_active_fence_get(&ce->timeline->last_request);
> -		if (fence) {
> -			err = i915_sw_fence_await_dma_fence(&engines->fence,
> -							    fence, 0,
> -							    GFP_KERNEL);
> -			dma_fence_put(fence);
> -		}
> +		/* Wait until context is finally scheduled out and retired */
> +		err = i915_sw_fence_await_active(&engines->fence,
> +						 &ce->active,
> +						 I915_ACTIVE_AWAIT_ACTIVE |
> +						 I915_ACTIVE_AWAIT_BARRIER);
>   		intel_context_unpin(ce);
> -		if (err < 0)
> +		if (err)
>   			goto kill;
>   	}
>   
> diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
> index c4048628188a..da7d35f66dd0 100644
> --- a/drivers/gpu/drm/i915/i915_active.c
> +++ b/drivers/gpu/drm/i915/i915_active.c
> @@ -518,19 +518,18 @@ int i915_active_wait(struct i915_active *ref)
>   	return 0;
>   }
>   
> -static int __await_active(struct i915_active_fence *active,
> -			  int (*fn)(void *arg, struct dma_fence *fence),
> -			  void *arg)
> +static int __await_fence(struct i915_active_fence *active,
> +			 int (*fn)(void *arg, struct dma_fence *fence),
> +			 void *arg)
>   {
>   	struct dma_fence *fence;
> +	int err;
>   
> -	if (is_barrier(active)) /* XXX flush the barrier? */
> +	if (is_barrier(active))
>   		return 0;
>   
>   	fence = i915_active_fence_get(active);
>   	if (fence) {
> -		int err;
> -
>   		err = fn(arg, fence);
>   		dma_fence_put(fence);
>   		if (err < 0)
> @@ -540,6 +539,22 @@ static int __await_active(struct i915_active_fence *active,
>   	return 0;
>   }
>   
> +static int __await_active(struct active_node *it,
> +			  unsigned int flags,
> +			  int (*fn)(void *arg, struct dma_fence *fence),
> +			  void *arg)
> +{
> +	int err;
> +
> +	if (flags & I915_ACTIVE_AWAIT_BARRIER) {
> +		err = flush_barrier(it);
> +		if (err)
> +			return err;
> +	}
> +
> +	return __await_fence(&it->base, fn, arg);
> +}
> +
>   static int await_active(struct i915_active *ref,
>   			unsigned int flags,
>   			int (*fn)(void *arg, struct dma_fence *fence),
> @@ -549,16 +564,17 @@ static int await_active(struct i915_active *ref,
>   
>   	/* We must always wait for the exclusive fence! */
>   	if (rcu_access_pointer(ref->excl.fence)) {
> -		err = __await_active(&ref->excl, fn, arg);
> +		err = __await_fence(&ref->excl, fn, arg);
>   		if (err)
>   			return err;
>   	}
>   
> -	if (flags & I915_ACTIVE_AWAIT_ALL && i915_active_acquire_if_busy(ref)) {
> +	if (flags & I915_ACTIVE_AWAIT_ACTIVE &&
> +	    i915_active_acquire_if_busy(ref)) {
>   		struct active_node *it, *n;
>   
>   		rbtree_postorder_for_each_entry_safe(it, n, &ref->tree, node) {
> -			err = __await_active(&it->base, fn, arg);
> +			err = __await_active(it, flags, fn, arg);
>   			if (err)
>   				break;
>   		}
> @@ -852,6 +868,9 @@ void i915_request_add_active_barriers(struct i915_request *rq)
>   		list_add_tail((struct list_head *)node, &rq->fence.cb_list);
>   	}
>   	spin_unlock_irqrestore(&rq->lock, flags);
> +
> +	/* Ensure that all who came before the barrier are flushed out */
> +	__set_bit(I915_FENCE_FLAG_SENTINEL, &rq->fence.flags);
>   }
>   
>   /*
> diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
> index b3282ae7913c..9697592235fa 100644
> --- a/drivers/gpu/drm/i915/i915_active.h
> +++ b/drivers/gpu/drm/i915/i915_active.h
> @@ -189,7 +189,8 @@ int i915_sw_fence_await_active(struct i915_sw_fence *fence,
>   int i915_request_await_active(struct i915_request *rq,
>   			      struct i915_active *ref,
>   			      unsigned int flags);
> -#define I915_ACTIVE_AWAIT_ALL BIT(0)
> +#define I915_ACTIVE_AWAIT_ACTIVE BIT(0)
> +#define I915_ACTIVE_AWAIT_BARRIER BIT(1)
>   
>   int i915_active_acquire(struct i915_active *ref);
>   bool i915_active_acquire_if_busy(struct i915_active *ref);
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [Intel-gfx] [PATCH 6/6] drm/i915/gt: Cancel a hung context if already closed
  2020-03-19  9:19 ` [Intel-gfx] [PATCH 6/6] drm/i915/gt: Cancel a hung context if already closed Chris Wilson
@ 2020-03-19 14:40   ` Tvrtko Ursulin
  2020-03-19 14:43     ` Tvrtko Ursulin
  0 siblings, 1 reply; 21+ messages in thread
From: Tvrtko Ursulin @ 2020-03-19 14:40 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/03/2020 09:19, Chris Wilson wrote:
> Use the restored ability to check if a context is closed to decide
> whether or not to immediately ban the context from further execution
> after a hang.
> 
> Fixes: be90e344836a ("drm/i915/gt: Cancel banned contexts after GT reset")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_reset.c | 5 +++++
>   1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
> index 9a15bdf31c7f..003f26b42998 100644
> --- a/drivers/gpu/drm/i915/gt/intel_reset.c
> +++ b/drivers/gpu/drm/i915/gt/intel_reset.c
> @@ -88,6 +88,11 @@ static bool mark_guilty(struct i915_request *rq)
>   	bool banned;
>   	int i;
>   
> +	if (intel_context_is_closed(rq->context)) {
> +		intel_context_set_banned(rq->context);
> +		return true;
> +	}
> +
>   	rcu_read_lock();
>   	ctx = rcu_dereference(rq->context->gem_context);
>   	if (ctx && !kref_get_unless_zero(&ctx->ref))
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [Intel-gfx] [PATCH 6/6] drm/i915/gt: Cancel a hung context if already closed
  2020-03-19 14:40   ` Tvrtko Ursulin
@ 2020-03-19 14:43     ` Tvrtko Ursulin
  0 siblings, 0 replies; 21+ messages in thread
From: Tvrtko Ursulin @ 2020-03-19 14:43 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/03/2020 14:40, Tvrtko Ursulin wrote:
> 
> On 19/03/2020 09:19, Chris Wilson wrote:
>> Use the restored ability to check if a context is closed to decide
>> whether or not to immediately ban the context from further execution
>> after a hang.
>>
>> Fixes: be90e344836a ("drm/i915/gt: Cancel banned contexts after GT 
>> reset")
>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/intel_reset.c | 5 +++++
>>   1 file changed, 5 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c 
>> b/drivers/gpu/drm/i915/gt/intel_reset.c
>> index 9a15bdf31c7f..003f26b42998 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_reset.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_reset.c
>> @@ -88,6 +88,11 @@ static bool mark_guilty(struct i915_request *rq)
>>       bool banned;
>>       int i;
>> +    if (intel_context_is_closed(rq->context)) {
>> +        intel_context_set_banned(rq->context);
>> +        return true;
>> +    }
>> +
>>       rcu_read_lock();
>>       ctx = rcu_dereference(rq->context->gem_context);
>>       if (ctx && !kref_get_unless_zero(&ctx->ref))
>>
> 
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Although the shards are reporting something is not quite right. Is it this 
patch? Doesn't look like it...

Regards,

Tvrtko



* Re: [Intel-gfx] [PATCH 2/6] drm/i915/gem: Avoid gem_context->mutex for simple vma lookup
  2020-03-19 14:20   ` Tvrtko Ursulin
@ 2020-03-19 14:52     ` Chris Wilson
  2020-03-19 16:02       ` Chris Wilson
  0 siblings, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2020-03-19 14:52 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2020-03-19 14:20:04)
> 
> On 19/03/2020 09:19, Chris Wilson wrote:
> > +static struct i915_vma *eb_lookup_vma(struct i915_execbuffer *eb, u32 handle)
> > +{
> > +     do {
> > +             struct drm_i915_gem_object *obj;
> >               struct i915_vma *vma;
> > +             int err;
> >   
> > -             vma = radix_tree_lookup(handles_vma, handle);
> > +             rcu_read_lock();
> > +             vma = radix_tree_lookup(&eb->gem_context->handles_vma, handle);
> 
> radix_tree_lookup is documented to be RCU safe okay. How about freeing 
> VMAs - is that done after a RCU grace period?

As we are still stuck with the horrible i915_vma.kref semantics (yes, I
know I'm supposed to be fixing that), there are 3 paths which may
destroy i915_vma: the object (RCU safe), the vm (RCU safe) and
i915_vma_parked, not safe in any way shape or form.

A quick and dirty solution would be to move i915_vma_parked behind an
rcu work.
-Chris

* Re: [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels
  2020-03-19 14:31   ` Tvrtko Ursulin
@ 2020-03-19 15:02     ` Chris Wilson
  2020-03-19 15:11       ` Tvrtko Ursulin
  0 siblings, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2020-03-19 15:02 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2020-03-19 14:31:36)
> 
> On 19/03/2020 09:19, Chris Wilson wrote:
> > +                             if (has_sentinel(last, rq))
> >                                       goto done;
> 
> I am only confused by can_merge_rq saying two sentinel requests can be
> merged together:
> 
>         if (unlikely((i915_request_flags(prev) ^ i915_request_flags(next)) &
>                      (BIT(I915_FENCE_FLAG_NOPREEMPT) |
>                       BIT(I915_FENCE_FLAG_SENTINEL))))
>                 return false;
> 
> What am I missing?

I thought it was fine to coalesce consecutive sentinels within the
context into one.

Except you're thinking about gvt, and not my usage :|
-Chris

* Re: [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels
  2020-03-19 15:02     ` Chris Wilson
@ 2020-03-19 15:11       ` Tvrtko Ursulin
  2020-03-19 15:21         ` Chris Wilson
  0 siblings, 1 reply; 21+ messages in thread
From: Tvrtko Ursulin @ 2020-03-19 15:11 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 19/03/2020 15:02, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-19 14:31:36)
>>
>> On 19/03/2020 09:19, Chris Wilson wrote:
>>> +                             if (has_sentinel(last, rq))
>>>                                        goto done;
>>
>> I am only confused by can_merge_rq saying two sentinel requests can be
>> merged together:
>>
>>          if (unlikely((i915_request_flags(prev) ^ i915_request_flags(next)) &
>>                       (BIT(I915_FENCE_FLAG_NOPREEMPT) |
>>                        BIT(I915_FENCE_FLAG_SENTINEL))))
>>                  return false;
>>
>> What am I missing?
> 
> I thought it was fine to coalesce consecutive sentinels within the
> context into one.
> 
> Except you're thinking about gvt, and not my usage :|

Sentinel is like "only one context in ELSP at a time", right? This is 
what GVT wants. And for the active barrier we want a single ELSP entry, not 
coalesced with non-sentinel requests from the same context. But sentinels 
are on the kernel context, so should be fine. Although it may still be 
clearer to make them not coalesce as well.

Regards,

Tvrtko

* Re: [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels
  2020-03-19 15:11       ` Tvrtko Ursulin
@ 2020-03-19 15:21         ` Chris Wilson
  2020-03-19 15:31           ` Chris Wilson
  0 siblings, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2020-03-19 15:21 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2020-03-19 15:11:49)
> 
> On 19/03/2020 15:02, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-03-19 14:31:36)
> >>
> >> On 19/03/2020 09:19, Chris Wilson wrote:
> >>> +                             if (has_sentinel(last, rq))
> >>>                                        goto done;
> >>
> >> I am only confused by can_merge_rq saying two sentinel requests can be
> >> merged together:
> >>
> >>          if (unlikely((i915_request_flags(prev) ^ i915_request_flags(next)) &
> >>                       (BIT(I915_FENCE_FLAG_NOPREEMPT) |
> >>                        BIT(I915_FENCE_FLAG_SENTINEL))))
> >>                  return false;
> >>
> >> What am I missing?
> > 
> > I thought it was fine to coalesce consecutive sentinels within the
> > context into one.
> > 
> > Except you're thinking about gvt, and not my usage :|
> 
> Sentinel is like "only one context in elsp at a time", right?
> This is what GVT wants.

GVT wants one request. For my purpose, it was just one context.

> And for the active barrier we want single elsp and not 
> coalesced with non-sentinel from the same context. But sentinels are 
> kernel context, so should be fine. Although it still may be clearer to 
> make them not coalesce as well.

The frequency of non-barrier operations along the kernel context
should not be high enough that we gain anything by coalescing mixed
barrier/non-barrier request streams. I hope.

On the other hand we do want to coalesce NOPREEMPT streams. Oh well, my
hope for pulling it all in one bitops seems to be fading away.

First though, I have to answer the question of how I broke persistence.
-Chris

* Re: [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels
  2020-03-19 15:21         ` Chris Wilson
@ 2020-03-19 15:31           ` Chris Wilson
  2020-03-19 16:58             ` Chris Wilson
  0 siblings, 1 reply; 21+ messages in thread
From: Chris Wilson @ 2020-03-19 15:31 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Chris Wilson (2020-03-19 15:21:41)
> First though, I have to answer the question of how I broke persistence.

Fwiw, it's the await_active update. Time to double check whether it is
flushing the barrier on demand.
-Chris

* Re: [Intel-gfx] [PATCH 2/6] drm/i915/gem: Avoid gem_context->mutex for simple vma lookup
  2020-03-19 14:52     ` Chris Wilson
@ 2020-03-19 16:02       ` Chris Wilson
  0 siblings, 0 replies; 21+ messages in thread
From: Chris Wilson @ 2020-03-19 16:02 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Chris Wilson (2020-03-19 14:52:21)
> Quoting Tvrtko Ursulin (2020-03-19 14:20:04)
> > 
> > On 19/03/2020 09:19, Chris Wilson wrote:
> > > +static struct i915_vma *eb_lookup_vma(struct i915_execbuffer *eb, u32 handle)
> > > +{
> > > +     do {
> > > +             struct drm_i915_gem_object *obj;
> > >               struct i915_vma *vma;
> > > +             int err;
> > >   
> > > -             vma = radix_tree_lookup(handles_vma, handle);
> > > +             rcu_read_lock();
> > > +             vma = radix_tree_lookup(&eb->gem_context->handles_vma, handle);
> > 
> > radix_tree_lookup is documented to be RCU safe okay. How about freeing 
> > VMAs - is that done after a RCU grace period?
> 
> As we are still stuck with the horrible i915_vma.kref semantics (yes, I
> know I'm supposed to be fixing that), there are 3 paths which may
> destroy i915_vma: the object (RCU safe), the vm (RCU safe) and
> i915_vma_parked, not safe in any way shape or form.

Actually, the nasty fact I keep forgetting is that i915_vma_parked is
serialised with execbuf by virtue of the engine-pm. That has caught me
out many times, but is why this is safe (and takes extra effort to
convert to kref).
-Chris

* Re: [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels
  2020-03-19 15:31           ` Chris Wilson
@ 2020-03-19 16:58             ` Chris Wilson
  0 siblings, 0 replies; 21+ messages in thread
From: Chris Wilson @ 2020-03-19 16:58 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Chris Wilson (2020-03-19 15:31:41)
> Quoting Chris Wilson (2020-03-19 15:21:41)
> > First though, I have to answer the question of how I broke persistence.
> 
> Fwiw, it's the await_active update. Time to double check whether it is
> flushing the barrier on demand.

And it's because I didn't take the lingering preallocated barriers into
account.
-Chris

end of thread, other threads:[~2020-03-19 16:58 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-19  9:19 [Intel-gfx] [PATCH 1/6] drm/i915: Prefer '%ps' for printing function symbol names Chris Wilson
2020-03-19  9:19 ` [Intel-gfx] [PATCH 2/6] drm/i915/gem: Avoid gem_context->mutex for simple vma lookup Chris Wilson
2020-03-19 14:20   ` Tvrtko Ursulin
2020-03-19 14:52     ` Chris Wilson
2020-03-19 16:02       ` Chris Wilson
2020-03-19  9:19 ` [Intel-gfx] [PATCH 3/6] drm/i915/execlists: Force single submission for sentinels Chris Wilson
2020-03-19 14:31   ` Tvrtko Ursulin
2020-03-19 15:02     ` Chris Wilson
2020-03-19 15:11       ` Tvrtko Ursulin
2020-03-19 15:21         ` Chris Wilson
2020-03-19 15:31           ` Chris Wilson
2020-03-19 16:58             ` Chris Wilson
2020-03-19  9:19 ` [Intel-gfx] [PATCH 4/6] drm/i915/gem: Wait until the context is finally retired before releasing engines Chris Wilson
2020-03-19 14:36   ` Tvrtko Ursulin
2020-03-19  9:19 ` [Intel-gfx] [PATCH 5/6] drm/i915: Use explicit flag to mark unreachable intel_context Chris Wilson
2020-03-19  9:19 ` [Intel-gfx] [PATCH 6/6] drm/i915/gt: Cancel a hung context if already closed Chris Wilson
2020-03-19 14:40   ` Tvrtko Ursulin
2020-03-19 14:43     ` Tvrtko Ursulin
2020-03-19 10:20 ` [Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/6] drm/i915: Prefer '%ps' for printing function symbol names Patchwork
2020-03-19 11:39 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2020-03-19 13:53 ` [Intel-gfx] [PATCH 1/6] " Tvrtko Ursulin
