* [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts
@ 2018-03-27 21:01 Chris Wilson
  2018-03-27 21:01 ` [PATCH 2/3] drm/i915/execlists: Delay updating ring register state after resume Chris Wilson
                   ` (7 more replies)
  0 siblings, 8 replies; 16+ messages in thread
From: Chris Wilson @ 2018-03-27 21:01 UTC
  To: intel-gfx

Tvrtko uncovered a fun issue with recovering from a wedged device. In his
tests, he wedged the driver by injecting an unrecoverable hang whilst a
batch was spinning. As we reset the GPU in the middle of the spinner,
when resumed it would continue on from the next instruction in the ring
and write its breadcrumb. However, on wedging we updated our
bookkeeping to indicate that the GPU had completed executing and would
restart from after the breadcrumb; so the emission of the stale
breadcrumb from before the reset came as a bit of a surprise.

A simple fix is, when rebinding the context into the GPU, to update
the ring register state in the context image to match our bookkeeping.
We already have to update RING_START and RING_TAIL, so updating
RING_HEAD as well is trivial. This works because whenever we unbind the
context, we keep the bookkeeping in check; and on wedging we unbind all
contexts.
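
Purely as an illustration (a toy user-space model with invented names,
not the driver's structures), the invariant the fix restores can be
sketched as:

#include <assert.h>
#include <stdint.h>

struct toy_ring { uint32_t head, tail; };          /* SW bookkeeping */
struct toy_image { uint32_t reg_head, reg_tail; }; /* context image  */

/* Wedging declares everything complete: the SW head catches up. */
static void toy_wedge(struct toy_ring *ring)
{
	ring->head = ring->tail;
}

/* The fix: rebinding copies the bookkeeping into the context image,
 * so the GPU resumes from where the driver believes it is instead of
 * replaying stale commands and re-emitting their breadcrumb. */
static void toy_context_pin(struct toy_image *ce, const struct toy_ring *ring)
{
	ce->reg_head = ring->head;
	ce->reg_tail = ring->tail;
}

int main(void)
{
	struct toy_ring ring = { .head = 0x100, .tail = 0x200 };
	struct toy_image ce = { .reg_head = 0x100, .reg_tail = 0x200 };

	toy_wedge(&ring);
	toy_context_pin(&ce, &ring);
	assert(ce.reg_head == ring.head); /* no rollback to the stale head */
	return 0;
}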

Testcase: igt/gem_eio
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index ba7f7831f934..654634254b64 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1272,6 +1272,7 @@ execlists_context_pin(struct intel_engine_cs *engine,
 	ce->lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
 	ce->lrc_reg_state[CTX_RING_BUFFER_START+1] =
 		i915_ggtt_offset(ce->ring->vma);
+	ce->lrc_reg_state[CTX_RING_HEAD+1] = ce->ring->head;
 
 	ce->state->obj->pin_global++;
 	i915_gem_context_get(ctx);
-- 
2.16.3


* [PATCH 2/3] drm/i915/execlists: Delay updating ring register state after resume
  2018-03-27 21:01 [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Chris Wilson
@ 2018-03-27 21:01 ` Chris Wilson
  2018-03-27 23:05   ` Chris Wilson
  2018-03-27 23:10   ` [PATCH v2] " Chris Wilson
  2018-03-27 21:01 ` [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno Chris Wilson
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 16+ messages in thread
From: Chris Wilson @ 2018-03-27 21:01 UTC
  To: intel-gfx

Now that we reload both RING_HEAD and RING_TAIL when rebinding the
context, we do not need to scrub those registers immediately on resume.
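
A minimal sketch of the division of labour this relies on (toy names,
invented, not the driver's API): resume only resets the software ring
bookkeeping, and the context image registers are rewritten from that
bookkeeping when the context is next pinned:

#include <assert.h>

struct toy_ctx {
	unsigned int ring_head, ring_tail; /* SW bookkeeping */
	unsigned int reg_head, reg_tail;   /* context image  */
};

static void toy_resume(struct toy_ctx *ce)
{
	/* cf. intel_ring_reset(ce->ring, 0): no register scrub here */
	ce->ring_head = ce->ring_tail = 0;
}

static void toy_pin(struct toy_ctx *ce)
{
	/* cf. execlists_context_pin(): the image is refreshed on rebind */
	ce->reg_head = ce->ring_head;
	ce->reg_tail = ce->ring_tail;
}

int main(void)
{
	struct toy_ctx ce = { 96, 128, 96, 128 };

	toy_resume(&ce);
	toy_pin(&ce);
	assert(ce.reg_head == 0 && ce.reg_tail == 0);
	return 0;
}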

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 27 ++++++++-------------------
 1 file changed, 8 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 654634254b64..ed2c833a8b20 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2536,13 +2536,14 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
 	return ret;
 }
 
-void intel_lr_context_resume(struct drm_i915_private *dev_priv)
+void intel_lr_context_resume(struct drm_i915_private *i915)
 {
 	struct intel_engine_cs *engine;
 	struct i915_gem_context *ctx;
 	enum intel_engine_id id;
 
-	/* Because we emit WA_TAIL_DWORDS there may be a disparity
+	/*
+	 * Because we emit WA_TAIL_DWORDS there may be a disparity
 	 * between our bookkeeping in ce->ring->head and ce->ring->tail and
 	 * that stored in context. As we only write new commands from
 	 * ce->ring->tail onwards, everything before that is junk. If the GPU
@@ -2552,26 +2553,14 @@ void intel_lr_context_resume(struct drm_i915_private *dev_priv)
 	 * So to avoid that we reset the context images upon resume. For
 	 * simplicity, we just zero everything out.
 	 */
-	list_for_each_entry(ctx, &dev_priv->contexts.list, link) {
-		for_each_engine(engine, dev_priv, id) {
-			struct intel_context *ce = &ctx->engine[engine->id];
-			u32 *reg;
-
-			if (!ce->state)
-				continue;
+	list_for_each_entry(ctx, &i915->contexts.list, link) {
+		for_each_engine(engine, i915, id) {
+			struct intel_context *ce = &ctx->engine[id];
 
-			reg = i915_gem_object_pin_map(ce->state->obj,
-						      I915_MAP_WB);
-			if (WARN_ON(IS_ERR(reg)))
+			GEM_BUG_ON(ce->pin_count);
+			if (!ce->ring)
 				continue;
 
-			reg += LRC_STATE_PN * PAGE_SIZE / sizeof(*reg);
-			reg[CTX_RING_HEAD+1] = 0;
-			reg[CTX_RING_TAIL+1] = 0;
-
-			ce->state->obj->mm.dirty = true;
-			i915_gem_object_unpin_map(ce->state->obj);
-
 			intel_ring_reset(ce->ring, 0);
 		}
 	}
-- 
2.16.3


* [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno
  2018-03-27 21:01 [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Chris Wilson
  2018-03-27 21:01 ` [PATCH 2/3] drm/i915/execlists: Delay updating ring register state after resume Chris Wilson
@ 2018-03-27 21:01 ` Chris Wilson
  2018-03-29  8:42   ` Tvrtko Ursulin
  2018-03-27 22:38 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Patchwork
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Chris Wilson @ 2018-03-27 21:01 UTC
  To: intel-gfx

When we include a request's global_seqno in a GEM_TRACE it often helps
to know how that relates to the current breadcrumb as seen by the
hardware.
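
As a purely hypothetical illustration (the values below are invented;
the format comes from the GEM_TRACE strings in the diff), a submit
followed by a retire of the same request would now read along the
lines of:

	rcs0 fence 1e:2 -> global_seqno 104, current 102
	rcs0 fence 1e:2, global_seqno 104, current 104

making it obvious at a glance how far the hardware has advanced
relative to the request being traced.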

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_request.c | 28 +++++++++++++++++-----------
 drivers/gpu/drm/i915/intel_lrc.c    |  6 ++++--
 2 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 2314a26cd7f8..585242831974 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -214,8 +214,11 @@ static int reset_all_global_seqno(struct drm_i915_private *i915, u32 seqno)
 		struct i915_gem_timeline *timeline;
 		struct intel_timeline *tl = engine->timeline;
 
-		GEM_TRACE("%s seqno %d -> %d\n",
-			  engine->name, tl->seqno, seqno);
+		GEM_TRACE("%s seqno %d (current %d) -> %d\n",
+			  engine->name,
+			  tl->seqno,
+			  intel_engine_get_seqno(engine),
+			  seqno);
 
 		if (!i915_seqno_passed(seqno, tl->seqno)) {
 			/* Flush any waiters before we reuse the seqno */
@@ -386,10 +389,11 @@ static void i915_request_retire(struct i915_request *request)
 	struct intel_engine_cs *engine = request->engine;
 	struct i915_gem_active *active, *next;
 
-	GEM_TRACE("%s(%d) fence %llx:%d, global_seqno %d\n",
-		  engine->name, intel_engine_get_seqno(engine),
+	GEM_TRACE("%s fence %llx:%d, global_seqno %d, current %d\n",
+		  engine->name,
 		  request->fence.context, request->fence.seqno,
-		  request->global_seqno);
+		  request->global_seqno,
+		  intel_engine_get_seqno(engine));
 
 	lockdep_assert_held(&request->i915->drm.struct_mutex);
 	GEM_BUG_ON(!i915_sw_fence_signaled(&request->submit));
@@ -508,10 +512,11 @@ void __i915_request_submit(struct i915_request *request)
 	struct intel_engine_cs *engine = request->engine;
 	u32 seqno;
 
-	GEM_TRACE("%s fence %llx:%d -> global_seqno %d\n",
-		  request->engine->name,
+	GEM_TRACE("%s fence %llx:%d -> global_seqno %d, current %d\n",
+		  engine->name,
 		  request->fence.context, request->fence.seqno,
-		  engine->timeline->seqno + 1);
+		  engine->timeline->seqno + 1,
+		  intel_engine_get_seqno(engine));
 
 	GEM_BUG_ON(!irqs_disabled());
 	lockdep_assert_held(&engine->timeline->lock);
@@ -557,10 +562,11 @@ void __i915_request_unsubmit(struct i915_request *request)
 {
 	struct intel_engine_cs *engine = request->engine;
 
-	GEM_TRACE("%s fence %llx:%d <- global_seqno %d\n",
-		  request->engine->name,
+	GEM_TRACE("%s fence %llx:%d <- global_seqno %d, current %d\n",
+		  engine->name,
 		  request->fence.context, request->fence.seqno,
-		  request->global_seqno);
+		  request->global_seqno,
+		  intel_engine_get_seqno(engine));
 
 	GEM_BUG_ON(!irqs_disabled());
 	lockdep_assert_held(&engine->timeline->lock);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index ed2c833a8b20..b5235f52a81b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -454,10 +454,11 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
 			desc = execlists_update_context(rq);
 			GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
 
-			GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%x, prio=%d\n",
+			GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
 				  engine->name, n,
 				  port[n].context_id, count,
 				  rq->global_seqno,
+				  intel_engine_get_seqno(engine),
 				  rq_prio(rq));
 		} else {
 			GEM_BUG_ON(!n);
@@ -999,10 +1000,11 @@ static void execlists_submission_tasklet(unsigned long data)
 							EXECLISTS_ACTIVE_USER));
 
 			rq = port_unpack(port, &count);
-			GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%x, prio=%d\n",
+			GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
 				  engine->name,
 				  port->context_id, count,
 				  rq ? rq->global_seqno : 0,
+				  intel_engine_get_seqno(engine),
 				  rq ? rq_prio(rq) : 0);
 
 			/* Check the context/desc id for this event matches */
-- 
2.16.3


* ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts
  2018-03-27 21:01 [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Chris Wilson
  2018-03-27 21:01 ` [PATCH 2/3] drm/i915/execlists: Delay updating ring register state after resume Chris Wilson
  2018-03-27 21:01 ` [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno Chris Wilson
@ 2018-03-27 22:38 ` Patchwork
  2018-03-27 22:53 ` ✗ Fi.CI.BAT: failure " Patchwork
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2018-03-27 22:38 UTC
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts
URL   : https://patchwork.freedesktop.org/series/40764/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
f73f2b30c417 drm/i915/execlists: Reset ring registers on rebinding contexts
-:36: CHECK:SPACING: spaces preferred around that '+' (ctx:VxV)
#36: FILE: drivers/gpu/drm/i915/intel_lrc.c:1275:
+	ce->lrc_reg_state[CTX_RING_HEAD+1] = ce->ring->head;
 	                               ^

total: 0 errors, 0 warnings, 1 checks, 7 lines checked
a9473b05fca4 drm/i915/execlists: Delay updating ring register state after resume
515b83d66afc drm/i915: Include the HW breadcrumb whenever we trace the global_seqno


* ✗ Fi.CI.BAT: failure for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts
  2018-03-27 21:01 [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Chris Wilson
                   ` (2 preceding siblings ...)
  2018-03-27 22:38 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Patchwork
@ 2018-03-27 22:53 ` Patchwork
  2018-03-27 23:58 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts (rev2) Patchwork
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2018-03-27 22:53 UTC
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts
URL   : https://patchwork.freedesktop.org/series/40764/
State : failure

== Summary ==

Series 40764v1 series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts
https://patchwork.freedesktop.org/api/1.0/series/40764/revisions/1/mbox/

---- Possible new issues:

Test gem_exec_suspend:
        Subgroup basic-s3:
                pass       -> INCOMPLETE (fi-bdw-gvtdvm)
                pass       -> INCOMPLETE (fi-bsw-n3050)
                pass       -> INCOMPLETE (fi-bxt-j4205)
                pass       -> INCOMPLETE (fi-cfl-8700k)
                pass       -> INCOMPLETE (fi-cfl-s3)
                pass       -> INCOMPLETE (fi-cfl-u)
                pass       -> INCOMPLETE (fi-glk-1)
                pass       -> INCOMPLETE (fi-kbl-7500u)
                pass       -> INCOMPLETE (fi-kbl-r)

---- Known issues:

Test gem_exec_suspend:
        Subgroup basic-s3:
                pass       -> INCOMPLETE (fi-bdw-5557u) fdo#104311
                pass       -> INCOMPLETE (fi-cnl-y3) fdo#105086
                pass       -> INCOMPLETE (fi-kbl-7567u) fdo#103665
                pass       -> INCOMPLETE (fi-skl-6260u) fdo#104108 +5

fdo#104311 https://bugs.freedesktop.org/show_bug.cgi?id=104311
fdo#105086 https://bugs.freedesktop.org/show_bug.cgi?id=105086
fdo#103665 https://bugs.freedesktop.org/show_bug.cgi?id=103665
fdo#104108 https://bugs.freedesktop.org/show_bug.cgi?id=104108

fi-bdw-5557u     total:108  pass:104  dwarn:0   dfail:0   fail:0   skip:3  
fi-bdw-gvtdvm    total:108  pass:103  dwarn:0   dfail:0   fail:0   skip:4  
fi-blb-e6850     total:285  pass:220  dwarn:1   dfail:0   fail:0   skip:64  time:382s
fi-bsw-n3050     total:108  pass:95   dwarn:0   dfail:0   fail:0   skip:12 
fi-bwr-2160      total:285  pass:180  dwarn:0   dfail:0   fail:0   skip:105 time:299s
fi-bxt-j4205     total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-byt-j1900     total:285  pass:250  dwarn:0   dfail:0   fail:0   skip:35  time:529s
fi-byt-n2820     total:285  pass:246  dwarn:0   dfail:0   fail:0   skip:39  time:514s
fi-cfl-8700k     total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-cfl-s3        total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-cfl-u         total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-cnl-y3        total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-elk-e7500     total:285  pass:225  dwarn:1   dfail:0   fail:0   skip:59  time:429s
fi-gdg-551       total:285  pass:176  dwarn:0   dfail:0   fail:1   skip:108 time:321s
fi-glk-1         total:108  pass:95   dwarn:0   dfail:0   fail:0   skip:12 
fi-hsw-4770      total:285  pass:258  dwarn:0   dfail:0   fail:0   skip:27  time:404s
fi-ilk-650       total:285  pass:225  dwarn:0   dfail:0   fail:0   skip:60  time:422s
fi-ivb-3520m     total:285  pass:256  dwarn:0   dfail:0   fail:0   skip:29  time:469s
fi-ivb-3770      total:285  pass:252  dwarn:0   dfail:0   fail:0   skip:33  time:430s
fi-kbl-7500u     total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-kbl-7567u     total:108  pass:104  dwarn:0   dfail:0   fail:0   skip:3  
fi-kbl-r         total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-pnv-d510      total:285  pass:219  dwarn:1   dfail:0   fail:0   skip:65  time:658s
fi-skl-6260u     total:108  pass:104  dwarn:0   dfail:0   fail:0   skip:3  
fi-skl-6600u     total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-skl-6700k2    total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-skl-6770hq    total:108  pass:104  dwarn:0   dfail:0   fail:0   skip:3  
fi-skl-guc       total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-skl-gvtdvm    total:108  pass:103  dwarn:0   dfail:0   fail:0   skip:4  
fi-snb-2520m     total:285  pass:245  dwarn:0   dfail:0   fail:0   skip:40  time:561s
Blacklisted hosts:
fi-cnl-psr       total:108  pass:96   dwarn:0   dfail:0   fail:0   skip:11 
fi-glk-j4005 failed to collect. IGT log at Patchwork_8509/fi-glk-j4005/run0.log

0539b52e05cd0abe697d45f2a2373ec42af7ebcb drm-tip: 2018y-03m-27d-18h-45m-40s UTC integration manifest
515b83d66afc drm/i915: Include the HW breadcrumb whenever we trace the global_seqno
a9473b05fca4 drm/i915/execlists: Delay updating ring register state after resume
f73f2b30c417 drm/i915/execlists: Reset ring registers on rebinding contexts

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8509/issues.html

* Re: [PATCH 2/3] drm/i915/execlists: Delay updating ring register state after resume
  2018-03-27 21:01 ` [PATCH 2/3] drm/i915/execlists: Delay updating ring register state after resume Chris Wilson
@ 2018-03-27 23:05   ` Chris Wilson
  2018-03-27 23:10   ` [PATCH v2] " Chris Wilson
  1 sibling, 0 replies; 16+ messages in thread
From: Chris Wilson @ 2018-03-27 23:05 UTC
  To: intel-gfx

Quoting Chris Wilson (2018-03-27 22:01:56)
> Now that we reload both RING_HEAD and RING_TAIL when rebinding the
> context, we do not need to scrub those registers immediately on resume.

So CI reports that, contrary to my belief, we didn't do an
i915_gem_contexts_lost() across suspend. Hmm, nope, that's definitely
there in i915_gem_suspend, so all contexts should have been unpinned.
Oh, silly me, that doesn't account for the extra perma-pin we keep on
the kernel contexts to avoid them being evicted.
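
To sketch the consequence (toy code with invented names, not the
driver's structures): the perma-pin keeps pin_count above zero across
suspend, so the fixup done at context_pin time is never reached for the
kernel context, and resume has to scrub the image registers itself,
which is essentially what the v2 below ends up doing:

#include <stdio.h>

struct toy_ce { int pin_count; unsigned int reg_head, reg_tail; };

static void toy_resume(struct toy_ce *ce)
{
	if (ce->pin_count) { /* perma-pinned: context_pin will not run */
		ce->reg_head = 0;
		ce->reg_tail = 0;
	}
}

int main(void)
{
	struct toy_ce kctx = { .pin_count = 1, .reg_head = 64, .reg_tail = 64 };

	toy_resume(&kctx);
	printf("head=%u tail=%u\n", kctx.reg_head, kctx.reg_tail);
	return 0;
}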

Ok, that's annoying, and it has some ramifications for the first patch
if we can contrive a wedging to be injected into the kernel context.
Hmm, an outside possibility to be sure, as it would need a full device
reset at precisely the same time as an i915_gem_switch_to_kernel_context,
which is exceedingly rare. Less rare is having the kernel context be an
innocent victim (the last context on an idle engine); that's not
impacted by this, as there we know the breadcrumb has already been
emitted, so RING_HEAD will not roll back and re-emit on recovery.
-Chris

* [PATCH v2] drm/i915/execlists: Delay updating ring register state after resume
  2018-03-27 21:01 ` [PATCH 2/3] drm/i915/execlists: Delay updating ring register state after resume Chris Wilson
  2018-03-27 23:05   ` Chris Wilson
@ 2018-03-27 23:10   ` Chris Wilson
  2018-03-28 13:40     ` Mika Kuoppala
  1 sibling, 1 reply; 16+ messages in thread
From: Chris Wilson @ 2018-03-27 23:10 UTC
  To: intel-gfx

Now that we reload both RING_HEAD and RING_TAIL when rebinding the
context, we do not need to scrub those registers immediately on resume.

v2: Handle the perma-pinned contexts.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 31 ++++++++++++-------------------
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 654634254b64..2bf5128efb26 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2536,13 +2536,14 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
 	return ret;
 }
 
-void intel_lr_context_resume(struct drm_i915_private *dev_priv)
+void intel_lr_context_resume(struct drm_i915_private *i915)
 {
 	struct intel_engine_cs *engine;
 	struct i915_gem_context *ctx;
 	enum intel_engine_id id;
 
-	/* Because we emit WA_TAIL_DWORDS there may be a disparity
+	/*
+	 * Because we emit WA_TAIL_DWORDS there may be a disparity
 	 * between our bookkeeping in ce->ring->head and ce->ring->tail and
 	 * that stored in context. As we only write new commands from
 	 * ce->ring->tail onwards, everything before that is junk. If the GPU
@@ -2552,27 +2553,19 @@ void intel_lr_context_resume(struct drm_i915_private *dev_priv)
 	 * So to avoid that we reset the context images upon resume. For
 	 * simplicity, we just zero everything out.
 	 */
-	list_for_each_entry(ctx, &dev_priv->contexts.list, link) {
-		for_each_engine(engine, dev_priv, id) {
-			struct intel_context *ce = &ctx->engine[engine->id];
-			u32 *reg;
-
-			if (!ce->state)
-				continue;
+	list_for_each_entry(ctx, &i915->contexts.list, link) {
+		for_each_engine(engine, i915, id) {
+			struct intel_context *ce = &ctx->engine[id];
 
-			reg = i915_gem_object_pin_map(ce->state->obj,
-						      I915_MAP_WB);
-			if (WARN_ON(IS_ERR(reg)))
+			if (!ce->ring)
 				continue;
 
-			reg += LRC_STATE_PN * PAGE_SIZE / sizeof(*reg);
-			reg[CTX_RING_HEAD+1] = 0;
-			reg[CTX_RING_TAIL+1] = 0;
-
-			ce->state->obj->mm.dirty = true;
-			i915_gem_object_unpin_map(ce->state->obj);
-
 			intel_ring_reset(ce->ring, 0);
+
+			if (ce->pin_count) { /* otherwise done in context_pin */
+				ce->lrc_reg_state[CTX_RING_HEAD+1] = 0;
+				ce->lrc_reg_state[CTX_RING_TAIL+1] = 0;
+			}
 		}
 	}
 }
-- 
2.16.3


* ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts (rev2)
  2018-03-27 21:01 [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Chris Wilson
                   ` (3 preceding siblings ...)
  2018-03-27 22:53 ` ✗ Fi.CI.BAT: failure " Patchwork
@ 2018-03-27 23:58 ` Patchwork
  2018-03-28  0:13 ` ✓ Fi.CI.BAT: success " Patchwork
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2018-03-27 23:58 UTC
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts (rev2)
URL   : https://patchwork.freedesktop.org/series/40764/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
ef3a65fcec9e drm/i915/execlists: Reset ring registers on rebinding contexts
-:36: CHECK:SPACING: spaces preferred around that '+' (ctx:VxV)
#36: FILE: drivers/gpu/drm/i915/intel_lrc.c:1275:
+	ce->lrc_reg_state[CTX_RING_HEAD+1] = ce->ring->head;
 	                               ^

total: 0 errors, 0 warnings, 1 checks, 7 lines checked
46afadd21cba drm/i915/execlists: Delay updating ring register state after resume
-:68: CHECK:SPACING: spaces preferred around that '+' (ctx:VxV)
#68: FILE: drivers/gpu/drm/i915/intel_lrc.c:2566:
+				ce->lrc_reg_state[CTX_RING_HEAD+1] = 0;
 				                               ^

-:69: CHECK:SPACING: spaces preferred around that '+' (ctx:VxV)
#69: FILE: drivers/gpu/drm/i915/intel_lrc.c:2567:
+				ce->lrc_reg_state[CTX_RING_TAIL+1] = 0;
 				                               ^

total: 0 errors, 0 warnings, 2 checks, 52 lines checked
f0a06fa9b3f8 drm/i915: Include the HW breadcrumb whenever we trace the global_seqno


* ✓ Fi.CI.BAT: success for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts (rev2)
  2018-03-27 21:01 [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Chris Wilson
                   ` (4 preceding siblings ...)
  2018-03-27 23:58 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts (rev2) Patchwork
@ 2018-03-28  0:13 ` Patchwork
  2018-03-28 10:12 ` ✗ Fi.CI.IGT: failure " Patchwork
  2018-03-28 11:31 ` [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Mika Kuoppala
  7 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2018-03-28  0:13 UTC
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts (rev2)
URL   : https://patchwork.freedesktop.org/series/40764/
State : success

== Summary ==

Series 40764v2 series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts
https://patchwork.freedesktop.org/api/1.0/series/40764/revisions/2/mbox/

---- Known issues:

Test gem_mmap_gtt:
        Subgroup basic-small-bo-tiledx:
                fail       -> PASS       (fi-gdg-551) fdo#102575
Test kms_flip:
        Subgroup basic-flip-vs-wf_vblank:
                pass       -> FAIL       (fi-cfl-s3) fdo#100368

fdo#102575 https://bugs.freedesktop.org/show_bug.cgi?id=102575
fdo#100368 https://bugs.freedesktop.org/show_bug.cgi?id=100368

fi-bdw-5557u     total:285  pass:264  dwarn:0   dfail:0   fail:0   skip:21  time:436s
fi-bdw-gvtdvm    total:285  pass:261  dwarn:0   dfail:0   fail:0   skip:24  time:443s
fi-blb-e6850     total:285  pass:220  dwarn:1   dfail:0   fail:0   skip:64  time:380s
fi-bsw-n3050     total:285  pass:239  dwarn:0   dfail:0   fail:0   skip:46  time:544s
fi-bwr-2160      total:285  pass:180  dwarn:0   dfail:0   fail:0   skip:105 time:304s
fi-bxt-j4205     total:285  pass:256  dwarn:0   dfail:0   fail:0   skip:29  time:515s
fi-byt-j1900     total:285  pass:250  dwarn:0   dfail:0   fail:0   skip:35  time:524s
fi-byt-n2820     total:285  pass:246  dwarn:0   dfail:0   fail:0   skip:39  time:510s
fi-cfl-8700k     total:285  pass:257  dwarn:0   dfail:0   fail:0   skip:28  time:410s
fi-cfl-s3        total:285  pass:258  dwarn:0   dfail:0   fail:1   skip:26  time:561s
fi-cfl-u         total:285  pass:259  dwarn:0   dfail:0   fail:0   skip:26  time:517s
fi-cnl-y3        total:285  pass:259  dwarn:0   dfail:0   fail:0   skip:26  time:594s
fi-elk-e7500     total:285  pass:225  dwarn:1   dfail:0   fail:0   skip:59  time:429s
fi-gdg-551       total:285  pass:177  dwarn:0   dfail:0   fail:0   skip:108 time:321s
fi-glk-1         total:285  pass:257  dwarn:0   dfail:0   fail:0   skip:28  time:538s
fi-hsw-4770      total:285  pass:258  dwarn:0   dfail:0   fail:0   skip:27  time:405s
fi-ilk-650       total:285  pass:225  dwarn:0   dfail:0   fail:0   skip:60  time:425s
fi-ivb-3520m     total:285  pass:256  dwarn:0   dfail:0   fail:0   skip:29  time:476s
fi-ivb-3770      total:285  pass:252  dwarn:0   dfail:0   fail:0   skip:33  time:432s
fi-kbl-7500u     total:285  pass:260  dwarn:1   dfail:0   fail:0   skip:24  time:477s
fi-kbl-7567u     total:285  pass:265  dwarn:0   dfail:0   fail:0   skip:20  time:471s
fi-kbl-r         total:285  pass:258  dwarn:0   dfail:0   fail:0   skip:27  time:513s
fi-pnv-d510      total:285  pass:219  dwarn:1   dfail:0   fail:0   skip:65  time:662s
fi-skl-6260u     total:285  pass:265  dwarn:0   dfail:0   fail:0   skip:20  time:441s
fi-skl-6600u     total:285  pass:258  dwarn:0   dfail:0   fail:0   skip:27  time:533s
fi-skl-6700k2    total:285  pass:261  dwarn:0   dfail:0   fail:0   skip:24  time:503s
fi-skl-6770hq    total:285  pass:265  dwarn:0   dfail:0   fail:0   skip:20  time:498s
fi-skl-guc       total:285  pass:257  dwarn:0   dfail:0   fail:0   skip:28  time:430s
fi-skl-gvtdvm    total:285  pass:262  dwarn:0   dfail:0   fail:0   skip:23  time:444s
fi-snb-2520m     total:285  pass:245  dwarn:0   dfail:0   fail:0   skip:40  time:591s
Blacklisted hosts:
fi-cnl-psr       total:285  pass:256  dwarn:3   dfail:0   fail:0   skip:26  time:523s
fi-glk-j4005     total:285  pass:256  dwarn:0   dfail:0   fail:0   skip:29  time:486s

0539b52e05cd0abe697d45f2a2373ec42af7ebcb drm-tip: 2018y-03m-27d-18h-45m-40s UTC integration manifest
f0a06fa9b3f8 drm/i915: Include the HW breadcrumb whenever we trace the global_seqno
46afadd21cba drm/i915/execlists: Delay updating ring register state after resume
ef3a65fcec9e drm/i915/execlists: Reset ring registers on rebinding contexts

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8513/issues.html

* ✗ Fi.CI.IGT: failure for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts (rev2)
  2018-03-27 21:01 [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Chris Wilson
                   ` (5 preceding siblings ...)
  2018-03-28  0:13 ` ✓ Fi.CI.BAT: success " Patchwork
@ 2018-03-28 10:12 ` Patchwork
  2018-03-28 11:31 ` [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Mika Kuoppala
  7 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2018-03-28 10:12 UTC
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts (rev2)
URL   : https://patchwork.freedesktop.org/series/40764/
State : failure

== Summary ==

---- Possible new issues:

Test kms_flip:
        Subgroup flip-vs-blocking-wf-vblank:
                pass       -> FAIL       (shard-hsw)

---- Known issues:

Test kms_cursor_crc:
        Subgroup cursor-128x128-suspend:
                dmesg-warn -> PASS       (shard-snb) fdo#102365
Test kms_flip:
        Subgroup 2x-dpms-vs-vblank-race:
                pass       -> FAIL       (shard-hsw) fdo#103060 +1
        Subgroup 2x-flip-vs-absolute-wf_vblank-interruptible:
                pass       -> FAIL       (shard-hsw) fdo#100368 +1
        Subgroup 2x-flip-vs-expired-vblank-interruptible:
                fail       -> PASS       (shard-hsw) fdo#102887 +2
Test kms_plane_multiple:
        Subgroup atomic-pipe-a-tiling-x:
                pass       -> FAIL       (shard-snb) fdo#103166
Test kms_vblank:
        Subgroup pipe-b-accuracy-idle:
                pass       -> FAIL       (shard-hsw) fdo#102583
Test perf:
        Subgroup polling:
                fail       -> PASS       (shard-hsw) fdo#102252

fdo#102365 https://bugs.freedesktop.org/show_bug.cgi?id=102365
fdo#103060 https://bugs.freedesktop.org/show_bug.cgi?id=103060
fdo#100368 https://bugs.freedesktop.org/show_bug.cgi?id=100368
fdo#102887 https://bugs.freedesktop.org/show_bug.cgi?id=102887
fdo#103166 https://bugs.freedesktop.org/show_bug.cgi?id=103166
fdo#102583 https://bugs.freedesktop.org/show_bug.cgi?id=102583
fdo#102252 https://bugs.freedesktop.org/show_bug.cgi?id=102252

shard-apl        total:3495 pass:1831 dwarn:1   dfail:0   fail:7   skip:1655 time:12914s
shard-hsw        total:3495 pass:1777 dwarn:1   dfail:0   fail:7   skip:1709 time:11595s
shard-snb        total:3495 pass:1373 dwarn:1   dfail:0   fail:4   skip:2117 time:6999s

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8513/shards.html

* Re: [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts
  2018-03-27 21:01 [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Chris Wilson
                   ` (6 preceding siblings ...)
  2018-03-28 10:12 ` ✗ Fi.CI.IGT: failure " Patchwork
@ 2018-03-28 11:31 ` Mika Kuoppala
  7 siblings, 0 replies; 16+ messages in thread
From: Mika Kuoppala @ 2018-03-28 11:31 UTC
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Tvrtko uncovered a fun issue with recovering from a wedged device. In his
> tests, he wedged the driver by injecting an unrecoverable hang whilst a
> batch was spinning. As we reset the GPU in the middle of the spinner,
> when resumed it would continue on from the next instruction in the ring
> and write its breadcrumb. However, on wedging we updated our
> bookkeeping to indicate that the GPU had completed executing and would
> restart from after the breadcrumb; so the emission of the stale
> breadcrumb from before the reset came as a bit of a surprise.
>
> A simple fix is, when rebinding the context into the GPU, to update
> the ring register state in the context image to match our bookkeeping.
> We already have to update RING_START and RING_TAIL, so updating
> RING_HEAD as well is trivial. This works because whenever we unbind the
> context, we keep the bookkeeping in check; and on wedging we unbind all
> contexts.

s/wedging/unwedging. The context-lost markup is on the unwedge side,
though it should not matter at which stage the unbind happens between
wedge and unwedge. So with this minor change to the commit message,

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

>
> Testcase: igt/gem_eio
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_lrc.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index ba7f7831f934..654634254b64 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -1272,6 +1272,7 @@ execlists_context_pin(struct intel_engine_cs *engine,
>  	ce->lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
>  	ce->lrc_reg_state[CTX_RING_BUFFER_START+1] =
>  		i915_ggtt_offset(ce->ring->vma);
> +	ce->lrc_reg_state[CTX_RING_HEAD+1] = ce->ring->head;
>  
>  	ce->state->obj->pin_global++;
>  	i915_gem_context_get(ctx);
> -- 
> 2.16.3

* Re: [PATCH v2] drm/i915/execlists: Delay updating ring register state after resume
  2018-03-27 23:10   ` [PATCH v2] " Chris Wilson
@ 2018-03-28 13:40     ` Mika Kuoppala
  0 siblings, 0 replies; 16+ messages in thread
From: Mika Kuoppala @ 2018-03-28 13:40 UTC
  To: Chris Wilson, intel-gfx

Chris Wilson <chris@chris-wilson.co.uk> writes:

> Now that we reload both RING_HEAD and RING_TAIL when rebinding the
> context, we do not need to scrub those registers immediately on resume.
>
> v2: Handle the perma-pinned contexts.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_lrc.c | 31 ++++++++++++-------------------
>  1 file changed, 12 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 654634254b64..2bf5128efb26 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -2536,13 +2536,14 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
>  	return ret;
>  }
>  
> -void intel_lr_context_resume(struct drm_i915_private *dev_priv)
> +void intel_lr_context_resume(struct drm_i915_private *i915)
>  {
>  	struct intel_engine_cs *engine;
>  	struct i915_gem_context *ctx;
>  	enum intel_engine_id id;
>  
> -	/* Because we emit WA_TAIL_DWORDS there may be a disparity
> +	/*
> +	 * Because we emit WA_TAIL_DWORDS there may be a disparity
>  	 * between our bookkeeping in ce->ring->head and ce->ring->tail and
>  	 * that stored in context. As we only write new commands from
>  	 * ce->ring->tail onwards, everything before that is junk. If the GPU
> @@ -2552,27 +2553,19 @@ void intel_lr_context_resume(struct drm_i915_private *dev_priv)
>  	 * So to avoid that we reset the context images upon resume. For
>  	 * simplicity, we just zero everything out.
>  	 */
> -	list_for_each_entry(ctx, &dev_priv->contexts.list, link) {
> -		for_each_engine(engine, dev_priv, id) {
> -			struct intel_context *ce = &ctx->engine[engine->id];
> -			u32 *reg;
> -
> -			if (!ce->state)
> -				continue;
> +	list_for_each_entry(ctx, &i915->contexts.list, link) {
> +		for_each_engine(engine, i915, id) {
> +			struct intel_context *ce = &ctx->engine[id];
>  
> -			reg = i915_gem_object_pin_map(ce->state->obj,
> -						      I915_MAP_WB);
> -			if (WARN_ON(IS_ERR(reg)))
> +			if (!ce->ring)
>  				continue;
>  
> -			reg += LRC_STATE_PN * PAGE_SIZE / sizeof(*reg);
> -			reg[CTX_RING_HEAD+1] = 0;
> -			reg[CTX_RING_TAIL+1] = 0;
> -
> -			ce->state->obj->mm.dirty = true;
> -			i915_gem_object_unpin_map(ce->state->obj);
> -
>  			intel_ring_reset(ce->ring, 0);
> +
> +			if (ce->pin_count) { /* otherwise done in context_pin */

From my understanding this is for the kernel context only, so the
comment should mention the kernel context.

Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>

> +				ce->lrc_reg_state[CTX_RING_HEAD+1] = 0;
> +				ce->lrc_reg_state[CTX_RING_TAIL+1] = 0;
> +			}
>  		}
>  	}
>  }
> -- 
> 2.16.3

* Re: [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno
  2018-03-27 21:01 ` [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno Chris Wilson
@ 2018-03-29  8:42   ` Tvrtko Ursulin
  2018-03-29  8:55     ` Chris Wilson
  0 siblings, 1 reply; 16+ messages in thread
From: Tvrtko Ursulin @ 2018-03-29  8:42 UTC
  To: Chris Wilson, intel-gfx


On 27/03/2018 22:01, Chris Wilson wrote:
> When we include a request's global_seqno in a GEM_TRACE it often helps
> to know how that relates to the current breadcrumb as seen by the
> hardware.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/i915_request.c | 28 +++++++++++++++++-----------
>   drivers/gpu/drm/i915/intel_lrc.c    |  6 ++++--
>   2 files changed, 21 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 2314a26cd7f8..585242831974 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -214,8 +214,11 @@ static int reset_all_global_seqno(struct drm_i915_private *i915, u32 seqno)
>   		struct i915_gem_timeline *timeline;
>   		struct intel_timeline *tl = engine->timeline;
>   
> -		GEM_TRACE("%s seqno %d -> %d\n",
> -			  engine->name, tl->seqno, seqno);
> +		GEM_TRACE("%s seqno %d (current %d) -> %d\n",
> +			  engine->name,
> +			  tl->seqno,
> +			  intel_engine_get_seqno(engine),
> +			  seqno);
>   
>   		if (!i915_seqno_passed(seqno, tl->seqno)) {
>   			/* Flush any waiters before we reuse the seqno */
> @@ -386,10 +389,11 @@ static void i915_request_retire(struct i915_request *request)
>   	struct intel_engine_cs *engine = request->engine;
>   	struct i915_gem_active *active, *next;
>   
> -	GEM_TRACE("%s(%d) fence %llx:%d, global_seqno %d\n",
> -		  engine->name, intel_engine_get_seqno(engine),
> +	GEM_TRACE("%s fence %llx:%d, global_seqno %d, current %d\n",
> +		  engine->name,
>   		  request->fence.context, request->fence.seqno,
> -		  request->global_seqno);
> +		  request->global_seqno,
> +		  intel_engine_get_seqno(engine));
>   
>   	lockdep_assert_held(&request->i915->drm.struct_mutex);
>   	GEM_BUG_ON(!i915_sw_fence_signaled(&request->submit));
> @@ -508,10 +512,11 @@ void __i915_request_submit(struct i915_request *request)
>   	struct intel_engine_cs *engine = request->engine;
>   	u32 seqno;
>   
> -	GEM_TRACE("%s fence %llx:%d -> global_seqno %d\n",
> -		  request->engine->name,
> +	GEM_TRACE("%s fence %llx:%d -> global_seqno %d, current %d\n",
> +		  engine->name,
>   		  request->fence.context, request->fence.seqno,
> -		  engine->timeline->seqno + 1);
> +		  engine->timeline->seqno + 1,
> +		  intel_engine_get_seqno(engine));
>   
>   	GEM_BUG_ON(!irqs_disabled());
>   	lockdep_assert_held(&engine->timeline->lock);
> @@ -557,10 +562,11 @@ void __i915_request_unsubmit(struct i915_request *request)
>   {
>   	struct intel_engine_cs *engine = request->engine;
>   
> -	GEM_TRACE("%s fence %llx:%d <- global_seqno %d\n",
> -		  request->engine->name,
> +	GEM_TRACE("%s fence %llx:%d <- global_seqno %d, current %d\n",
> +		  engine->name,
>   		  request->fence.context, request->fence.seqno,
> -		  request->global_seqno);
> +		  request->global_seqno,
> +		  intel_engine_get_seqno(engine));
>   
>   	GEM_BUG_ON(!irqs_disabled());
>   	lockdep_assert_held(&engine->timeline->lock);
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index ed2c833a8b20..b5235f52a81b 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -454,10 +454,11 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
>   			desc = execlists_update_context(rq);
>   			GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
>   
> -			GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%x, prio=%d\n",
> +			GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
>   				  engine->name, n,
>   				  port[n].context_id, count,
>   				  rq->global_seqno,
> +				  intel_engine_get_seqno(engine),
>   				  rq_prio(rq));
>   		} else {
>   			GEM_BUG_ON(!n);
> @@ -999,10 +1000,11 @@ static void execlists_submission_tasklet(unsigned long data)
>   							EXECLISTS_ACTIVE_USER));
>   
>   			rq = port_unpack(port, &count);
> -			GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%x, prio=%d\n",
> +			GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
>   				  engine->name,
>   				  port->context_id, count,
>   				  rq ? rq->global_seqno : 0,
> +				  intel_engine_get_seqno(engine),
>   				  rq ? rq_prio(rq) : 0);
>   
>   			/* Check the context/desc id for this event matches */
> 

The only thing I am not sure about is whether the HWS seqno is interesting
in the two above. But if you think it is, it doesn't do much harm.

Also, in these two I had fence context:seqno instead, to correlate easily
with (un)submit & co. That way, when I select fence:context in an editor,
it nicely highlights the whole flow relating to this request. Anyway:

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno
  2018-03-29  8:42   ` Tvrtko Ursulin
@ 2018-03-29  8:55     ` Chris Wilson
  2018-03-29  9:43       ` Tvrtko Ursulin
  0 siblings, 1 reply; 16+ messages in thread
From: Chris Wilson @ 2018-03-29  8:55 UTC
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2018-03-29 09:42:52)
> 
> On 27/03/2018 22:01, Chris Wilson wrote:
> > When we include a request's global_seqno in a GEM_TRACE it often helps
> > to know how that relates to the current breadcrumb as seen by the
> > hardware.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >   drivers/gpu/drm/i915/i915_request.c | 28 +++++++++++++++++-----------
> >   drivers/gpu/drm/i915/intel_lrc.c    |  6 ++++--
> >   2 files changed, 21 insertions(+), 13 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> > index 2314a26cd7f8..585242831974 100644
> > --- a/drivers/gpu/drm/i915/i915_request.c
> > +++ b/drivers/gpu/drm/i915/i915_request.c
> > @@ -214,8 +214,11 @@ static int reset_all_global_seqno(struct drm_i915_private *i915, u32 seqno)
> >               struct i915_gem_timeline *timeline;
> >               struct intel_timeline *tl = engine->timeline;
> >   
> > -             GEM_TRACE("%s seqno %d -> %d\n",
> > -                       engine->name, tl->seqno, seqno);
> > +             GEM_TRACE("%s seqno %d (current %d) -> %d\n",
> > +                       engine->name,
> > +                       tl->seqno,
> > +                       intel_engine_get_seqno(engine),
> > +                       seqno);
> >   
> >               if (!i915_seqno_passed(seqno, tl->seqno)) {
> >                       /* Flush any waiters before we reuse the seqno */
> > @@ -386,10 +389,11 @@ static void i915_request_retire(struct i915_request *request)
> >       struct intel_engine_cs *engine = request->engine;
> >       struct i915_gem_active *active, *next;
> >   
> > -     GEM_TRACE("%s(%d) fence %llx:%d, global_seqno %d\n",
> > -               engine->name, intel_engine_get_seqno(engine),
> > +     GEM_TRACE("%s fence %llx:%d, global_seqno %d, current %d\n",
> > +               engine->name,
> >                 request->fence.context, request->fence.seqno,
> > -               request->global_seqno);
> > +               request->global_seqno,
> > +               intel_engine_get_seqno(engine));
> >   
> >       lockdep_assert_held(&request->i915->drm.struct_mutex);
> >       GEM_BUG_ON(!i915_sw_fence_signaled(&request->submit));
> > @@ -508,10 +512,11 @@ void __i915_request_submit(struct i915_request *request)
> >       struct intel_engine_cs *engine = request->engine;
> >       u32 seqno;
> >   
> > -     GEM_TRACE("%s fence %llx:%d -> global_seqno %d\n",
> > -               request->engine->name,
> > +     GEM_TRACE("%s fence %llx:%d -> global_seqno %d, current %d\n",
> > +               engine->name,
> >                 request->fence.context, request->fence.seqno,
> > -               engine->timeline->seqno + 1);
> > +               engine->timeline->seqno + 1,
> > +               intel_engine_get_seqno(engine));
> >   
> >       GEM_BUG_ON(!irqs_disabled());
> >       lockdep_assert_held(&engine->timeline->lock);
> > @@ -557,10 +562,11 @@ void __i915_request_unsubmit(struct i915_request *request)
> >   {
> >       struct intel_engine_cs *engine = request->engine;
> >   
> > -     GEM_TRACE("%s fence %llx:%d <- global_seqno %d\n",
> > -               request->engine->name,
> > +     GEM_TRACE("%s fence %llx:%d <- global_seqno %d, current %d\n",
> > +               engine->name,
> >                 request->fence.context, request->fence.seqno,
> > -               request->global_seqno);
> > +               request->global_seqno,
> > +               intel_engine_get_seqno(engine));
> >   
> >       GEM_BUG_ON(!irqs_disabled());
> >       lockdep_assert_held(&engine->timeline->lock);
> > diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> > index ed2c833a8b20..b5235f52a81b 100644
> > --- a/drivers/gpu/drm/i915/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/intel_lrc.c
> > @@ -454,10 +454,11 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
> >                       desc = execlists_update_context(rq);
> >                       GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
> >   
> > -                     GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%x, prio=%d\n",
> > +                     GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
> >                                 engine->name, n,
> >                                 port[n].context_id, count,
> >                                 rq->global_seqno,
> > +                               intel_engine_get_seqno(engine),
> >                                 rq_prio(rq));
> >               } else {
> >                       GEM_BUG_ON(!n);
> > @@ -999,10 +1000,11 @@ static void execlists_submission_tasklet(unsigned long data)
> >                                                       EXECLISTS_ACTIVE_USER));
> >   
> >                       rq = port_unpack(port, &count);
> > -                     GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%x, prio=%d\n",
> > +                     GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
> >                                 engine->name,
> >                                 port->context_id, count,
> >                                 rq ? rq->global_seqno : 0,
> > +                               intel_engine_get_seqno(engine),
> >                                 rq ? rq_prio(rq) : 0);
> >   
> >                       /* Check the context/desc id for this event matches */
> > 
> 
> The only thing I am not sure about is whether the HWS seqno is interesting
> in the two above. But if you think it is, it doesn't do much harm.

I was thinking of consistency: anywhere I had a global_seqno in the trace,
we would also have the current breadcrumb.
 
> Also, in these two I had fence context:seqno instead, to correlate easily
> with (un)submit & co. That way, when I select fence:context in an editor,
> it nicely highlights the whole flow relating to this request. Anyway:

Right, which I think was a very useful improvement.
 
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Do you want to take this patch into yours, or apply this and then fix up
yours?
-Chris

* Re: [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno
  2018-03-29  8:55     ` Chris Wilson
@ 2018-03-29  9:43       ` Tvrtko Ursulin
  2018-03-29 11:22         ` Chris Wilson
  0 siblings, 1 reply; 16+ messages in thread
From: Tvrtko Ursulin @ 2018-03-29  9:43 UTC
  To: Chris Wilson, intel-gfx


On 29/03/2018 09:55, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-03-29 09:42:52)
>>
>> On 27/03/2018 22:01, Chris Wilson wrote:
>>> When we include a request's global_seqno in a GEM_TRACE it often helps
>>> to know how that relates to the current breadcrumb as seen by the
>>> hardware.
>>>
>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>> ---
>>>    drivers/gpu/drm/i915/i915_request.c | 28 +++++++++++++++++-----------
>>>    drivers/gpu/drm/i915/intel_lrc.c    |  6 ++++--
>>>    2 files changed, 21 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>> index 2314a26cd7f8..585242831974 100644
>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>> @@ -214,8 +214,11 @@ static int reset_all_global_seqno(struct drm_i915_private *i915, u32 seqno)
>>>                struct i915_gem_timeline *timeline;
>>>                struct intel_timeline *tl = engine->timeline;
>>>    
>>> -             GEM_TRACE("%s seqno %d -> %d\n",
>>> -                       engine->name, tl->seqno, seqno);
>>> +             GEM_TRACE("%s seqno %d (current %d) -> %d\n",
>>> +                       engine->name,
>>> +                       tl->seqno,
>>> +                       intel_engine_get_seqno(engine),
>>> +                       seqno);
>>>    
>>>                if (!i915_seqno_passed(seqno, tl->seqno)) {
>>>                        /* Flush any waiters before we reuse the seqno */
>>> @@ -386,10 +389,11 @@ static void i915_request_retire(struct i915_request *request)
>>>        struct intel_engine_cs *engine = request->engine;
>>>        struct i915_gem_active *active, *next;
>>>    
>>> -     GEM_TRACE("%s(%d) fence %llx:%d, global_seqno %d\n",
>>> -               engine->name, intel_engine_get_seqno(engine),
>>> +     GEM_TRACE("%s fence %llx:%d, global_seqno %d, current %d\n",
>>> +               engine->name,
>>>                  request->fence.context, request->fence.seqno,
>>> -               request->global_seqno);
>>> +               request->global_seqno,
>>> +               intel_engine_get_seqno(engine));
>>>    
>>>        lockdep_assert_held(&request->i915->drm.struct_mutex);
>>>        GEM_BUG_ON(!i915_sw_fence_signaled(&request->submit));
>>> @@ -508,10 +512,11 @@ void __i915_request_submit(struct i915_request *request)
>>>        struct intel_engine_cs *engine = request->engine;
>>>        u32 seqno;
>>>    
>>> -     GEM_TRACE("%s fence %llx:%d -> global_seqno %d\n",
>>> -               request->engine->name,
>>> +     GEM_TRACE("%s fence %llx:%d -> global_seqno %d, current %d\n",
>>> +               engine->name,
>>>                  request->fence.context, request->fence.seqno,
>>> -               engine->timeline->seqno + 1);
>>> +               engine->timeline->seqno + 1,
>>> +               intel_engine_get_seqno(engine));
>>>    
>>>        GEM_BUG_ON(!irqs_disabled());
>>>        lockdep_assert_held(&engine->timeline->lock);
>>> @@ -557,10 +562,11 @@ void __i915_request_unsubmit(struct i915_request *request)
>>>    {
>>>        struct intel_engine_cs *engine = request->engine;
>>>    
>>> -     GEM_TRACE("%s fence %llx:%d <- global_seqno %d\n",
>>> -               request->engine->name,
>>> +     GEM_TRACE("%s fence %llx:%d <- global_seqno %d, current %d\n",
>>> +               engine->name,
>>>                  request->fence.context, request->fence.seqno,
>>> -               request->global_seqno);
>>> +               request->global_seqno,
>>> +               intel_engine_get_seqno(engine));
>>>    
>>>        GEM_BUG_ON(!irqs_disabled());
>>>        lockdep_assert_held(&engine->timeline->lock);
>>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
>>> index ed2c833a8b20..b5235f52a81b 100644
>>> --- a/drivers/gpu/drm/i915/intel_lrc.c
>>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
>>> @@ -454,10 +454,11 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
>>>                        desc = execlists_update_context(rq);
>>>                        GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
>>>    
>>> -                     GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%x, prio=%d\n",
>>> +                     GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
>>>                                  engine->name, n,
>>>                                  port[n].context_id, count,
>>>                                  rq->global_seqno,
>>> +                               intel_engine_get_seqno(engine),
>>>                                  rq_prio(rq));
>>>                } else {
>>>                        GEM_BUG_ON(!n);
>>> @@ -999,10 +1000,11 @@ static void execlists_submission_tasklet(unsigned long data)
>>>                                                        EXECLISTS_ACTIVE_USER));
>>>    
>>>                        rq = port_unpack(port, &count);
>>> -                     GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%x, prio=%d\n",
>>> +                     GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
>>>                                  engine->name,
>>>                                  port->context_id, count,
>>>                                  rq ? rq->global_seqno : 0,
>>> +                               intel_engine_get_seqno(engine),
>>>                                  rq ? rq_prio(rq) : 0);
>>>    
>>>                        /* Check the context/desc id for this event matches */
>>>
>>
>> The only thing I am not sure about is whether the HWS seqno is interesting
>> in the two above. But if you think it is, it doesn't do much harm.
> 
> I was thinking of consistency: anywhere I had a global_seqno in the trace,
> we would also have the current breadcrumb.
>   
>> Also, in these two I had fence context:seqno instead, to correlate easily
>> with (un)submit & co. That way, when I select fence:context in an editor,
>> it nicely highlights the whole flow relating to this request. Anyway:
> 
> Right, which I think was a very useful improvement.
>   
>> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Do you want to take this patch into yours or apply this and then fixup
> yours?

You can go with this one and I'll send a follow up at some point.

Regards,

Tvrtko

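[A sketch of what the new trace format buys in practice. The format
strings are the ones added by the patch quoted above; the engine,
context and seqno values below are made up for illustration. A single
request's life might then read:

  rcs0 fence 1e:2 -> global_seqno 107, current 105
  rcs0 in[0]:  ctx=5.1, seqno=107 (current 105), prio=0
  rcs0 out[0]: ctx=5.1, seqno=107 (current 107), prio=0
  rcs0 fence 1e:2, global_seqno 107, current 107

Here "current" is the breadcrumb read back from the hardware status
page via intel_engine_get_seqno(), so a stale breadcrumb emitted after
a reset shows up directly as "current" disagreeing with the driver's
bookkeeping.]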

* Re: [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno
  2018-03-29  9:43       ` Tvrtko Ursulin
@ 2018-03-29 11:22         ` Chris Wilson
  0 siblings, 0 replies; 16+ messages in thread
From: Chris Wilson @ 2018-03-29 11:22 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2018-03-29 10:43:11)
> 
> On 29/03/2018 09:55, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-03-29 09:42:52)
> >>
> >> On 27/03/2018 22:01, Chris Wilson wrote:
> >>> When we include a request's global_seqno in a GEM_TRACE it often helps
> >>> to know how that relates to the current breadcrumb as seen by the
> >>> hardware.
> >>>
> >>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >>> ---
> >>>    drivers/gpu/drm/i915/i915_request.c | 28 +++++++++++++++++-----------
> >>>    drivers/gpu/drm/i915/intel_lrc.c    |  6 ++++--
> >>>    2 files changed, 21 insertions(+), 13 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> >>> index 2314a26cd7f8..585242831974 100644
> >>> --- a/drivers/gpu/drm/i915/i915_request.c
> >>> +++ b/drivers/gpu/drm/i915/i915_request.c
> >>> @@ -214,8 +214,11 @@ static int reset_all_global_seqno(struct drm_i915_private *i915, u32 seqno)
> >>>                struct i915_gem_timeline *timeline;
> >>>                struct intel_timeline *tl = engine->timeline;
> >>>    
> >>> -             GEM_TRACE("%s seqno %d -> %d\n",
> >>> -                       engine->name, tl->seqno, seqno);
> >>> +             GEM_TRACE("%s seqno %d (current %d) -> %d\n",
> >>> +                       engine->name,
> >>> +                       tl->seqno,
> >>> +                       intel_engine_get_seqno(engine),
> >>> +                       seqno);
> >>>    
> >>>                if (!i915_seqno_passed(seqno, tl->seqno)) {
> >>>                        /* Flush any waiters before we reuse the seqno */
> >>> @@ -386,10 +389,11 @@ static void i915_request_retire(struct i915_request *request)
> >>>        struct intel_engine_cs *engine = request->engine;
> >>>        struct i915_gem_active *active, *next;
> >>>    
> >>> -     GEM_TRACE("%s(%d) fence %llx:%d, global_seqno %d\n",
> >>> -               engine->name, intel_engine_get_seqno(engine),
> >>> +     GEM_TRACE("%s fence %llx:%d, global_seqno %d, current %d\n",
> >>> +               engine->name,
> >>>                  request->fence.context, request->fence.seqno,
> >>> -               request->global_seqno);
> >>> +               request->global_seqno,
> >>> +               intel_engine_get_seqno(engine));
> >>>    
> >>>        lockdep_assert_held(&request->i915->drm.struct_mutex);
> >>>        GEM_BUG_ON(!i915_sw_fence_signaled(&request->submit));
> >>> @@ -508,10 +512,11 @@ void __i915_request_submit(struct i915_request *request)
> >>>        struct intel_engine_cs *engine = request->engine;
> >>>        u32 seqno;
> >>>    
> >>> -     GEM_TRACE("%s fence %llx:%d -> global_seqno %d\n",
> >>> -               request->engine->name,
> >>> +     GEM_TRACE("%s fence %llx:%d -> global_seqno %d, current %d\n",
> >>> +               engine->name,
> >>>                  request->fence.context, request->fence.seqno,
> >>> -               engine->timeline->seqno + 1);
> >>> +               engine->timeline->seqno + 1,
> >>> +               intel_engine_get_seqno(engine));
> >>>    
> >>>        GEM_BUG_ON(!irqs_disabled());
> >>>        lockdep_assert_held(&engine->timeline->lock);
> >>> @@ -557,10 +562,11 @@ void __i915_request_unsubmit(struct i915_request *request)
> >>>    {
> >>>        struct intel_engine_cs *engine = request->engine;
> >>>    
> >>> -     GEM_TRACE("%s fence %llx:%d <- global_seqno %d\n",
> >>> -               request->engine->name,
> >>> +     GEM_TRACE("%s fence %llx:%d <- global_seqno %d, current %d\n",
> >>> +               engine->name,
> >>>                  request->fence.context, request->fence.seqno,
> >>> -               request->global_seqno);
> >>> +               request->global_seqno,
> >>> +               intel_engine_get_seqno(engine));
> >>>    
> >>>        GEM_BUG_ON(!irqs_disabled());
> >>>        lockdep_assert_held(&engine->timeline->lock);
> >>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> >>> index ed2c833a8b20..b5235f52a81b 100644
> >>> --- a/drivers/gpu/drm/i915/intel_lrc.c
> >>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> >>> @@ -454,10 +454,11 @@ static void execlists_submit_ports(struct intel_engine_cs *engine)
> >>>                        desc = execlists_update_context(rq);
> >>>                        GEM_DEBUG_EXEC(port[n].context_id = upper_32_bits(desc));
> >>>    
> >>> -                     GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%x, prio=%d\n",
> >>> +                     GEM_TRACE("%s in[%d]:  ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
> >>>                                  engine->name, n,
> >>>                                  port[n].context_id, count,
> >>>                                  rq->global_seqno,
> >>> +                               intel_engine_get_seqno(engine),
> >>>                                  rq_prio(rq));
> >>>                } else {
> >>>                        GEM_BUG_ON(!n);
> >>> @@ -999,10 +1000,11 @@ static void execlists_submission_tasklet(unsigned long data)
> >>>                                                        EXECLISTS_ACTIVE_USER));
> >>>    
> >>>                        rq = port_unpack(port, &count);
> >>> -                     GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%x, prio=%d\n",
> >>> +                     GEM_TRACE("%s out[0]: ctx=%d.%d, seqno=%d (current %d), prio=%d\n",
> >>>                                  engine->name,
> >>>                                  port->context_id, count,
> >>>                                  rq ? rq->global_seqno : 0,
> >>> +                               intel_engine_get_seqno(engine),
> >>>                                  rq ? rq_prio(rq) : 0);
> >>>    
> >>>                        /* Check the context/desc id for this event matches */
> >>>
> >>
> >> The only thing I am not sure about is whether the HWS seqno is
> >> interesting in the two above. But if you think it is, it does little harm.
> > 
> > I was thinking of consistency: anywhere I had a global_seqno in the
> > trace, we would also have the current breadcrumb.
> >   
> >> Also, in these two I had the fence context:seqno instead, to correlate
> >> easily with (un)submit & co. That way, when I select a fence:context in
> >> an editor, it nicely highlights the whole flow relating to this
> >> request. Anyway:
> > 
> > Right, which I think was a very useful improvement.
> >   
> >> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > 
> > Do you want to take this patch into yours or apply this and then fixup
> > yours?
> 
> You can go with this one and I'll send a follow up at some point.

Done. Thanks again for pointing out the deficiencies with such a
well-crafted test.
-Chris
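[To make the fence context:seqno correlation concrete: since the
submit, unsubmit and retire messages all carry the fence as
context:seqno, the life cycle of one request can be pulled out of a
captured trace with a plain text search. A hypothetical example,
reusing the illustrative values from the earlier note and assuming the
trace was saved to trace.txt:

  grep 'fence 1e:2' trace.txt

This matches the submit, unsubmit and retire lines for just that
request; selecting the same string in an editor, as described in the
review, highlights the flow in place. The execlists in/out lines still
key on the hardware ctx id rather than the fence, which is the
follow-up Tvrtko mentions above.]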

end of thread

Thread overview: 16+ messages
2018-03-27 21:01 [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Chris Wilson
2018-03-27 21:01 ` [PATCH 2/3] drm/i915/execlists: Delay updating ring register state after resume Chris Wilson
2018-03-27 23:05   ` Chris Wilson
2018-03-27 23:10   ` [PATCH v2] " Chris Wilson
2018-03-28 13:40     ` Mika Kuoppala
2018-03-27 21:01 ` [PATCH 3/3] drm/i915: Include the HW breadcrumb whenever we trace the global_seqno Chris Wilson
2018-03-29  8:42   ` Tvrtko Ursulin
2018-03-29  8:55     ` Chris Wilson
2018-03-29  9:43       ` Tvrtko Ursulin
2018-03-29 11:22         ` Chris Wilson
2018-03-27 22:38 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Patchwork
2018-03-27 22:53 ` ✗ Fi.CI.BAT: failure " Patchwork
2018-03-27 23:58 ` ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/3] drm/i915/execlists: Reset ring registers on rebinding contexts (rev2) Patchwork
2018-03-28  0:13 ` ✓ Fi.CI.BAT: success " Patchwork
2018-03-28 10:12 ` ✗ Fi.CI.IGT: failure " Patchwork
2018-03-28 11:31 ` [PATCH 1/3] drm/i915/execlists: Reset ring registers on rebinding contexts Mika Kuoppala
