intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
@ 2021-01-06 12:39 Chris Wilson
  2021-01-06 12:39 ` [Intel-gfx] [PATCH 2/4] drm/i915/selftests: Improve handling of iomem around stolen Chris Wilson
                   ` (7 more replies)
  0 siblings, 8 replies; 18+ messages in thread
From: Chris Wilson @ 2021-01-06 12:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

After detecting a register mismatch between the protocontext and the
image generated by HW, immediately break out of the double loop.
(Otherwise we end up with a second configuing error message.)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 3485cb7c431d..920979a89413 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -164,7 +164,7 @@ static int live_lrc_layout(void *arg)
 
 		dw = 0;
 		do {
-			u32 lri = hw[dw];
+			u32 lri = READ_ONCE(hw[dw]);
 
 			if (lri == 0) {
 				dw++;
@@ -197,9 +197,11 @@ static int live_lrc_layout(void *arg)
 			dw++;
 
 			while (lri) {
-				if (hw[dw] != lrc[dw]) {
+				u32 offset = READ_ONCE(hw[dw]);
+
+				if (offset != lrc[dw]) {
 					pr_err("%s: Different registers found at dword %d, expected %x, found %x\n",
-					       engine->name, dw, hw[dw], lrc[dw]);
+					       engine->name, dw, offset, lrc[dw]);
 					err = -EINVAL;
 					break;
 				}
@@ -211,7 +213,7 @@ static int live_lrc_layout(void *arg)
 				dw += 2;
 				lri -= 2;
 			}
-		} while ((lrc[dw] & ~BIT(0)) != MI_BATCH_BUFFER_END);
+		} while (!err && (lrc[dw] & ~BIT(0)) != MI_BATCH_BUFFER_END);
 
 		if (err) {
 			pr_info("%s: HW register image:\n", engine->name);
-- 
2.20.1
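
For illustration, a minimal standalone sketch of the early-exit pattern the
patch adopts (toy values and names, not the driver code): the inner loop
breaks on the first mismatch, and rechecking err in the outer loop condition
stops the walk before a second, misleading error can be reported.

#include <stdio.h>

int main(void)
{
	/* toy stand-ins for the protocontext (lrc) and the HW image (hw) */
	unsigned int lrc[] = { 3, 10, 11, 12, 0 };
	unsigned int hw[]  = { 3, 10, 99, 12, 0 };
	unsigned int dw = 0;
	int err = 0;

	do {
		unsigned int lri = hw[dw++];	/* dwords to compare in this block */

		while (lri--) {
			if (hw[dw] != lrc[dw]) {
				fprintf(stderr,
					"mismatch at dword %u: expected %u, found %u\n",
					dw, lrc[dw], hw[dw]);
				err = -1;
				break;		/* leave the inner loop... */
			}
			dw++;
		}
	} while (!err && lrc[dw] != 0);		/* ...and stop the outer walk too */

	return err;
}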


* [Intel-gfx] [PATCH 2/4] drm/i915/selftests: Improve handling of iomem around stolen
  2021-01-06 12:39 [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Chris Wilson
@ 2021-01-06 12:39 ` Chris Wilson
  2021-01-06 15:12   ` Tvrtko Ursulin
  2021-01-06 12:39 ` [Intel-gfx] [PATCH 3/4] drm/i915/gt: Restore ce->signal flush before releasing virtual engine Chris Wilson
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Chris Wilson @ 2021-01-06 12:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

Use memset_io() on the iomem, and silence sparse as we copy from the
iomem to normal system pages.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/selftest_reset.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c b/drivers/gpu/drm/i915/gt/selftest_reset.c
index 5ec8d4e9983f..b7befcfbdcde 100644
--- a/drivers/gpu/drm/i915/gt/selftest_reset.c
+++ b/drivers/gpu/drm/i915/gt/selftest_reset.c
@@ -96,10 +96,10 @@ __igt_reset_stolen(struct intel_gt *gt,
 		if (!__drm_mm_interval_first(&gt->i915->mm.stolen,
 					     page << PAGE_SHIFT,
 					     ((page + 1) << PAGE_SHIFT) - 1))
-			memset32(s, STACK_MAGIC, PAGE_SIZE / sizeof(u32));
+			memset_io(s, STACK_MAGIC, PAGE_SIZE);
 
-		in = s;
-		if (i915_memcpy_from_wc(tmp, s, PAGE_SIZE))
+		in = (void __force *)s;
+		if (i915_memcpy_from_wc(tmp, in, PAGE_SIZE))
 			in = tmp;
 		crc[page] = crc32_le(0, in, PAGE_SIZE);
 
@@ -134,8 +134,8 @@ __igt_reset_stolen(struct intel_gt *gt,
 				      ggtt->error_capture.start,
 				      PAGE_SIZE);
 
-		in = s;
-		if (i915_memcpy_from_wc(tmp, s, PAGE_SIZE))
+		in = (void __force *)s;
+		if (i915_memcpy_from_wc(tmp, in, PAGE_SIZE))
 			in = tmp;
 		x = crc32_le(0, in, PAGE_SIZE);
 
-- 
2.20.1
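
As an aside, here is a minimal userspace sketch of why sparse complains and
what the __force cast in the patch asserts. It is not the driver code: the
two macros are no-op stand-ins for the kernel's annotations, and checksum()
is a made-up placeholder for a routine such as crc32_le() that only accepts
plain pointers.

#include <stddef.h>

#define __iomem		/* address-space annotation; only sparse inspects it */
#define __force		/* marks an address-space cast as intentional for sparse */

static unsigned int checksum(const void *src, size_t len)
{
	const unsigned char *p = src;
	unsigned int sum = 0;

	while (len--)
		sum += *p++;
	return sum;
}

int main(void)
{
	static unsigned char backing[4096];
	void __iomem *s = (void __iomem *)backing;	/* stand-in for remapped stolen memory */
	void *in;

	/* "in = s;" would warn: incorrect type in assignment (different address spaces) */
	in = (void __force *)s;		/* we really do mean to read it as ordinary memory */

	(void)checksum(in, sizeof(backing));
	return 0;
}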


* [Intel-gfx] [PATCH 3/4] drm/i915/gt: Restore ce->signal flush before releasing virtual engine
  2021-01-06 12:39 [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Chris Wilson
  2021-01-06 12:39 ` [Intel-gfx] [PATCH 2/4] drm/i915/selftests: Improve handling of iomem around stolen Chris Wilson
@ 2021-01-06 12:39 ` Chris Wilson
  2021-01-06 12:39 ` [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression Chris Wilson
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Chris Wilson @ 2021-01-06 12:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

Before we mark the virtual engine as no longer inflight, flush any
ongoing signaling that may be using the ce->signal_link along the
previous breadcrumbs. On switch to a new physical engine, that link will
be inserted into the new set of breadcrumbs, causing confusion to an
ongoing iterator.

This patch undoes a last-minute mistake introduced into commit
bab0557c8dca ("drm/i915/gt: Remove virtual breadcrumb before transfer"),
whereby the flush, instead of being applied unconditionally, was only
applied if the request itself was going to be reused.

v2: Generalise and cancel all remaining ce->signals

Fixes: bab0557c8dca ("drm/i915/gt: Remove virtual breadcrumb before transfer")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.c   | 33 +++++++++++++++++++
 drivers/gpu/drm/i915/gt/intel_breadcrumbs.h   |  4 +++
 .../drm/i915/gt/intel_execlists_submission.c  | 25 ++++++--------
 3 files changed, 47 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index 2eabb9ab5d47..7137b6f24f55 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -472,6 +472,39 @@ void i915_request_cancel_breadcrumb(struct i915_request *rq)
 	i915_request_put(rq);
 }
 
+void intel_context_remove_breadcrumbs(struct intel_context *ce,
+				      struct intel_breadcrumbs *b)
+{
+	struct i915_request *rq, *rn;
+	bool release = false;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ce->signal_lock, flags);
+
+	if (list_empty(&ce->signals))
+		goto unlock;
+
+	list_for_each_entry_safe(rq, rn, &ce->signals, signal_link) {
+		GEM_BUG_ON(!__i915_request_is_complete(rq));
+		if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL,
+					&rq->fence.flags))
+			continue;
+
+		list_del_rcu(&rq->signal_link);
+		irq_signal_request(rq, b);
+		i915_request_put(rq);
+	}
+	release = remove_signaling_context(b, ce);
+
+unlock:
+	spin_unlock_irqrestore(&ce->signal_lock, flags);
+	if (release)
+		intel_context_put(ce);
+
+	while (atomic_read(&b->signaler_active))
+		cpu_relax();
+}
+
 static void print_signals(struct intel_breadcrumbs *b, struct drm_printer *p)
 {
 	struct intel_context *ce;
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
index 75cc9cff3ae3..3ce5ce270b04 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
@@ -6,6 +6,7 @@
 #ifndef __INTEL_BREADCRUMBS__
 #define __INTEL_BREADCRUMBS__
 
+#include <linux/atomic.h>
 #include <linux/irq_work.h>
 
 #include "intel_engine_types.h"
@@ -44,4 +45,7 @@ void intel_engine_print_breadcrumbs(struct intel_engine_cs *engine,
 bool i915_request_enable_breadcrumb(struct i915_request *request);
 void i915_request_cancel_breadcrumb(struct i915_request *request);
 
+void intel_context_remove_breadcrumbs(struct intel_context *ce,
+				      struct intel_breadcrumbs *b);
+
 #endif /* __INTEL_BREADCRUMBS__ */
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index a5b442683c18..ba3114fd4389 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -581,21 +581,6 @@ resubmit_virtual_request(struct i915_request *rq, struct virtual_engine *ve)
 {
 	struct intel_engine_cs *engine = rq->engine;
 
-	/* Flush concurrent rcu iterators in signal_irq_work */
-	if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &rq->fence.flags)) {
-		/*
-		 * After this point, the rq may be transferred to a new
-		 * sibling, so before we clear ce->inflight make sure that
-		 * the context has been removed from the b->signalers and
-		 * furthermore we need to make sure that the concurrent
-		 * iterator in signal_irq_work is no longer following
-		 * ce->signal_link.
-		 */
-		i915_request_cancel_breadcrumb(rq);
-		while (atomic_read(&engine->breadcrumbs->signaler_active))
-			cpu_relax();
-	}
-
 	spin_lock_irq(&engine->active.lock);
 
 	clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
@@ -610,6 +595,16 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
 	struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
 	struct intel_engine_cs *engine = rq->engine;
 
+	/*
+	 * After this point, the rq may be transferred to a new sibling, so
+	 * before we clear ce->inflight make sure that the context has been
+	 * removed from the b->signalers and furthermore we need to make sure
+	 * that the concurrent iterator in signal_irq_work is no longer
+	 * following ce->signal_link.
+	 */
+	if (!list_empty(&ce->signals))
+		intel_context_remove_breadcrumbs(ce, engine->breadcrumbs);
+
 	/*
 	 * This engine is now too busy to run this virtual request, so
 	 * see if we can find an alternative engine for it to execute on.
-- 
2.20.1
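
A tiny userspace analogue of the synchronisation idea (C11 atomics; the names
are invented and none of this is the i915 code): unlink the completed entries
under the writer's lock, then spin until any lockless iterator that may still
be walking the old list has drained, so the links can safely migrate to
another engine's breadcrumbs.

#include <stdatomic.h>

static atomic_int iterators_active;	/* analogue of b->signaler_active */

static void irq_iterator(void)		/* lockless reader, like signal_irq_work */
{
	atomic_fetch_add(&iterators_active, 1);
	/* ... walk the signal list without taking the writer's lock ... */
	atomic_fetch_sub(&iterators_active, 1);
}

static void remove_breadcrumbs(void)	/* writer, like the new helper */
{
	/* take ce->signal_lock, unlink the completed requests, drop the lock... */

	/* ...then make sure no iterator is still following the old links */
	while (atomic_load(&iterators_active))
		;	/* cpu_relax() in the kernel version */
}

int main(void)
{
	irq_iterator();
	remove_breadcrumbs();
	return 0;
}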


* [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression
  2021-01-06 12:39 [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Chris Wilson
  2021-01-06 12:39 ` [Intel-gfx] [PATCH 2/4] drm/i915/selftests: Improve handling of iomem around stolen Chris Wilson
  2021-01-06 12:39 ` [Intel-gfx] [PATCH 3/4] drm/i915/gt: Restore ce->signal flush before releasing virtual engine Chris Wilson
@ 2021-01-06 12:39 ` Chris Wilson
  2021-01-06 15:57   ` Tvrtko Ursulin
  2021-01-06 13:01 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Patchwork
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Chris Wilson @ 2021-01-06 12:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Chris Wilson

In the next^W future patch, we remove the strict priority system and
continuously re-evaluate the relative priority of tasks. As such we need
to enable the timeslice whenever there is more than one context in the
pipeline. This simplifies the decision and removes some of the tweaks to
suppress timeslicing, allowing us to lift the timeslice enabling to a
common spot at the end of running the submission tasklet.

One consequence of the suppression is that it was reducing fairness
between virtual engines on an oversaturated system, undermining the
principle of timeslicing.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2802
Testcase: igt/gem_exec_balancer/fairslice
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_engine_types.h  |  10 -
 .../drm/i915/gt/intel_execlists_submission.c  | 173 +++++++-----------
 2 files changed, 68 insertions(+), 115 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 430066e5884c..df62e793e747 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -238,16 +238,6 @@ struct intel_engine_execlists {
 	 */
 	unsigned int port_mask;
 
-	/**
-	 * @switch_priority_hint: Second context priority.
-	 *
-	 * We submit multiple contexts to the HW simultaneously and would
-	 * like to occasionally switch between them to emulate timeslicing.
-	 * To know when timeslicing is suitable, we track the priority of
-	 * the context submitted second.
-	 */
-	int switch_priority_hint;
-
 	/**
 	 * @queue_priority_hint: Highest pending priority.
 	 *
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index ba3114fd4389..50d4308023f3 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -1143,25 +1143,6 @@ static void defer_active(struct intel_engine_cs *engine)
 	defer_request(rq, i915_sched_lookup_priolist(engine, rq_prio(rq)));
 }
 
-static bool
-need_timeslice(const struct intel_engine_cs *engine,
-	       const struct i915_request *rq)
-{
-	int hint;
-
-	if (!intel_engine_has_timeslices(engine))
-		return false;
-
-	hint = max(engine->execlists.queue_priority_hint,
-		   virtual_prio(&engine->execlists));
-
-	if (!list_is_last(&rq->sched.link, &engine->active.requests))
-		hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));
-
-	GEM_BUG_ON(hint >= I915_PRIORITY_UNPREEMPTABLE);
-	return hint >= effective_prio(rq);
-}
-
 static bool
 timeslice_yield(const struct intel_engine_execlists *el,
 		const struct i915_request *rq)
@@ -1181,76 +1162,68 @@ timeslice_yield(const struct intel_engine_execlists *el,
 	return rq->context->lrc.ccid == READ_ONCE(el->yield);
 }
 
-static bool
-timeslice_expired(const struct intel_engine_execlists *el,
-		  const struct i915_request *rq)
+static bool needs_timeslice(const struct intel_engine_cs *engine,
+			    const struct i915_request *rq)
 {
+	if (!intel_engine_has_timeslices(engine))
+		return false;
+
+	/* If not currently active, or about to switch, wait for next event */
+	if (!rq || __i915_request_is_complete(rq))
+		return false;
+
+	/* We do not need to start the timeslice until after the ACK */
+	if (READ_ONCE(engine->execlists.pending[0]))
+		return false;
+
+	/* If ELSP[1] is occupied, always check to see if worth slicing */
+	if (!list_is_last_rcu(&rq->sched.link, &engine->active.requests))
+		return true;
+
+	/* Otherwise, ELSP[0] is by itself, but may be waiting in the queue */
+	if (!RB_EMPTY_ROOT(&engine->execlists.queue.rb_root))
+		return true;
+
+	return !RB_EMPTY_ROOT(&engine->execlists.virtual.rb_root);
+}
+
+static bool
+timeslice_expired(struct intel_engine_cs *engine, const struct i915_request *rq)
+{
+	const struct intel_engine_execlists *el = &engine->execlists;
+
+	if (i915_request_has_nopreempt(rq) && __i915_request_has_started(rq))
+		return false;
+
+	if (!needs_timeslice(engine, rq))
+		return false;
+
 	return timer_expired(&el->timer) || timeslice_yield(el, rq);
 }
 
-static int
-switch_prio(struct intel_engine_cs *engine, const struct i915_request *rq)
-{
-	if (list_is_last(&rq->sched.link, &engine->active.requests))
-		return engine->execlists.queue_priority_hint;
-
-	return rq_prio(list_next_entry(rq, sched.link));
-}
-
-static inline unsigned long
-timeslice(const struct intel_engine_cs *engine)
+static unsigned long timeslice(const struct intel_engine_cs *engine)
 {
 	return READ_ONCE(engine->props.timeslice_duration_ms);
 }
 
-static unsigned long active_timeslice(const struct intel_engine_cs *engine)
-{
-	const struct intel_engine_execlists *execlists = &engine->execlists;
-	const struct i915_request *rq = *execlists->active;
-
-	if (!rq || __i915_request_is_complete(rq))
-		return 0;
-
-	if (READ_ONCE(execlists->switch_priority_hint) < effective_prio(rq))
-		return 0;
-
-	return timeslice(engine);
-}
-
-static void set_timeslice(struct intel_engine_cs *engine)
+static void start_timeslice(struct intel_engine_cs *engine)
 {
+	struct intel_engine_execlists *el = &engine->execlists;
 	unsigned long duration;
 
-	if (!intel_engine_has_timeslices(engine))
-		return;
+	/* Disable the timer if there is nothing to switch to */
+	duration = 0;
+	if (needs_timeslice(engine, *el->active)) {
+		if (el->timer.expires) {
+			if (!timer_pending(&el->timer))
+				tasklet_hi_schedule(&engine->execlists.tasklet);
+			return;
+		}
 
-	duration = active_timeslice(engine);
-	ENGINE_TRACE(engine, "bump timeslicing, interval:%lu", duration);
+		duration = timeslice(engine);
+	}
 
-	set_timer_ms(&engine->execlists.timer, duration);
-}
-
-static void start_timeslice(struct intel_engine_cs *engine, int prio)
-{
-	struct intel_engine_execlists *execlists = &engine->execlists;
-	unsigned long duration;
-
-	if (!intel_engine_has_timeslices(engine))
-		return;
-
-	WRITE_ONCE(execlists->switch_priority_hint, prio);
-	if (prio == INT_MIN)
-		return;
-
-	if (timer_pending(&execlists->timer))
-		return;
-
-	duration = timeslice(engine);
-	ENGINE_TRACE(engine,
-		     "start timeslicing, prio:%d, interval:%lu",
-		     prio, duration);
-
-	set_timer_ms(&execlists->timer, duration);
+	set_timer_ms(&el->timer, duration);
 }
 
 static void record_preemption(struct intel_engine_execlists *execlists)
@@ -1363,16 +1336,16 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			__unwind_incomplete_requests(engine);
 
 			last = NULL;
-		} else if (need_timeslice(engine, last) &&
-			   timeslice_expired(execlists, last)) {
+		} else if (timeslice_expired(engine, last)) {
 			ENGINE_TRACE(engine,
-				     "expired last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
-				     last->fence.context,
-				     last->fence.seqno,
-				     last->sched.attr.priority,
+				     "expired:%s last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
+				     yesno(timer_expired(&execlists->timer)),
+				     last->fence.context, last->fence.seqno,
+				     rq_prio(last),
 				     execlists->queue_priority_hint,
 				     yesno(timeslice_yield(execlists, last)));
 
+			cancel_timer(&execlists->timer);
 			ring_set_paused(engine, 1);
 			defer_active(engine);
 
@@ -1408,7 +1381,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 				 * of timeslices, our queue might be.
 				 */
 				spin_unlock(&engine->active.lock);
-				start_timeslice(engine, queue_prio(execlists));
 				return;
 			}
 		}
@@ -1435,7 +1407,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 		if (last && !can_merge_rq(last, rq)) {
 			spin_unlock(&ve->base.active.lock);
 			spin_unlock(&engine->active.lock);
-			start_timeslice(engine, rq_prio(rq));
 			return; /* leave this for another sibling */
 		}
 
@@ -1599,29 +1570,23 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 	execlists->queue_priority_hint = queue_prio(execlists);
 	spin_unlock(&engine->active.lock);
 
-	if (submit) {
-		/*
-		 * Skip if we ended up with exactly the same set of requests,
-		 * e.g. trying to timeslice a pair of ordered contexts
-		 */
-		if (!memcmp(execlists->active,
-			    execlists->pending,
-			    (port - execlists->pending) * sizeof(*port)))
-			goto skip_submit;
-
+	/*
+	 * We can skip poking the HW if we ended up with exactly the same set
+	 * of requests as currently running, e.g. trying to timeslice a pair
+	 * of ordered contexts.
+	 */
+	if (submit &&
+	    memcmp(execlists->active,
+		   execlists->pending,
+		   (port - execlists->pending) * sizeof(*port))) {
 		*port = NULL;
 		while (port-- != execlists->pending)
 			execlists_schedule_in(*port, port - execlists->pending);
 
-		execlists->switch_priority_hint =
-			switch_prio(engine, *execlists->pending);
-
 		WRITE_ONCE(execlists->yield, -1);
 		set_preempt_timeout(engine, *execlists->active);
 		execlists_submit_ports(engine);
 	} else {
-		start_timeslice(engine, execlists->queue_priority_hint);
-skip_submit:
 		ring_set_paused(engine, 0);
 		while (port-- != execlists->pending)
 			i915_request_put(*port);
@@ -1979,8 +1944,6 @@ process_csb(struct intel_engine_cs *engine, struct i915_request **inactive)
 		}
 	} while (head != tail);
 
-	set_timeslice(engine);
-
 	/*
 	 * Gen11 has proven to fail wrt global observation point between
 	 * entry and tail update, failing on the ordering and thus
@@ -1993,6 +1956,7 @@ process_csb(struct intel_engine_cs *engine, struct i915_request **inactive)
 	 * invalidation before.
 	 */
 	invalidate_csb_entries(&buf[0], &buf[num_entries - 1]);
+	cancel_timer(&execlists->timer);
 
 	return inactive;
 }
@@ -2405,8 +2369,10 @@ static void execlists_submission_tasklet(unsigned long data)
 		execlists_reset(engine, msg);
 	}
 
-	if (!engine->execlists.pending[0])
+	if (!engine->execlists.pending[0]) {
 		execlists_dequeue_irq(engine);
+		start_timeslice(engine);
+	}
 
 	post_process_csb(post, inactive);
 	rcu_read_unlock();
@@ -3851,9 +3817,6 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine,
 		show_request(m, last, "\t\t", 0);
 	}
 
-	if (execlists->switch_priority_hint != INT_MIN)
-		drm_printf(m, "\t\tSwitch priority hint: %d\n",
-			   READ_ONCE(execlists->switch_priority_hint));
 	if (execlists->queue_priority_hint != INT_MIN)
 		drm_printf(m, "\t\tQueue priority hint: %d\n",
 			   READ_ONCE(execlists->queue_priority_hint));
-- 
2.20.1
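
Condensed into plain C for reference, a sketch of the decision order that the
new needs_timeslice() encodes (the struct and its flags are simplifications,
not the driver's data structures): enable the slice whenever something else
could run.

#include <stdbool.h>
#include <stdio.h>

struct engine_snapshot {
	bool has_timeslices;	/* engine supports preemption/timeslicing */
	bool active_complete;	/* no active request, or it already finished */
	bool ack_pending;	/* an ELSP write has not been acknowledged yet */
	bool elsp1_occupied;	/* a second request follows on the active list */
	bool queue_empty;	/* priority queue empty */
	bool virtual_empty;	/* no virtual requests waiting */
};

static bool needs_timeslice(const struct engine_snapshot *e)
{
	if (!e->has_timeslices)
		return false;
	if (e->active_complete)		/* about to switch anyway: wait for the event */
		return false;
	if (e->ack_pending)		/* the slice starts only after the HW ACK */
		return false;
	if (e->elsp1_occupied)		/* ELSP[1] busy: always worth checking */
		return true;
	return !e->queue_empty || !e->virtual_empty;	/* someone is waiting */
}

int main(void)
{
	struct engine_snapshot e = {
		.has_timeslices = true,
		.queue_empty = false,
		.virtual_empty = true,
	};

	printf("arm the timeslice timer: %s\n", needs_timeslice(&e) ? "yes" : "no");
	return 0;
}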


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
  2021-01-06 12:39 [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Chris Wilson
                   ` (2 preceding siblings ...)
  2021-01-06 12:39 ` [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression Chris Wilson
@ 2021-01-06 13:01 ` Patchwork
  2021-01-06 13:03 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2021-01-06 13:01 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
URL   : https://patchwork.freedesktop.org/series/85548/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
df520ae36315 drm/i915/selftests: Break out of the lrc layout test after register mismatch
3a8f3c7e5e40 drm/i915/selftests: Improve handling of iomem around stolen
9c4b26ad0411 drm/i915/gt: Restore ce->signal flush before releasing virtual engine
-:14: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ chars of sha1> ("<title line>")' - ie: 'commit bab0557c8dca ("drm/i915/gt: Remove virtual breadcrumb before transfer")'
#14: 
bab0557c8dca ("drm/i915/gt: Remove virtual breadcrumb before transfer"),

total: 1 errors, 0 warnings, 0 checks, 90 lines checked
22c93cdd73c1 drm/i915/gt: Remove timeslice suppression



* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
  2021-01-06 12:39 [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Chris Wilson
                   ` (3 preceding siblings ...)
  2021-01-06 13:01 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Patchwork
@ 2021-01-06 13:03 ` Patchwork
  2021-01-06 13:30 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2021-01-06 13:03 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
URL   : https://patchwork.freedesktop.org/series/85548/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:101:20:    expected void *in
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:101:20:    got void [noderef] __iomem *[assigned] s
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:101:20: warning: incorrect type in assignment (different address spaces)
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:102:46:    expected void const *src
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:102:46:    got void [noderef] __iomem *[assigned] s
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:102:46: warning: incorrect type in argument 2 (different address spaces)
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:137:20:    expected void *in
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:137:20:    got void [noderef] __iomem *[assigned] s
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:137:20: warning: incorrect type in assignment (different address spaces)
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:138:46:    expected void const *src
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:138:46:    got void [noderef] __iomem *[assigned] s
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:138:46: warning: incorrect type in argument 2 (different address spaces)
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:99:34:    expected unsigned int [usertype] *s
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:99:34:    got void [noderef] __iomem *[assigned] s
-O:drivers/gpu/drm/i915/gt/selftest_reset.c:99:34: warning: incorrect type in argument 1 (different address spaces)



* [Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
  2021-01-06 12:39 [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Chris Wilson
                   ` (4 preceding siblings ...)
  2021-01-06 13:03 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2021-01-06 13:30 ` Patchwork
  2021-01-06 15:10 ` [Intel-gfx] [PATCH 1/4] " Tvrtko Ursulin
  2021-01-06 16:38 ` [Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [1/4] " Patchwork
  7 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2021-01-06 13:30 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx


== Series Details ==

Series: series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
URL   : https://patchwork.freedesktop.org/series/85548/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9549 -> Patchwork_19269
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/index.html

Known issues
------------

  Here are the changes found in Patchwork_19269 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_basic@semaphore:
    - fi-icl-y:           NOTRUN -> [SKIP][1] ([fdo#109315]) +17 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-icl-y/igt@amdgpu/amd_basic@semaphore.html

  * igt@gem_huc_copy@huc-copy:
    - fi-icl-y:           NOTRUN -> [SKIP][2] ([i915#2190])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-icl-y/igt@gem_huc_copy@huc-copy.html

  * igt@gem_ringfill@basic-all:
    - fi-tgl-y:           [PASS][3] -> [DMESG-WARN][4] ([i915#402])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/fi-tgl-y/igt@gem_ringfill@basic-all.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-tgl-y/igt@gem_ringfill@basic-all.html

  * igt@i915_pm_rpm@module-reload:
    - fi-byt-j1900:       [PASS][5] -> [INCOMPLETE][6] ([i915#142] / [i915#2405])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/fi-byt-j1900/igt@i915_pm_rpm@module-reload.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-byt-j1900/igt@i915_pm_rpm@module-reload.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-icl-y:           NOTRUN -> [SKIP][7] ([fdo#109284] / [fdo#111827]) +8 similar issues
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-icl-y/igt@kms_chamelium@dp-crc-fast.html

  * igt@kms_chamelium@hdmi-crc-fast:
    - fi-kbl-7500u:       [PASS][8] -> [DMESG-WARN][9] ([i915#2868])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/fi-kbl-7500u/igt@kms_chamelium@hdmi-crc-fast.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-kbl-7500u/igt@kms_chamelium@hdmi-crc-fast.html

  * igt@kms_force_connector_basic@force-load-detect:
    - fi-icl-y:           NOTRUN -> [SKIP][10] ([fdo#109285])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-icl-y/igt@kms_force_connector_basic@force-load-detect.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - fi-icl-y:           NOTRUN -> [SKIP][11] ([fdo#109278])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-icl-y/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  * igt@kms_psr@primary_mmap_gtt:
    - fi-icl-y:           NOTRUN -> [SKIP][12] ([fdo#110189]) +3 similar issues
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-icl-y/igt@kms_psr@primary_mmap_gtt.html

  * igt@runner@aborted:
    - fi-byt-j1900:       NOTRUN -> [FAIL][13] ([i915#1814] / [i915#2505])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-byt-j1900/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@gem_exec_parallel@engines@fds:
    - fi-icl-y:           [INCOMPLETE][14] ([i915#2295]) -> [PASS][15]
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/fi-icl-y/igt@gem_exec_parallel@engines@fds.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-icl-y/igt@gem_exec_parallel@engines@fds.html

  * igt@gem_mmap@basic:
    - fi-tgl-y:           [DMESG-WARN][16] ([i915#402]) -> [PASS][17]
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/fi-tgl-y/igt@gem_mmap@basic.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-tgl-y/igt@gem_mmap@basic.html

  * igt@i915_selftest@live@active:
    - fi-cfl-8109u:       [DMESG-FAIL][18] ([i915#2291] / [i915#666]) -> [PASS][19]
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/fi-cfl-8109u/igt@i915_selftest@live@active.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/fi-cfl-8109u/igt@i915_selftest@live@active.html

  
  [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
  [fdo#109284]: https://bugs.freedesktop.org/show_bug.cgi?id=109284
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#142]: https://gitlab.freedesktop.org/drm/intel/issues/142
  [i915#1814]: https://gitlab.freedesktop.org/drm/intel/issues/1814
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2291]: https://gitlab.freedesktop.org/drm/intel/issues/2291
  [i915#2295]: https://gitlab.freedesktop.org/drm/intel/issues/2295
  [i915#2405]: https://gitlab.freedesktop.org/drm/intel/issues/2405
  [i915#2505]: https://gitlab.freedesktop.org/drm/intel/issues/2505
  [i915#2868]: https://gitlab.freedesktop.org/drm/intel/issues/2868
  [i915#402]: https://gitlab.freedesktop.org/drm/intel/issues/402
  [i915#666]: https://gitlab.freedesktop.org/drm/intel/issues/666


Participating hosts (42 -> 37)
------------------------------

  Missing    (5): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_9549 -> Patchwork_19269

  CI-20190529: 20190529
  CI_DRM_9549: 71d1067baaab27385b5fcc81c2b789eb8d1ca92c @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5944: e230cd8d481ea28ccc11b554d7a34ffca003fb25 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_19269: 22c93cdd73c1fe518b8f881c07fe384061798d8a @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

22c93cdd73c1 drm/i915/gt: Remove timeslice suppression
9c4b26ad0411 drm/i915/gt: Restore ce->signal flush before releasing virtual engine
3a8f3c7e5e40 drm/i915/selftests: Improve handling of iomem around stolen
df520ae36315 drm/i915/selftests: Break out of the lrc layout test after register mismatch

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/index.html


* Re: [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
  2021-01-06 12:39 [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Chris Wilson
                   ` (5 preceding siblings ...)
  2021-01-06 13:30 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2021-01-06 15:10 ` Tvrtko Ursulin
  2021-01-06 15:17   ` Chris Wilson
  2021-01-06 16:38 ` [Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [1/4] " Patchwork
  7 siblings, 1 reply; 18+ messages in thread
From: Tvrtko Ursulin @ 2021-01-06 15:10 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 06/01/2021 12:39, Chris Wilson wrote:
> After detecting a register mismatch between the protocontext and the
> image generated by HW, immediately break out of the double loop.
> (Otherwise we end up with a second configuing error message.)

s/configuing/confusing/?

Is there no value in dumping all the differences? Why is it confusing?

Regards,

Tvrtko

> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/gt/selftest_lrc.c | 10 ++++++----
>   1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index 3485cb7c431d..920979a89413 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -164,7 +164,7 @@ static int live_lrc_layout(void *arg)
>   
>   		dw = 0;
>   		do {
> -			u32 lri = hw[dw];
> +			u32 lri = READ_ONCE(hw[dw]);
>   
>   			if (lri == 0) {
>   				dw++;
> @@ -197,9 +197,11 @@ static int live_lrc_layout(void *arg)
>   			dw++;
>   
>   			while (lri) {
> -				if (hw[dw] != lrc[dw]) {
> +				u32 offset = READ_ONCE(hw[dw]);
> +
> +				if (offset != lrc[dw]) {
>   					pr_err("%s: Different registers found at dword %d, expected %x, found %x\n",
> -					       engine->name, dw, hw[dw], lrc[dw]);
> +					       engine->name, dw, offset, lrc[dw]);
>   					err = -EINVAL;
>   					break;
>   				}
> @@ -211,7 +213,7 @@ static int live_lrc_layout(void *arg)
>   				dw += 2;
>   				lri -= 2;
>   			}
> -		} while ((lrc[dw] & ~BIT(0)) != MI_BATCH_BUFFER_END);
> +		} while (!err && (lrc[dw] & ~BIT(0)) != MI_BATCH_BUFFER_END);
>   
>   		if (err) {
>   			pr_info("%s: HW register image:\n", engine->name);
> 

* Re: [Intel-gfx] [PATCH 2/4] drm/i915/selftests: Improve handling of iomem around stolen
  2021-01-06 12:39 ` [Intel-gfx] [PATCH 2/4] drm/i915/selftests: Improve handling of iomem around stolen Chris Wilson
@ 2021-01-06 15:12   ` Tvrtko Ursulin
  0 siblings, 0 replies; 18+ messages in thread
From: Tvrtko Ursulin @ 2021-01-06 15:12 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 06/01/2021 12:39, Chris Wilson wrote:
> Use memset_io() on the iomem, and silence sparse as we copy from the
> iomem to normal system pages.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/gt/selftest_reset.c | 10 +++++-----
>   1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c b/drivers/gpu/drm/i915/gt/selftest_reset.c
> index 5ec8d4e9983f..b7befcfbdcde 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_reset.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_reset.c
> @@ -96,10 +96,10 @@ __igt_reset_stolen(struct intel_gt *gt,
>   		if (!__drm_mm_interval_first(&gt->i915->mm.stolen,
>   					     page << PAGE_SHIFT,
>   					     ((page + 1) << PAGE_SHIFT) - 1))
> -			memset32(s, STACK_MAGIC, PAGE_SIZE / sizeof(u32));
> +			memset_io(s, STACK_MAGIC, PAGE_SIZE);
>   
> -		in = s;
> -		if (i915_memcpy_from_wc(tmp, s, PAGE_SIZE))
> +		in = (void __force *)s;
> +		if (i915_memcpy_from_wc(tmp, in, PAGE_SIZE))
>   			in = tmp;
>   		crc[page] = crc32_le(0, in, PAGE_SIZE);
>   
> @@ -134,8 +134,8 @@ __igt_reset_stolen(struct intel_gt *gt,
>   				      ggtt->error_capture.start,
>   				      PAGE_SIZE);
>   
> -		in = s;
> -		if (i915_memcpy_from_wc(tmp, s, PAGE_SIZE))
> +		in = (void __force *)s;
> +		if (i915_memcpy_from_wc(tmp, in, PAGE_SIZE))
>   			in = tmp;
>   		x = crc32_le(0, in, PAGE_SIZE);
>   
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

* Re: [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
  2021-01-06 15:10 ` [Intel-gfx] [PATCH 1/4] " Tvrtko Ursulin
@ 2021-01-06 15:17   ` Chris Wilson
  2021-01-06 15:28     ` Tvrtko Ursulin
  0 siblings, 1 reply; 18+ messages in thread
From: Chris Wilson @ 2021-01-06 15:17 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2021-01-06 15:10:02)
> 
> On 06/01/2021 12:39, Chris Wilson wrote:
> > After detecting a register mismatch between the protocontext and the
> > image generated by HW, immediately break out of the double loop.
> > (Otherwise we end up with a second configuing error message.)
> 
> s/configuing/confusing/?
> 
> Is there no value in dumping all the differences? Why is it confusing?

rcs0: Different registers found at dword 2, expected 2244, found 2244
rcs0: Expected LRI command at dword 2, found 00002244

We then hexdump the entire page, both HW/SW, so we can try and figure out
what went wrong.
-Chris

* Re: [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
  2021-01-06 15:17   ` Chris Wilson
@ 2021-01-06 15:28     ` Tvrtko Ursulin
  0 siblings, 0 replies; 18+ messages in thread
From: Tvrtko Ursulin @ 2021-01-06 15:28 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 06/01/2021 15:17, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2021-01-06 15:10:02)
>>
>> On 06/01/2021 12:39, Chris Wilson wrote:
>>> After detecting a register mismatch between the protocontext and the
>>> image generated by HW, immediately break out of the double loop.
>>> (Otherwise we end up with a second configuing error message.)
>>
>> s/configuing/confusing/?
>>
>> Is there no value in dumping all the differences? Why is it confusing?
> 
> rcs0: Different registers found at dword 2, expected 2244, found 2244
> rcs0: Expected LRI command at dword 2, found 00002244
> 
> We then hexdump the entire page, both HW/SW, so we can try and figure out
> what went wrong.

Right, I see it now.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko



* Re: [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression
  2021-01-06 12:39 ` [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression Chris Wilson
@ 2021-01-06 15:57   ` Tvrtko Ursulin
  2021-01-06 16:08     ` Chris Wilson
  0 siblings, 1 reply; 18+ messages in thread
From: Tvrtko Ursulin @ 2021-01-06 15:57 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx



On 06/01/2021 12:39, Chris Wilson wrote:
> In the next^W future patch, we remove the strict priority system and
> continuously re-evaluate the relative priority of tasks. As such we need
> to enable the timeslice whenever there is more than one context in the
> pipeline. This simplifies the decision and removes some of the tweaks to
> suppress timeslicing, allowing us to lift the timeslice enabling to a
> common spot at the end of running the submission tasklet.
> 
> One consequence of the suppression is that it was reducing fairness
> between virtual engines on an oversaturated system, undermining the
> principle of timeslicing.
> 
> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2802
> Testcase: igt/gem_exec_balancer/fairslice
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/gt/intel_engine_types.h  |  10 -
>   .../drm/i915/gt/intel_execlists_submission.c  | 173 +++++++-----------
>   2 files changed, 68 insertions(+), 115 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> index 430066e5884c..df62e793e747 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> @@ -238,16 +238,6 @@ struct intel_engine_execlists {
>   	 */
>   	unsigned int port_mask;
>   
> -	/**
> -	 * @switch_priority_hint: Second context priority.
> -	 *
> -	 * We submit multiple contexts to the HW simultaneously and would
> -	 * like to occasionally switch between them to emulate timeslicing.
> -	 * To know when timeslicing is suitable, we track the priority of
> -	 * the context submitted second.
> -	 */
> -	int switch_priority_hint;
> -
>   	/**
>   	 * @queue_priority_hint: Highest pending priority.
>   	 *
> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> index ba3114fd4389..50d4308023f3 100644
> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> @@ -1143,25 +1143,6 @@ static void defer_active(struct intel_engine_cs *engine)
>   	defer_request(rq, i915_sched_lookup_priolist(engine, rq_prio(rq)));
>   }
>   
> -static bool
> -need_timeslice(const struct intel_engine_cs *engine,
> -	       const struct i915_request *rq)
> -{
> -	int hint;
> -
> -	if (!intel_engine_has_timeslices(engine))
> -		return false;
> -
> -	hint = max(engine->execlists.queue_priority_hint,
> -		   virtual_prio(&engine->execlists));
> -
> -	if (!list_is_last(&rq->sched.link, &engine->active.requests))
> -		hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));
> -
> -	GEM_BUG_ON(hint >= I915_PRIORITY_UNPREEMPTABLE);
> -	return hint >= effective_prio(rq);
> -}
> -
>   static bool
>   timeslice_yield(const struct intel_engine_execlists *el,
>   		const struct i915_request *rq)
> @@ -1181,76 +1162,68 @@ timeslice_yield(const struct intel_engine_execlists *el,
>   	return rq->context->lrc.ccid == READ_ONCE(el->yield);
>   }
>   
> -static bool
> -timeslice_expired(const struct intel_engine_execlists *el,
> -		  const struct i915_request *rq)
> +static bool needs_timeslice(const struct intel_engine_cs *engine,
> +			    const struct i915_request *rq)
>   {
> +	if (!intel_engine_has_timeslices(engine))
> +		return false;
> +
> +	/* If not currently active, or about to switch, wait for next event */
> +	if (!rq || __i915_request_is_complete(rq))
> +		return false;
> +
> +	/* We do not need to start the timeslice until after the ACK */
> +	if (READ_ONCE(engine->execlists.pending[0]))
> +		return false;
> +
> +	/* If ELSP[1] is occupied, always check to see if worth slicing */
> +	if (!list_is_last_rcu(&rq->sched.link, &engine->active.requests))
> +		return true;
> +
> +	/* Otherwise, ELSP[0] is by itself, but may be waiting in the queue */
> +	if (!RB_EMPTY_ROOT(&engine->execlists.queue.rb_root))
> +		return true;
> +
> +	return !RB_EMPTY_ROOT(&engine->execlists.virtual.rb_root);
> +}
> +
> +static bool
> +timeslice_expired(struct intel_engine_cs *engine, const struct i915_request *rq)
> +{
> +	const struct intel_engine_execlists *el = &engine->execlists;
> +
> +	if (i915_request_has_nopreempt(rq) && __i915_request_has_started(rq))
> +		return false;
> +
> +	if (!needs_timeslice(engine, rq))
> +		return false;
> +
>   	return timer_expired(&el->timer) || timeslice_yield(el, rq);
>   }
>   
> -static int
> -switch_prio(struct intel_engine_cs *engine, const struct i915_request *rq)
> -{
> -	if (list_is_last(&rq->sched.link, &engine->active.requests))
> -		return engine->execlists.queue_priority_hint;
> -
> -	return rq_prio(list_next_entry(rq, sched.link));
> -}
> -
> -static inline unsigned long
> -timeslice(const struct intel_engine_cs *engine)
> +static unsigned long timeslice(const struct intel_engine_cs *engine)
>   {
>   	return READ_ONCE(engine->props.timeslice_duration_ms);
>   }
>   
> -static unsigned long active_timeslice(const struct intel_engine_cs *engine)
> -{
> -	const struct intel_engine_execlists *execlists = &engine->execlists;
> -	const struct i915_request *rq = *execlists->active;
> -
> -	if (!rq || __i915_request_is_complete(rq))
> -		return 0;
> -
> -	if (READ_ONCE(execlists->switch_priority_hint) < effective_prio(rq))
> -		return 0;
> -
> -	return timeslice(engine);
> -}
> -
> -static void set_timeslice(struct intel_engine_cs *engine)
> +static void start_timeslice(struct intel_engine_cs *engine)
>   {
> +	struct intel_engine_execlists *el = &engine->execlists;
>   	unsigned long duration;
>   
> -	if (!intel_engine_has_timeslices(engine))
> -		return;
> +	/* Disable the timer if there is nothing to switch to */
> +	duration = 0;
> +	if (needs_timeslice(engine, *el->active)) {
> +		if (el->timer.expires) {

Why not just a timer_pending() check? Are you sure timer->expires cannot
legitimately be at jiffy 0 in wrap conditions?

> +			if (!timer_pending(&el->timer))
> +				tasklet_hi_schedule(&engine->execlists.tasklet);
> +			return;
> +		}
>   
> -	duration = active_timeslice(engine);
> -	ENGINE_TRACE(engine, "bump timeslicing, interval:%lu", duration);
> +		duration = timeslice(engine);
> +	}
>   
> -	set_timer_ms(&engine->execlists.timer, duration);
> -}
> -
> -static void start_timeslice(struct intel_engine_cs *engine, int prio)
> -{
> -	struct intel_engine_execlists *execlists = &engine->execlists;
> -	unsigned long duration;
> -
> -	if (!intel_engine_has_timeslices(engine))
> -		return;
> -
> -	WRITE_ONCE(execlists->switch_priority_hint, prio);
> -	if (prio == INT_MIN)
> -		return;
> -
> -	if (timer_pending(&execlists->timer))
> -		return;
> -
> -	duration = timeslice(engine);
> -	ENGINE_TRACE(engine,
> -		     "start timeslicing, prio:%d, interval:%lu",
> -		     prio, duration);
> -
> -	set_timer_ms(&execlists->timer, duration);
> +	set_timer_ms(&el->timer, duration);
>   }
>   
>   static void record_preemption(struct intel_engine_execlists *execlists)
> @@ -1363,16 +1336,16 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>   			__unwind_incomplete_requests(engine);
>   
>   			last = NULL;
> -		} else if (need_timeslice(engine, last) &&
> -			   timeslice_expired(execlists, last)) {
> +		} else if (timeslice_expired(engine, last)) {
>   			ENGINE_TRACE(engine,
> -				     "expired last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
> -				     last->fence.context,
> -				     last->fence.seqno,
> -				     last->sched.attr.priority,
> +				     "expired:%s last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
> +				     yesno(timer_expired(&execlists->timer)),
> +				     last->fence.context, last->fence.seqno,
> +				     rq_prio(last),
>   				     execlists->queue_priority_hint,
>   				     yesno(timeslice_yield(execlists, last)));
>   
> +			cancel_timer(&execlists->timer);

What is this cancel for?

Regards,

Tvrtko

>   			ring_set_paused(engine, 1);
>   			defer_active(engine);
>   
> @@ -1408,7 +1381,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>   				 * of timeslices, our queue might be.
>   				 */
>   				spin_unlock(&engine->active.lock);
> -				start_timeslice(engine, queue_prio(execlists));
>   				return;
>   			}
>   		}
> @@ -1435,7 +1407,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>   		if (last && !can_merge_rq(last, rq)) {
>   			spin_unlock(&ve->base.active.lock);
>   			spin_unlock(&engine->active.lock);
> -			start_timeslice(engine, rq_prio(rq));
>   			return; /* leave this for another sibling */
>   		}
>   
> @@ -1599,29 +1570,23 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>   	execlists->queue_priority_hint = queue_prio(execlists);
>   	spin_unlock(&engine->active.lock);
>   
> -	if (submit) {
> -		/*
> -		 * Skip if we ended up with exactly the same set of requests,
> -		 * e.g. trying to timeslice a pair of ordered contexts
> -		 */
> -		if (!memcmp(execlists->active,
> -			    execlists->pending,
> -			    (port - execlists->pending) * sizeof(*port)))
> -			goto skip_submit;
> -
> +	/*
> +	 * We can skip poking the HW if we ended up with exactly the same set
> +	 * of requests as currently running, e.g. trying to timeslice a pair
> +	 * of ordered contexts.
> +	 */
> +	if (submit &&
> +	    memcmp(execlists->active,
> +		   execlists->pending,
> +		   (port - execlists->pending) * sizeof(*port))) {
>   		*port = NULL;
>   		while (port-- != execlists->pending)
>   			execlists_schedule_in(*port, port - execlists->pending);
>   
> -		execlists->switch_priority_hint =
> -			switch_prio(engine, *execlists->pending);
> -
>   		WRITE_ONCE(execlists->yield, -1);
>   		set_preempt_timeout(engine, *execlists->active);
>   		execlists_submit_ports(engine);
>   	} else {
> -		start_timeslice(engine, execlists->queue_priority_hint);
> -skip_submit:
>   		ring_set_paused(engine, 0);
>   		while (port-- != execlists->pending)
>   			i915_request_put(*port);
> @@ -1979,8 +1944,6 @@ process_csb(struct intel_engine_cs *engine, struct i915_request **inactive)
>   		}
>   	} while (head != tail);
>   
> -	set_timeslice(engine);
> -
>   	/*
>   	 * Gen11 has proven to fail wrt global observation point between
>   	 * entry and tail update, failing on the ordering and thus
> @@ -1993,6 +1956,7 @@ process_csb(struct intel_engine_cs *engine, struct i915_request **inactive)
>   	 * invalidation before.
>   	 */
>   	invalidate_csb_entries(&buf[0], &buf[num_entries - 1]);
> +	cancel_timer(&execlists->timer);
>   
>   	return inactive;
>   }
> @@ -2405,8 +2369,10 @@ static void execlists_submission_tasklet(unsigned long data)
>   		execlists_reset(engine, msg);
>   	}
>   
> -	if (!engine->execlists.pending[0])
> +	if (!engine->execlists.pending[0]) {
>   		execlists_dequeue_irq(engine);
> +		start_timeslice(engine);
> +	}
>   
>   	post_process_csb(post, inactive);
>   	rcu_read_unlock();
> @@ -3851,9 +3817,6 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine,
>   		show_request(m, last, "\t\t", 0);
>   	}
>   
> -	if (execlists->switch_priority_hint != INT_MIN)
> -		drm_printf(m, "\t\tSwitch priority hint: %d\n",
> -			   READ_ONCE(execlists->switch_priority_hint));
>   	if (execlists->queue_priority_hint != INT_MIN)
>   		drm_printf(m, "\t\tQueue priority hint: %d\n",
>   			   READ_ONCE(execlists->queue_priority_hint));
> 

* Re: [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression
  2021-01-06 15:57   ` Tvrtko Ursulin
@ 2021-01-06 16:08     ` Chris Wilson
  2021-01-06 16:19       ` Chris Wilson
  2021-01-07 10:16       ` Tvrtko Ursulin
  0 siblings, 2 replies; 18+ messages in thread
From: Chris Wilson @ 2021-01-06 16:08 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2021-01-06 15:57:49)
> 
> 
> On 06/01/2021 12:39, Chris Wilson wrote:
> > In the next^W future patch, we remove the strict priority system and
> > continuously re-evaluate the relative priority of tasks. As such we need
> > to enable the timeslice whenever there is more than one context in the
> > pipeline. This simplifies the decision and removes some of the tweaks to
> > suppress timeslicing, allowing us to lift the timeslice enabling to a
> > common spot at the end of running the submission tasklet.
> > 
> > One consequence of the suppression is that it was reducing fairness
> > between virtual engines on an oversaturated system, undermining the
> > principle of timeslicing.
> > 
> > Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2802
> > Testcase: igt/gem_exec_balancer/fairslice
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > ---
> >   drivers/gpu/drm/i915/gt/intel_engine_types.h  |  10 -
> >   .../drm/i915/gt/intel_execlists_submission.c  | 173 +++++++-----------
> >   2 files changed, 68 insertions(+), 115 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> > index 430066e5884c..df62e793e747 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
> > +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> > @@ -238,16 +238,6 @@ struct intel_engine_execlists {
> >        */
> >       unsigned int port_mask;
> >   
> > -     /**
> > -      * @switch_priority_hint: Second context priority.
> > -      *
> > -      * We submit multiple contexts to the HW simultaneously and would
> > -      * like to occasionally switch between them to emulate timeslicing.
> > -      * To know when timeslicing is suitable, we track the priority of
> > -      * the context submitted second.
> > -      */
> > -     int switch_priority_hint;
> > -
> >       /**
> >        * @queue_priority_hint: Highest pending priority.
> >        *
> > diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > index ba3114fd4389..50d4308023f3 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > @@ -1143,25 +1143,6 @@ static void defer_active(struct intel_engine_cs *engine)
> >       defer_request(rq, i915_sched_lookup_priolist(engine, rq_prio(rq)));
> >   }
> >   
> > -static bool
> > -need_timeslice(const struct intel_engine_cs *engine,
> > -            const struct i915_request *rq)
> > -{
> > -     int hint;
> > -
> > -     if (!intel_engine_has_timeslices(engine))
> > -             return false;
> > -
> > -     hint = max(engine->execlists.queue_priority_hint,
> > -                virtual_prio(&engine->execlists));
> > -
> > -     if (!list_is_last(&rq->sched.link, &engine->active.requests))
> > -             hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));
> > -
> > -     GEM_BUG_ON(hint >= I915_PRIORITY_UNPREEMPTABLE);
> > -     return hint >= effective_prio(rq);
> > -}
> > -
> >   static bool
> >   timeslice_yield(const struct intel_engine_execlists *el,
> >               const struct i915_request *rq)
> > @@ -1181,76 +1162,68 @@ timeslice_yield(const struct intel_engine_execlists *el,
> >       return rq->context->lrc.ccid == READ_ONCE(el->yield);
> >   }
> >   
> > -static bool
> > -timeslice_expired(const struct intel_engine_execlists *el,
> > -               const struct i915_request *rq)
> > +static bool needs_timeslice(const struct intel_engine_cs *engine,
> > +                         const struct i915_request *rq)
> >   {
> > +     if (!intel_engine_has_timeslices(engine))
> > +             return false;
> > +
> > +     /* If not currently active, or about to switch, wait for next event */
> > +     if (!rq || __i915_request_is_complete(rq))
> > +             return false;
> > +
> > +     /* We do not need to start the timeslice until after the ACK */
> > +     if (READ_ONCE(engine->execlists.pending[0]))
> > +             return false;
> > +
> > +     /* If ELSP[1] is occupied, always check to see if worth slicing */
> > +     if (!list_is_last_rcu(&rq->sched.link, &engine->active.requests))
> > +             return true;
> > +
> > +     /* Otherwise, ELSP[0] is by itself, but may be waiting in the queue */
> > +     if (!RB_EMPTY_ROOT(&engine->execlists.queue.rb_root))
> > +             return true;
> > +
> > +     return !RB_EMPTY_ROOT(&engine->execlists.virtual.rb_root);
> > +}
> > +
> > +static bool
> > +timeslice_expired(struct intel_engine_cs *engine, const struct i915_request *rq)
> > +{
> > +     const struct intel_engine_execlists *el = &engine->execlists;
> > +
> > +     if (i915_request_has_nopreempt(rq) && __i915_request_has_started(rq))
> > +             return false;
> > +
> > +     if (!needs_timeslice(engine, rq))
> > +             return false;
> > +
> >       return timer_expired(&el->timer) || timeslice_yield(el, rq);
> >   }
> >   
> > -static int
> > -switch_prio(struct intel_engine_cs *engine, const struct i915_request *rq)
> > -{
> > -     if (list_is_last(&rq->sched.link, &engine->active.requests))
> > -             return engine->execlists.queue_priority_hint;
> > -
> > -     return rq_prio(list_next_entry(rq, sched.link));
> > -}
> > -
> > -static inline unsigned long
> > -timeslice(const struct intel_engine_cs *engine)
> > +static unsigned long timeslice(const struct intel_engine_cs *engine)
> >   {
> >       return READ_ONCE(engine->props.timeslice_duration_ms);
> >   }
> >   
> > -static unsigned long active_timeslice(const struct intel_engine_cs *engine)
> > -{
> > -     const struct intel_engine_execlists *execlists = &engine->execlists;
> > -     const struct i915_request *rq = *execlists->active;
> > -
> > -     if (!rq || __i915_request_is_complete(rq))
> > -             return 0;
> > -
> > -     if (READ_ONCE(execlists->switch_priority_hint) < effective_prio(rq))
> > -             return 0;
> > -
> > -     return timeslice(engine);
> > -}
> > -
> > -static void set_timeslice(struct intel_engine_cs *engine)
> > +static void start_timeslice(struct intel_engine_cs *engine)
> >   {
> > +     struct intel_engine_execlists *el = &engine->execlists;
> >       unsigned long duration;
> >   
> > -     if (!intel_engine_has_timeslices(engine))
> > -             return;
> > +     /* Disable the timer if there is nothing to switch to */
> > +     duration = 0;
> > +     if (needs_timeslice(engine, *el->active)) {
> > +             if (el->timer.expires) {
> 
> Why not just timer_pending check? Are you sure timer->expires cannot 
> legitimately be at jiffie 0 in wrap conditions?

This is actually to test whether we have set the timer or not, and to avoid
extending an already active timeslice. We are abusing the fact that a
jiffies wrap is unlikely and of no great consequence (one missed
timeslice/preempt timer should be picked up by the next poke of the
driver), as part of set_timer_ms/cancel_timer.
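
For reference, a minimal sketch of the convention that check leans on (the
actual helpers live in i915_utils and may differ in detail):

static inline bool timer_active(const struct timer_list *t)
{
        return READ_ONCE(t->expires);
}

void cancel_timer(struct timer_list *t)
{
        if (!timer_active(t))
                return;

        del_timer(t);
        /* expires == 0 is reserved to mean "no timeslice timer armed" */
        WRITE_ONCE(t->expires, 0);
}

So el->timer.expires stays non-zero from set_timer_ms() until the matching
cancel_timer() (or set_timer_ms(.., 0)), which is what start_timeslice()
is testing above.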

> > +                     if (!timer_pending(&el->timer))
> > +                             tasklet_hi_schedule(&engine->execlists.tasklet);
> > +                     return;
> > +             }
> >   
> > -     duration = active_timeslice(engine);
> > -     ENGINE_TRACE(engine, "bump timeslicing, interval:%lu", duration);
> > +             duration = timeslice(engine);
> > +     }
> >   
> > -     set_timer_ms(&engine->execlists.timer, duration);
> > -}
> > -
> > -static void start_timeslice(struct intel_engine_cs *engine, int prio)
> > -{
> > -     struct intel_engine_execlists *execlists = &engine->execlists;
> > -     unsigned long duration;
> > -
> > -     if (!intel_engine_has_timeslices(engine))
> > -             return;
> > -
> > -     WRITE_ONCE(execlists->switch_priority_hint, prio);
> > -     if (prio == INT_MIN)
> > -             return;
> > -
> > -     if (timer_pending(&execlists->timer))
> > -             return;
> > -
> > -     duration = timeslice(engine);
> > -     ENGINE_TRACE(engine,
> > -                  "start timeslicing, prio:%d, interval:%lu",
> > -                  prio, duration);
> > -
> > -     set_timer_ms(&execlists->timer, duration);
> > +     set_timer_ms(&el->timer, duration);
> >   }
> >   
> >   static void record_preemption(struct intel_engine_execlists *execlists)
> > @@ -1363,16 +1336,16 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> >                       __unwind_incomplete_requests(engine);
> >   
> >                       last = NULL;
> > -             } else if (need_timeslice(engine, last) &&
> > -                        timeslice_expired(execlists, last)) {
> > +             } else if (timeslice_expired(engine, last)) {
> >                       ENGINE_TRACE(engine,
> > -                                  "expired last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
> > -                                  last->fence.context,
> > -                                  last->fence.seqno,
> > -                                  last->sched.attr.priority,
> > +                                  "expired:%s last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
> > +                                  yesno(timer_expired(&execlists->timer)),
> > +                                  last->fence.context, last->fence.seqno,
> > +                                  rq_prio(last),
> >                                    execlists->queue_priority_hint,
> >                                    yesno(timeslice_yield(execlists, last)));
> >   
> > +                     cancel_timer(&execlists->timer);
> 
> What is this cancel for?

This branch is taken upon yielding the timeslice, but we may not submit
a new pair of contexts, leaving the timer active (and marked as
expired). Since the timer remains expired, we will continuously loop
until a context switch, or some other preemption event.
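
Roughly, that is because timer_expired() treats an armed timer that is no
longer pending as expired (a sketch, assuming the i915_utils definition):

static inline bool timer_expired(const struct timer_list *t)
{
        /* Armed (expires != 0) and no longer pending: it has fired */
        return READ_ONCE(t->expires) && !timer_pending(t);
}

Without the cancel_timer() above, every later pass through
execlists_dequeue() with the same contexts resident would take the
expired branch again.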
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression
  2021-01-06 16:08     ` Chris Wilson
@ 2021-01-06 16:19       ` Chris Wilson
  2021-01-07 10:16       ` Tvrtko Ursulin
  1 sibling, 0 replies; 18+ messages in thread
From: Chris Wilson @ 2021-01-06 16:19 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Chris Wilson (2021-01-06 16:08:40)
> Quoting Tvrtko Ursulin (2021-01-06 15:57:49)
> > 
> > 
> > On 06/01/2021 12:39, Chris Wilson wrote:
> > > In the next^W future patch, we remove the strict priority system and
> > > continuously re-evaluate the relative priority of tasks. As such we need
> > > to enable the timeslice whenever there is more than one context in the
> > > pipeline. This simplifies the decision and removes some of the tweaks to
> > > suppress timeslicing, allowing us to lift the timeslice enabling to a
> > > common spot at the end of running the submission tasklet.
> > > 
> > > One consequence of the suppression is that it was reducing fairness
> > > between virtual engines on an over saturated system; undermining the
> > > principle for timeslicing.
> > > 
> > > Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2802
> > > Testcase: igt/gem_exec_balancer/fairslice
> > > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > > ---
> > >   drivers/gpu/drm/i915/gt/intel_engine_types.h  |  10 -
> > >   .../drm/i915/gt/intel_execlists_submission.c  | 173 +++++++-----------
> > >   2 files changed, 68 insertions(+), 115 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> > > index 430066e5884c..df62e793e747 100644
> > > --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
> > > +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
> > > @@ -238,16 +238,6 @@ struct intel_engine_execlists {
> > >        */
> > >       unsigned int port_mask;
> > >   
> > > -     /**
> > > -      * @switch_priority_hint: Second context priority.
> > > -      *
> > > -      * We submit multiple contexts to the HW simultaneously and would
> > > -      * like to occasionally switch between them to emulate timeslicing.
> > > -      * To know when timeslicing is suitable, we track the priority of
> > > -      * the context submitted second.
> > > -      */
> > > -     int switch_priority_hint;
> > > -
> > >       /**
> > >        * @queue_priority_hint: Highest pending priority.
> > >        *
> > > diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > index ba3114fd4389..50d4308023f3 100644
> > > --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > @@ -1143,25 +1143,6 @@ static void defer_active(struct intel_engine_cs *engine)
> > >       defer_request(rq, i915_sched_lookup_priolist(engine, rq_prio(rq)));
> > >   }
> > >   
> > > -static bool
> > > -need_timeslice(const struct intel_engine_cs *engine,
> > > -            const struct i915_request *rq)
> > > -{
> > > -     int hint;
> > > -
> > > -     if (!intel_engine_has_timeslices(engine))
> > > -             return false;
> > > -
> > > -     hint = max(engine->execlists.queue_priority_hint,
> > > -                virtual_prio(&engine->execlists));
> > > -
> > > -     if (!list_is_last(&rq->sched.link, &engine->active.requests))
> > > -             hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));
> > > -
> > > -     GEM_BUG_ON(hint >= I915_PRIORITY_UNPREEMPTABLE);
> > > -     return hint >= effective_prio(rq);
> > > -}
> > > -
> > >   static bool
> > >   timeslice_yield(const struct intel_engine_execlists *el,
> > >               const struct i915_request *rq)
> > > @@ -1181,76 +1162,68 @@ timeslice_yield(const struct intel_engine_execlists *el,
> > >       return rq->context->lrc.ccid == READ_ONCE(el->yield);
> > >   }
> > >   
> > > -static bool
> > > -timeslice_expired(const struct intel_engine_execlists *el,
> > > -               const struct i915_request *rq)
> > > +static bool needs_timeslice(const struct intel_engine_cs *engine,
> > > +                         const struct i915_request *rq)
> > >   {
> > > +     if (!intel_engine_has_timeslices(engine))
> > > +             return false;
> > > +
> > > +     /* If not currently active, or about to switch, wait for next event */
> > > +     if (!rq || __i915_request_is_complete(rq))
> > > +             return false;
> > > +
> > > +     /* We do not need to start the timeslice until after the ACK */
> > > +     if (READ_ONCE(engine->execlists.pending[0]))
> > > +             return false;
> > > +
> > > +     /* If ELSP[1] is occupied, always check to see if worth slicing */
> > > +     if (!list_is_last_rcu(&rq->sched.link, &engine->active.requests))
> > > +             return true;
> > > +
> > > +     /* Otherwise, ELSP[0] is by itself, but may be waiting in the queue */
> > > +     if (!RB_EMPTY_ROOT(&engine->execlists.queue.rb_root))
> > > +             return true;
> > > +
> > > +     return !RB_EMPTY_ROOT(&engine->execlists.virtual.rb_root);
> > > +}
> > > +
> > > +static bool
> > > +timeslice_expired(struct intel_engine_cs *engine, const struct i915_request *rq)
> > > +{
> > > +     const struct intel_engine_execlists *el = &engine->execlists;
> > > +
> > > +     if (i915_request_has_nopreempt(rq) && __i915_request_has_started(rq))
> > > +             return false;
> > > +
> > > +     if (!needs_timeslice(engine, rq))
> > > +             return false;
> > > +
> > >       return timer_expired(&el->timer) || timeslice_yield(el, rq);
> > >   }
> > >   
> > > -static int
> > > -switch_prio(struct intel_engine_cs *engine, const struct i915_request *rq)
> > > -{
> > > -     if (list_is_last(&rq->sched.link, &engine->active.requests))
> > > -             return engine->execlists.queue_priority_hint;
> > > -
> > > -     return rq_prio(list_next_entry(rq, sched.link));
> > > -}
> > > -
> > > -static inline unsigned long
> > > -timeslice(const struct intel_engine_cs *engine)
> > > +static unsigned long timeslice(const struct intel_engine_cs *engine)
> > >   {
> > >       return READ_ONCE(engine->props.timeslice_duration_ms);
> > >   }
> > >   
> > > -static unsigned long active_timeslice(const struct intel_engine_cs *engine)
> > > -{
> > > -     const struct intel_engine_execlists *execlists = &engine->execlists;
> > > -     const struct i915_request *rq = *execlists->active;
> > > -
> > > -     if (!rq || __i915_request_is_complete(rq))
> > > -             return 0;
> > > -
> > > -     if (READ_ONCE(execlists->switch_priority_hint) < effective_prio(rq))
> > > -             return 0;
> > > -
> > > -     return timeslice(engine);
> > > -}
> > > -
> > > -static void set_timeslice(struct intel_engine_cs *engine)
> > > +static void start_timeslice(struct intel_engine_cs *engine)
> > >   {
> > > +     struct intel_engine_execlists *el = &engine->execlists;
> > >       unsigned long duration;
> > >   
> > > -     if (!intel_engine_has_timeslices(engine))
> > > -             return;
> > > +     /* Disable the timer if there is nothing to switch to */
> > > +     duration = 0;
> > > +     if (needs_timeslice(engine, *el->active)) {
> > > +             if (el->timer.expires) {
> > 
> > Why not just timer_pending check? Are you sure timer->expires cannot 
> > legitimately be at jiffie 0 in wrap conditions?
> 
> This is actually to test whether we have set the timer or not, and to avoid
> extending an already active timeslice. We are abusing the fact that a
> jiffies wrap is unlikely and of no great consequence (one missed
> timeslice/preempt timer should be picked up by the next poke of the
> driver), as part of set_timer_ms/cancel_timer.

I see you've asked this before:

void set_timer_ms(struct timer_list *t, unsigned long timeout)
{
        if (!timeout) {
                cancel_timer(t);
                return;
        }

        timeout = msecs_to_jiffies(timeout);

        /*
         * Paranoia to make sure the compiler computes the timeout before
         * loading 'jiffies' as jiffies is volatile and may be updated in
         * the background by a timer tick. All to reduce the complexity
         * of the addition and reduce the risk of losing a jiffie.
         */
        barrier();

        /* Keep t->expires = 0 reserved to indicate a canceled timer. */
        mod_timer(t, jiffies + timeout ?: 1);
}

:)
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
  2021-01-06 12:39 [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Chris Wilson
                   ` (6 preceding siblings ...)
  2021-01-06 15:10 ` [Intel-gfx] [PATCH 1/4] " Tvrtko Ursulin
@ 2021-01-06 16:38 ` Patchwork
  7 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2021-01-06 16:38 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx


[-- Attachment #1.1: Type: text/plain, Size: 27424 bytes --]

== Series Details ==

Series: series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch
URL   : https://patchwork.freedesktop.org/series/85548/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9549_full -> Patchwork_19269_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_19269_full:

### IGT changes ###

#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * {igt@perf@non-zero-reason}:
    - shard-skl:          [PASS][1] -> [TIMEOUT][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl6/igt@perf@non-zero-reason.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl4/igt@perf@non-zero-reason.html

  
Known issues
------------

  Here are the changes found in Patchwork_19269_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_persistence@engines-cleanup:
    - shard-hsw:          NOTRUN -> [SKIP][3] ([fdo#109271] / [i915#1099]) +2 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-hsw7/igt@gem_ctx_persistence@engines-cleanup.html

  * igt@gem_exec_reloc@basic-many-active@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][4] ([i915#2389])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb2/igt@gem_exec_reloc@basic-many-active@vcs1.html

  * igt@gem_render_copy@y-tiled-ccs-to-yf-tiled-mc-ccs:
    - shard-iclb:         NOTRUN -> [SKIP][5] ([i915#768]) +1 similar issue
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@gem_render_copy@y-tiled-ccs-to-yf-tiled-mc-ccs.html

  * igt@gem_userptr_blits@process-exit-mmap@wc:
    - shard-hsw:          NOTRUN -> [SKIP][6] ([fdo#109271]) +208 similar issues
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-hsw7/igt@gem_userptr_blits@process-exit-mmap@wc.html

  * igt@gen9_exec_parse@bb-secure:
    - shard-iclb:         NOTRUN -> [SKIP][7] ([fdo#112306]) +1 similar issue
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@gen9_exec_parse@bb-secure.html

  * igt@kms_big_fb@linear-16bpp-rotate-270:
    - shard-iclb:         NOTRUN -> [SKIP][8] ([fdo#110725] / [fdo#111614])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@kms_big_fb@linear-16bpp-rotate-270.html

  * igt@kms_chamelium@hdmi-hpd-with-enabled-mode:
    - shard-hsw:          NOTRUN -> [SKIP][9] ([fdo#109271] / [fdo#111827]) +15 similar issues
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-hsw7/igt@kms_chamelium@hdmi-hpd-with-enabled-mode.html

  * igt@kms_color@pipe-b-ctm-0-25:
    - shard-iclb:         NOTRUN -> [FAIL][10] ([i915#1149] / [i915#315]) +1 similar issue
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@kms_color@pipe-b-ctm-0-25.html

  * igt@kms_color@pipe-d-degamma:
    - shard-iclb:         NOTRUN -> [SKIP][11] ([fdo#109278] / [i915#1149])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@kms_color@pipe-d-degamma.html

  * igt@kms_color_chamelium@pipe-c-ctm-limited-range:
    - shard-iclb:         NOTRUN -> [SKIP][12] ([fdo#109284] / [fdo#111827]) +2 similar issues
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@kms_color_chamelium@pipe-c-ctm-limited-range.html

  * igt@kms_color_chamelium@pipe-d-ctm-0-75:
    - shard-skl:          NOTRUN -> [SKIP][13] ([fdo#109271] / [fdo#111827]) +4 similar issues
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl9/igt@kms_color_chamelium@pipe-d-ctm-0-75.html

  * igt@kms_color_chamelium@pipe-invalid-degamma-lut-sizes:
    - shard-glk:          NOTRUN -> [SKIP][14] ([fdo#109271] / [fdo#111827]) +3 similar issues
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-glk2/igt@kms_color_chamelium@pipe-invalid-degamma-lut-sizes.html

  * igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen:
    - shard-skl:          [PASS][15] -> [FAIL][16] ([i915#54]) +10 similar issues
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl9/igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl7/igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen.html

  * igt@kms_cursor_crc@pipe-b-cursor-suspend:
    - shard-skl:          [PASS][17] -> [INCOMPLETE][18] ([i915#2405] / [i915#300])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl8/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl2/igt@kms_cursor_crc@pipe-b-cursor-suspend.html

  * igt@kms_cursor_crc@pipe-d-cursor-dpms:
    - shard-iclb:         NOTRUN -> [SKIP][19] ([fdo#109278]) +8 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@kms_cursor_crc@pipe-d-cursor-dpms.html

  * igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge:
    - shard-skl:          NOTRUN -> [SKIP][20] ([fdo#109271]) +32 similar issues
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl7/igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic:
    - shard-glk:          [PASS][21] -> [FAIL][22] ([i915#2346])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-glk5/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-glk1/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
    - shard-skl:          [PASS][23] -> [FAIL][24] ([i915#2346])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl4/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl1/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@flip-vs-cursor-varying-size:
    - shard-tglb:         [PASS][25] -> [FAIL][26] ([i915#2346]) +1 similar issue
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-tglb1/igt@kms_cursor_legacy@flip-vs-cursor-varying-size.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-tglb2/igt@kms_cursor_legacy@flip-vs-cursor-varying-size.html

  * igt@kms_flip@2x-flip-vs-modeset-vs-hang:
    - shard-iclb:         NOTRUN -> [SKIP][27] ([fdo#109274]) +1 similar issue
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@kms_flip@2x-flip-vs-modeset-vs-hang.html

  * igt@kms_flip@flip-vs-expired-vblank@a-edp1:
    - shard-skl:          [PASS][28] -> [FAIL][29] ([i915#79]) +2 similar issues
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl8/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl2/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html

  * igt@kms_flip@plain-flip-fb-recreate@c-edp1:
    - shard-skl:          [PASS][30] -> [FAIL][31] ([i915#2122]) +1 similar issue
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl10/igt@kms_flip@plain-flip-fb-recreate@c-edp1.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl8/igt@kms_flip@plain-flip-fb-recreate@c-edp1.html

  * igt@kms_flip@plain-flip-ts-check@c-edp1:
    - shard-skl:          NOTRUN -> [FAIL][32] ([i915#2122]) +1 similar issue
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl3/igt@kms_flip@plain-flip-ts-check@c-edp1.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-mmap-cpu:
    - shard-iclb:         NOTRUN -> [SKIP][33] ([fdo#109280]) +11 similar issues
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-mmap-cpu.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d:
    - shard-skl:          NOTRUN -> [SKIP][34] ([fdo#109271] / [i915#533])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl7/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d.html

  * igt@kms_plane_cursor@pipe-d-viewport-size-256:
    - shard-glk:          NOTRUN -> [SKIP][35] ([fdo#109271]) +31 similar issues
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-glk2/igt@kms_plane_cursor@pipe-d-viewport-size-256.html

  * igt@kms_psr2_su@page_flip:
    - shard-iclb:         [PASS][36] -> [SKIP][37] ([fdo#109642] / [fdo#111068])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb2/igt@kms_psr2_su@page_flip.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@kms_psr2_su@page_flip.html

  * igt@kms_psr@psr2_cursor_plane_move:
    - shard-iclb:         [PASS][38] -> [SKIP][39] ([fdo#109441]) +1 similar issue
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb2/igt@kms_psr@psr2_cursor_plane_move.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb4/igt@kms_psr@psr2_cursor_plane_move.html

  * igt@kms_psr@psr2_cursor_render:
    - shard-iclb:         NOTRUN -> [SKIP][40] ([fdo#109441])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@kms_psr@psr2_cursor_render.html

  * igt@kms_sysfs_edid_timing:
    - shard-skl:          NOTRUN -> [FAIL][41] ([IGT#2])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl7/igt@kms_sysfs_edid_timing.html

  * igt@kms_writeback@writeback-pixel-formats:
    - shard-skl:          NOTRUN -> [SKIP][42] ([fdo#109271] / [i915#2437])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl7/igt@kms_writeback@writeback-pixel-formats.html

  * igt@nouveau_crc@pipe-d-ctx-flip-skip-current-frame:
    - shard-iclb:         NOTRUN -> [SKIP][43] ([fdo#109278] / [i915#2530])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@nouveau_crc@pipe-d-ctx-flip-skip-current-frame.html

  * igt@prime_nv_api@nv_i915_import_twice_check_flink_name:
    - shard-iclb:         NOTRUN -> [SKIP][44] ([fdo#109291]) +1 similar issue
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@prime_nv_api@nv_i915_import_twice_check_flink_name.html

  
#### Possible fixes ####

  * igt@gem_ctx_isolation@preservation-s3@rcs0:
    - shard-iclb:         [INCOMPLETE][45] ([i915#1373]) -> [PASS][46]
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb3/igt@gem_ctx_isolation@preservation-s3@rcs0.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@gem_ctx_isolation@preservation-s3@rcs0.html

  * igt@gem_ctx_persistence@replace@bcs0:
    - shard-apl:          [FAIL][47] ([i915#2410]) -> [PASS][48]
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-apl3/igt@gem_ctx_persistence@replace@bcs0.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-apl7/igt@gem_ctx_persistence@replace@bcs0.html

  * {igt@gem_exec_balancer@fairslice}:
    - shard-tglb:         [FAIL][49] ([i915#2802]) -> [PASS][50]
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-tglb5/igt@gem_exec_balancer@fairslice.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-tglb3/igt@gem_exec_balancer@fairslice.html
    - shard-iclb:         [FAIL][51] ([i915#2802]) -> [PASS][52]
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb2/igt@gem_exec_balancer@fairslice.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb1/igt@gem_exec_balancer@fairslice.html

  * {igt@gem_exec_fair@basic-deadline}:
    - shard-kbl:          [FAIL][53] ([i915#2846]) -> [PASS][54]
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-kbl4/igt@gem_exec_fair@basic-deadline.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-kbl2/igt@gem_exec_fair@basic-deadline.html

  * {igt@gem_exec_fair@basic-none-share@rcs0}:
    - shard-iclb:         [FAIL][55] ([i915#2842]) -> [PASS][56]
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb7/igt@gem_exec_fair@basic-none-share@rcs0.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb8/igt@gem_exec_fair@basic-none-share@rcs0.html
    - shard-apl:          [SKIP][57] ([fdo#109271]) -> [PASS][58]
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-apl7/igt@gem_exec_fair@basic-none-share@rcs0.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-apl1/igt@gem_exec_fair@basic-none-share@rcs0.html
    - shard-glk:          [FAIL][59] ([i915#2842]) -> [PASS][60]
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-glk9/igt@gem_exec_fair@basic-none-share@rcs0.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-glk3/igt@gem_exec_fair@basic-none-share@rcs0.html

  * {igt@gem_exec_fair@basic-pace@vcs1}:
    - shard-kbl:          [FAIL][61] ([i915#2842]) -> [PASS][62]
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-kbl3/igt@gem_exec_fair@basic-pace@vcs1.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-kbl1/igt@gem_exec_fair@basic-pace@vcs1.html
    - shard-tglb:         [FAIL][63] ([i915#2842]) -> [PASS][64]
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-tglb5/igt@gem_exec_fair@basic-pace@vcs1.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-tglb3/igt@gem_exec_fair@basic-pace@vcs1.html

  * {igt@gem_exec_schedule@u-fairslice@rcs0}:
    - shard-glk:          [DMESG-WARN][65] ([i915#1610]) -> [PASS][66]
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-glk2/igt@gem_exec_schedule@u-fairslice@rcs0.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-glk9/igt@gem_exec_schedule@u-fairslice@rcs0.html

  * igt@gem_softpin@invalid:
    - shard-skl:          [DMESG-WARN][67] ([i915#1982]) -> [PASS][68] +1 similar issue
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl2/igt@gem_softpin@invalid.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl2/igt@gem_softpin@invalid.html

  * {igt@gem_vm_create@destroy-race}:
    - shard-tglb:         [TIMEOUT][69] ([i915#2795]) -> [PASS][70]
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-tglb2/igt@gem_vm_create@destroy-race.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-tglb5/igt@gem_vm_create@destroy-race.html

  * igt@gen9_exec_parse@allowed-all:
    - shard-glk:          [DMESG-WARN][71] ([i915#1436] / [i915#716]) -> [PASS][72]
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-glk8/igt@gen9_exec_parse@allowed-all.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-glk2/igt@gen9_exec_parse@allowed-all.html

  * igt@i915_pm_rpm@fences:
    - shard-iclb:         [INCOMPLETE][73] ([i915#189]) -> [PASS][74]
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb4/igt@i915_pm_rpm@fences.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb7/igt@i915_pm_rpm@fences.html

  * igt@kms_async_flips@async-flip-with-page-flip-events:
    - shard-skl:          [DMESG-FAIL][75] ([i915#2634]) -> [PASS][76]
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl6/igt@kms_async_flips@async-flip-with-page-flip-events.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl7/igt@kms_async_flips@async-flip-with-page-flip-events.html

  * igt@kms_cursor_crc@pipe-b-cursor-64x64-offscreen:
    - shard-skl:          [FAIL][77] ([i915#54]) -> [PASS][78] +4 similar issues
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl3/igt@kms_cursor_crc@pipe-b-cursor-64x64-offscreen.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl8/igt@kms_cursor_crc@pipe-b-cursor-64x64-offscreen.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic:
    - shard-skl:          [FAIL][79] ([i915#2346]) -> [PASS][80] +1 similar issue
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl10/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl5/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1:
    - shard-skl:          [FAIL][81] ([i915#79]) -> [PASS][82]
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl1/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl5/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank@a-edp1:
    - shard-tglb:         [FAIL][83] ([i915#2598]) -> [PASS][84]
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-tglb2/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-tglb5/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html

  * igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1:
    - shard-skl:          [FAIL][85] ([i915#2122]) -> [PASS][86]
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl9/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl8/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-edp1.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-mmap-wc:
    - shard-skl:          [FAIL][87] ([i915#49]) -> [PASS][88]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl6/igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-mmap-wc.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl7/igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min:
    - shard-skl:          [FAIL][89] ([fdo#108145] / [i915#265]) -> [PASS][90] +2 similar issues
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl2/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl10/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html

  * igt@kms_psr2_su@frontbuffer:
    - shard-iclb:         [SKIP][91] ([fdo#109642] / [fdo#111068]) -> [PASS][92]
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb4/igt@kms_psr2_su@frontbuffer.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb2/igt@kms_psr2_su@frontbuffer.html

  * igt@kms_psr@psr2_no_drrs:
    - shard-iclb:         [SKIP][93] ([fdo#109441]) -> [PASS][94] +1 similar issue
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb3/igt@kms_psr@psr2_no_drrs.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb2/igt@kms_psr@psr2_no_drrs.html

  
#### Warnings ####

  * igt@i915_pm_dc@dc3co-vpb-simulation:
    - shard-iclb:         [SKIP][95] ([i915#658]) -> [SKIP][96] ([i915#588])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb3/igt@i915_pm_dc@dc3co-vpb-simulation.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb2/igt@i915_pm_dc@dc3co-vpb-simulation.html

  * igt@i915_pm_rc6_residency@rc6-fence:
    - shard-iclb:         [WARN][97] ([i915#2681] / [i915#2684]) -> [WARN][98] ([i915#1804] / [i915#2684])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb8/igt@i915_pm_rc6_residency@rc6-fence.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb6/igt@i915_pm_rc6_residency@rc6-fence.html

  * igt@kms_cursor_crc@pipe-b-cursor-128x42-sliding:
    - shard-skl:          [DMESG-WARN][99] ([i915#1982]) -> [FAIL][100] ([i915#54])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl5/igt@kms_cursor_crc@pipe-b-cursor-128x42-sliding.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl1/igt@kms_cursor_crc@pipe-b-cursor-128x42-sliding.html

  * igt@runner@aborted:
    - shard-kbl:          [FAIL][101] ([i915#2295]) -> [FAIL][102] ([i915#2295] / [i915#2505] / [i915#483])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-kbl7/igt@runner@aborted.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-kbl6/igt@runner@aborted.html
    - shard-iclb:         ([FAIL][103], [FAIL][104]) ([i915#1814] / [i915#2295] / [i915#2724] / [i915#483]) -> ([FAIL][105], [FAIL][106], [FAIL][107]) ([i915#1814] / [i915#2295] / [i915#2426] / [i915#2724] / [i915#483])
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb5/igt@runner@aborted.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-iclb2/igt@runner@aborted.html
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb6/igt@runner@aborted.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb3/igt@runner@aborted.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-iclb3/igt@runner@aborted.html
    - shard-glk:          ([FAIL][108], [FAIL][109], [FAIL][110]) ([i915#2295] / [i915#2426] / [k.org#202321]) -> [FAIL][111] ([i915#2295] / [i915#483] / [k.org#202321])
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-glk9/igt@runner@aborted.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-glk2/igt@runner@aborted.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-glk8/igt@runner@aborted.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-glk3/igt@runner@aborted.html
    - shard-skl:          ([FAIL][112], [FAIL][113]) ([i915#2295]) -> ([FAIL][114], [FAIL][115]) ([i915#2295] / [i915#2426])
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl6/igt@runner@aborted.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9549/shard-skl2/igt@runner@aborted.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl9/igt@runner@aborted.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/shard-skl2/igt@runner@aborted.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [IGT#2]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/2
  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
  [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109284]: https://bugs.freedesktop.org/show_bug.cgi?id=109284
  [fdo#109291]: https://bugs.freedesktop.org/show_bug.cgi?id=109291
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#109642]: https://bugs.freedesktop.org/show_bug.cgi?id=109642
  [fdo#110725]: https://bugs.freedesktop.org/show_bug.cgi?id=110725
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [fdo#111614]: https://bugs.freedesktop.org/show_bug.cgi?id=111614
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112306]: https://bugs.freedesktop.org/show_bug.cgi?id=112306
  [i915#1099]: https://gitlab.freedesktop.org/drm/intel/issues/1099
  [i915#1149]: https://gitlab.freedesktop.org/drm/intel/issues/1149
  [i915#1373]: https://gitlab.freedesktop.org/drm/intel/issues/1373
  [i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
  [i915#1610]: https://gitlab.freedesktop.org/drm/intel/issues/1610
  [i915#1804]: https://gitlab.freedesktop.org/drm/intel/issues/1804
  [i915#1814]: https://gitlab.freedesktop.org/drm/intel/issues/1814
  [i915#189]: https://gitlab.freedesktop.org/drm/intel/issues/189
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122
  [i915#2295]: https://gitlab.freedesktop.org/drm/intel/issues/2295
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2389]: https://gitlab.freedesktop.org/drm/intel/issues/2389
  [i915#2405]: https://gitlab.freedesktop.org/drm/intel/issues/2405
  [i915#2410]: https://gitlab.freedesktop.org/drm/intel/issues/2410
  [i915#2426]: https://gitlab.freedesktop.org/drm/intel/issues/2426
  [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
  [i915#2505]: https://gitlab.freedesktop.org/drm/intel/issues/2505
  [i915#2530]: https://gitlab.freedesktop.org/drm/intel/issues/2530
  [i915#2598]: https://gitlab.freedesktop.org/drm/intel/issues/2598
  [i915#2634]: https://gitlab.freedesktop.org/drm/intel/issues/2634
  [i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
  [i915#2681]: https://gitlab.freedesktop.org/drm/intel/issues/2681
  [i915#2684]: https://gitlab.freedesktop.org/drm/intel/issues/2684
  [i915#2724]: https://gitlab.freedesktop.org/drm/intel/issues/2724
  [i915#2795]: https://gitlab.freedesktop.org/drm/intel/issues/2795
  [i915#2802]: https://gitlab.freedesktop.org/drm/intel/issues/2802
  [i915#2803]: https://gitlab.freedesktop.org/drm/intel/issues/2803
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2846]: https://gitlab.freedesktop.org/drm/intel/issues/2846
  [i915#300]: https://gitlab.freedesktop.org/drm/intel/issues/300
  [i915#315]: https://gitlab.freedesktop.org/drm/intel/issues/315
  [i915#483]: https://gitlab.freedesktop.org/drm/intel/issues/483
  [i915#49]: https://gitlab.freedesktop.org/drm/intel/issues/49
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
  [i915#54]: https://gitlab.freedesktop.org/drm/intel/issues/54
  [i915#588]: https://gitlab.freedesktop.org/drm/intel/issues/588
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#716]: https://gitlab.freedesktop.org/drm/intel/issues/716
  [i915#768]: https://gitlab.freedesktop.org/drm/intel/issues/768
  [i915#79]: https://gitlab.freedesktop.org/drm/intel/issues/79
  [k.org#202321]: https://bugzilla.kernel.org/show_bug.cgi?id=202321


Participating hosts (10 -> 10)
------------------------------

  No changes in participating hosts


Build changes
-------------

  * Linux: CI_DRM_9549 -> Patchwork_19269

  CI-20190529: 20190529
  CI_DRM_9549: 71d1067baaab27385b5fcc81c2b789eb8d1ca92c @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5944: e230cd8d481ea28ccc11b554d7a34ffca003fb25 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_19269: 22c93cdd73c1fe518b8f881c07fe384061798d8a @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19269/index.html

[-- Attachment #1.2: Type: text/html, Size: 33089 bytes --]

[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression
  2021-01-06 16:08     ` Chris Wilson
  2021-01-06 16:19       ` Chris Wilson
@ 2021-01-07 10:16       ` Tvrtko Ursulin
  2021-01-07 10:27         ` Chris Wilson
  1 sibling, 1 reply; 18+ messages in thread
From: Tvrtko Ursulin @ 2021-01-07 10:16 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 06/01/2021 16:08, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2021-01-06 15:57:49)

[snip]

>>> @@ -1363,16 +1336,16 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>>>                        __unwind_incomplete_requests(engine);
>>>    
>>>                        last = NULL;
>>> -             } else if (need_timeslice(engine, last) &&
>>> -                        timeslice_expired(execlists, last)) {
>>> +             } else if (timeslice_expired(engine, last)) {
>>>                        ENGINE_TRACE(engine,
>>> -                                  "expired last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
>>> -                                  last->fence.context,
>>> -                                  last->fence.seqno,
>>> -                                  last->sched.attr.priority,
>>> +                                  "expired:%s last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
>>> +                                  yesno(timer_expired(&execlists->timer)),
>>> +                                  last->fence.context, last->fence.seqno,
>>> +                                  rq_prio(last),
>>>                                     execlists->queue_priority_hint,
>>>                                     yesno(timeslice_yield(execlists, last)));
>>>    
>>> +                     cancel_timer(&execlists->timer);
>>
>> What is this cancel for?
> 
> This branch is taken upon yielding the timeslice, but we may not submit
> a new pair of contexts, leaving the timer active (and marked as
> expired). Since the timer remains expired, we will continuously loop
> until a context switch, or some other preemption event.

Sorry, I was looking at the cancel_timer in process_csb and ended up
replying at the wrong spot. The situation there seems to be removing the
single timeslice-related call (set_timeslice) and adding a cancel_timer,
whose purpose is also not obvious to me.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression
  2021-01-07 10:16       ` Tvrtko Ursulin
@ 2021-01-07 10:27         ` Chris Wilson
  2021-01-07 12:52           ` Tvrtko Ursulin
  0 siblings, 1 reply; 18+ messages in thread
From: Chris Wilson @ 2021-01-07 10:27 UTC (permalink / raw)
  To: Tvrtko Ursulin, intel-gfx

Quoting Tvrtko Ursulin (2021-01-07 10:16:57)
> 
> On 06/01/2021 16:08, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2021-01-06 15:57:49)
> 
> [snip]
> 
> >>> @@ -1363,16 +1336,16 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> >>>                        __unwind_incomplete_requests(engine);
> >>>    
> >>>                        last = NULL;
> >>> -             } else if (need_timeslice(engine, last) &&
> >>> -                        timeslice_expired(execlists, last)) {
> >>> +             } else if (timeslice_expired(engine, last)) {
> >>>                        ENGINE_TRACE(engine,
> >>> -                                  "expired last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
> >>> -                                  last->fence.context,
> >>> -                                  last->fence.seqno,
> >>> -                                  last->sched.attr.priority,
> >>> +                                  "expired:%s last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
> >>> +                                  yesno(timer_expired(&execlists->timer)),
> >>> +                                  last->fence.context, last->fence.seqno,
> >>> +                                  rq_prio(last),
> >>>                                     execlists->queue_priority_hint,
> >>>                                     yesno(timeslice_yield(execlists, last)));
> >>>    
> >>> +                     cancel_timer(&execlists->timer);
> >>
> >> What is this cancel for?
> > 
> > This branch is taken upon yielding the timeslice, but we may not submit
> > a new pair of contexts, leaving the timer active (and marked as
> > expired). Since the timer remains expired, we will continuously loop
> > until a context switch, or some other preemption event.
> 
> Sorry, I was looking at the cancel_timer in process_csb and ended up
> replying at the wrong spot. The situation there seems to be removing the
> single timeslice-related call (set_timeslice) and adding a cancel_timer,
> whose purpose is also not obvious to me.

Yes, there the cancel_timer() is equivalent to the old set_timeslice().

After processing an event, we assume it is a change in context meriting
a new timeslice. To start a new timeslice rather than continue the old
one, we remove an existing timer and re-add it for the end of the
timeslice.
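
A minimal sketch of the resulting split (illustrative only; the exact call
sites and guards in process_csb()/the submission tasklet may differ):

        /* process_csb(): a context-switch event ends the current slice */
        cancel_timer(&execlists->timer);

        /* end of the submission tasklet: only (re)arm once ELSP is ack'ed */
        if (!READ_ONCE(engine->execlists.pending[0]))
                start_timeslice(engine); /* arms iff needs_timeslice() */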
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression
  2021-01-07 10:27         ` Chris Wilson
@ 2021-01-07 12:52           ` Tvrtko Ursulin
  0 siblings, 0 replies; 18+ messages in thread
From: Tvrtko Ursulin @ 2021-01-07 12:52 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx


On 07/01/2021 10:27, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2021-01-07 10:16:57)
>>
>> On 06/01/2021 16:08, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2021-01-06 15:57:49)
>>
>> [snip]
>>
>>>>> @@ -1363,16 +1336,16 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
>>>>>                         __unwind_incomplete_requests(engine);
>>>>>     
>>>>>                         last = NULL;
>>>>> -             } else if (need_timeslice(engine, last) &&
>>>>> -                        timeslice_expired(execlists, last)) {
>>>>> +             } else if (timeslice_expired(engine, last)) {
>>>>>                         ENGINE_TRACE(engine,
>>>>> -                                  "expired last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
>>>>> -                                  last->fence.context,
>>>>> -                                  last->fence.seqno,
>>>>> -                                  last->sched.attr.priority,
>>>>> +                                  "expired:%s last=%llx:%lld, prio=%d, hint=%d, yield?=%s\n",
>>>>> +                                  yesno(timer_expired(&execlists->timer)),
>>>>> +                                  last->fence.context, last->fence.seqno,
>>>>> +                                  rq_prio(last),
>>>>>                                      execlists->queue_priority_hint,
>>>>>                                      yesno(timeslice_yield(execlists, last)));
>>>>>     
>>>>> +                     cancel_timer(&execlists->timer);
>>>>
>>>> What is this cancel for?
>>>
>>> This branch is taken upon yielding the timeslice, but we may not submit
>>> a new pair of contexts, leaving the timer active (and marked as
>>> expired). Since the timer remains expired, we will continuously loop
>>> until a context switch, or some other preemption event.
>>
>> Sorry, I was looking at the cancel_timer in process_csb and ended up
>> replying at the wrong spot. The situation there seems to be removing the
>> single timeslice-related call (set_timeslice) and adding a cancel_timer,
>> whose purpose is also not obvious to me.
> 
> Yes, there the cancel_timer() is equivalent to the old set_timeslice().
> 
> After processing an event, we assume it is a change in context meriting
> a new timeslice. To start a new timeslice rather than continue the old
> one, we remove an existing timer and re-add it for the end of the
> timeslice.

I was trying to work out the resulting symmetry of the start/cancel call sites. 
We end up with a single start_timeslice call from the tasklet, but 
guarded with !pending. That looked confusing at first until I remembered 
you mentioned (or a comment somewhere already says) that the idea is to 
only start the timeslice after the csb ack.

That explains the transition from timer disabled to timer enabled.

Then, as long as there are contexts queued, the code relies on timeslice 
expiry, or re-submission with a change, to temporarily suspend the timer.
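
In other words, the timer lifecycle after this series is roughly (a sketch
based on the discussion, not the exact code):

/*
 * ELSP write            -> pending[0] set, timer left alone
 * CSB ack / ctx switch  -> cancel_timer() in process_csb()
 * tasklet, !pending[0]  -> start_timeslice(): arm iff needs_timeslice()
 * expiry or yield       -> expired branch in dequeue: cancel_timer(),
 *                          defer/resubmit, then a fresh start_timeslice()
 */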

It looks okay as far as I can see. Will tag the latest.

Regards,

Tvrtko

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2021-01-07 12:53 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-06 12:39 [Intel-gfx] [PATCH 1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Chris Wilson
2021-01-06 12:39 ` [Intel-gfx] [PATCH 2/4] drm/i915/selftests: Improve handling of iomem around stolen Chris Wilson
2021-01-06 15:12   ` Tvrtko Ursulin
2021-01-06 12:39 ` [Intel-gfx] [PATCH 3/4] drm/i915/gt: Restore ce->signal flush before releasing virtual engine Chris Wilson
2021-01-06 12:39 ` [Intel-gfx] [PATCH 4/4] drm/i915/gt: Remove timeslice suppression Chris Wilson
2021-01-06 15:57   ` Tvrtko Ursulin
2021-01-06 16:08     ` Chris Wilson
2021-01-06 16:19       ` Chris Wilson
2021-01-07 10:16       ` Tvrtko Ursulin
2021-01-07 10:27         ` Chris Wilson
2021-01-07 12:52           ` Tvrtko Ursulin
2021-01-06 13:01 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [1/4] drm/i915/selftests: Break out of the lrc layout test after register mismatch Patchwork
2021-01-06 13:03 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-01-06 13:30 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-01-06 15:10 ` [Intel-gfx] [PATCH 1/4] " Tvrtko Ursulin
2021-01-06 15:17   ` Chris Wilson
2021-01-06 15:28     ` Tvrtko Ursulin
2021-01-06 16:38 ` [Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [1/4] " Patchwork

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).